403Webshell
Server IP : 80.87.202.40  /  Your IP : 216.73.216.169
Web Server : Apache
System : Linux rospirotorg.ru 5.14.0-539.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Thu Dec 5 22:26:13 UTC 2024 x86_64
User : bitrix (600)
PHP Version : 8.2.27
Disable Function : NONE
MySQL : OFF | cURL : ON | WGET : ON | Perl : ON | Python : OFF | Sudo : ON | Pkexec : ON
Directory : /lib64/python3.9/site-packages/

Upload File :
current_dir [ Writeable ] document_root [ Writeable ]

Current File : /lib64/python3.9/site-packages/perf.cpython-39-x86_64-linux-gnu.so
[Raw binary contents of the file omitted: the viewer renders the ELF bytes as unreadable garbage. The recoverable strings identify it as a 64-bit x86-64 ELF shared object — the perf Python extension module (PyInit_perf) built from the Linux perf tools — whose dynamic symbol table carries libperf/perf symbols (perf_evlist__*, perf_cpu_map__*, perf_evsel__*, pmu_events__*, ui_browser__*, bench_*, …) and whose dependencies are libm.so.6, libbpf.so.1, libslang.so.2, libc.so.6, and libnuma.so.1, with version requirements GLIBC_2.2.5 through GLIBC_2.34, LIBBPF_0.0.1 through LIBBPF_0.5.0, SLANG2, and libnuma_1.1 through libnuma_1.3. The dump is truncated mid-file.]
b�L�-@�L�5_VD��fDH�EL��D��L��L��1��#��H��A�9]�L��H���1����H��[D�]A\A]A^A_�f.�DAVAUATUSH��H�$H�ĀH�5�HdH�%(H��$x1�I��H�=����H����H��H��L�l$pL�5���f�I�4$H������tH1�H��L��L��H���m����t�H���@�1�H��$xdH+%(u7H�Ā[]A\A]A^�L���@�H��I�D$��I�|$���1���_�ff.�@UH��AWAVAUL�oATSH��L��H��dH�%(H�E�1���L�cM��t5L�����H�E�dH+%(�UH�e�L��[A\A]A^A_]��L�I��L��L��H�������H��L��H���I��H�@!H��H%�H)�H���H9�tH��H��$�H9�u��H)�H����1�L�5�L��H��H������I���{�M��t)�A��J�<H�0H���H�H�����P�H9�u�L���U�I��H��t]H�E1�1�1�L��L���h�������H��P���L��L�s@���������L;�P�������L����H�CI�����@H�CL� ��H�L��'�����AVAUATA��USH��1�H��PdH�%(H�D$H1�������xkI��@�lj�L��A������	��H��~1�D��L��E1���H���H�A��A�݉����H�D$HdH+%(uH��PD��[]A\A]A^��A���������ff.�UH��AWAVAUATSH��H��L�?dH�%(H�E�1�I��L�����H��I��H�@H��H%�H)�H���H9�tH��H��$�H9�u��H)�H���FL��L��H��I����M��t#�K��J�,H�0H��H�H�����P�H9�u�M��HL��A�E_PATfA�E�P�H��H��t@���H�CH����L��H�{ugH�
e���H�5��H�=������DL��L�cI�<$H��t_L��P����L��L�k@������x9L;�P���u0I�<$�^�H�CH��t,H�E�dH+%(uaH�e�[A\A]A^A_]�I�|$I��H��u�H���!������V���H�{t�H�
����H�5�H�=��C��H�L����� �L���C������H�=U�g������H�=��g������H�=u�g����H�=�g�����H�=��g������H�=%�g�����H��H�5����H�=�h����H���gH���f�AUATUSH��H�$H��dH�%(H��$1�H��H��A���)��H��t]I��H��I���S�L��L����1��9�L��D��H��L���h���H��$dH+%(uH��[]A\A]ø���������ff.���H�����H��tH���f�H�=��gH���P�����H���s��H����H������H��H�5q���H�=��h���H�^�gH���f���H���C�H��tH���f�H�=�gH��������H����H����H������H��H�5���H�=.�h�5��H�~�gH���f���H������H��tH���f�H�=9�gH���P�����H�����H����H������H��H�5����H�=��h���H���gH���f���H�����H��tH���f�H�=Y�gH��������H��賿��H����H������H��H�5!���H�=&�h�5��H���gH���f���H����H��tH���f�H�=y�gH���P�����H���c�H����H������H��H�5����H�=��h���H���gH���f���H��胼��H��tH���f�H�=��gH��������H���S���H����H������AUATUSH��1�H��XdH�%(H�D$H1��H����x\I��@�lj�L��A������j��H��~�
1�L��E1��s������*���H�D$HdH+%(uH��XD��[]A\A]�@A��������k�ff.�������f���1����D��AWAVAUATUSH��8H�t$1�H�T$dH�%(H��$(1��w��������H�D$H�\$ E1�I��E1�E1�H�H���Ic�A��D���L��I9���D�L�@E����A���u�M���L��D�L$L��L�$�X�H���\fo�$�I��fo�$�fo�$�D�L$fo�$�B8fo�$�fo�$�BL8L�$fo�$BT8 fo�$B\80Bd8@Bl8PBt8`B|8pM��1�A�������H�މ����H��~L�,H��D�L�@E������@Mc�L��O�,>I�u�u�H��t}J�<8L��H�$E��H��$����L�$H�D$L�H�D$C�(L�(E��xWE1�����H��$(dH+%(uPH��8D��[]A\A]A^A_�D�k��D� A����L��A������E1�f�L������H�D$H�����D��ATUS��H��PdH�%(H�D$H1��8����xdI���A�غ@L��H�
���1���L���@�2�E1��H��@A��A���
���H�D$HdH+%(uH��PD��[]A\ÐA��������S����AUATUSH��H�$H��dH�%(H��$1�H��H��I����H��t]I��H��I���S�L��L�r��1���L��L��H��L���4��H��$dH+%(uH��[]A\A]ø�����������������f���1�����D��ATUSH��H�$H��dH�%(H��$1�H��H���*��H��tXI��H��I���SL��L�����1��:�L��H��L�����H��$dH+%(uH��[]A\ø�������������AUATUSH��H�$H��dH�%(H��$1�H��H��I�����H��t]I��H��I���S�L��L���1���L��L��H��L������H��$dH+%(uH��[]A\A]ø�������7�����ATUSH��H�$H�� dH�%(H��$1�I��H������H����H�l$H��I���ATH��L�U�1������XH��Z1�1��%���Ņ�xoH�����P���H��~�$<YtKA<1tE<NtY<0tUA�����������H��$dH+%(uEH�� D��[]A\�fD<nt<yu��E1���{��D� A����E1��A���������ff.���ATUSH��H�$H��dH�%(H��$1�H��H�����H��tXI��H��I���SL��L�����1����L��H��L���|�H��$dH+%(uH��[]A\ø�������q�����ATUSH��H�$H��dH�%(H��$1�H��A�����H��teH��H��I���SL����H��1��*�H��=�0D��H����H��$dH+%(uH��[]A\�fD�����������@��S���H��tAI��H��H�L�PH��g��1�L�	��H����XH��Z[�@���H��t�H��H�ʣI��R�D��H��I��H����PL���1���H�=��g�9�H���@��SH��H��dH�%(H�D$1����H��I�ؾH��H���1��������HI$H�T$dH+%(uH��[����ff.�����g�����SH��H��dH�%(H�D$1����H��I�ؾH��H�Ң1��s�����HI$H�T$dH+%(uH��[��,��ff.�����������ATH�=�qE1�UH����H��tH��H�����H��I�����H��L��]A\���ATI��H�=_qUH����H��t1H�
j�gH��L��1�H���:��H��A����H��D��]A\�@I�$E1�H��D��]A\�f.���AWAVAUATUSH��H�$H��dH�%(H��$�1�M��H�;QI��IE���I��I��L��$�H���PI��L�ùL�p��L��1����Y^��ta��
��H�満����H�G�L��L��H��1��j���H��$�dH+%(��H�Ę1�[]A\A]A^A_���ü��������$�sdt_t^H��L�
��gL���AW���[��L�
��gL���PAW�L��L��1�H������1��XZ�`���f.�H��L�
e�gL��1�SL����L��ATH�����AW����H�� �����������H���H�
d�H�2�L��L���Z�����������AWAVAUATUSH��H�$H��HH�-��hH�|$I��H�t$dH�%(H��$81�H�D$ H�D$(H��tH��H��H�����A�Ņ���H�5�2H�=�����I��H���H�D$0�D$0L�t$(H�D$H�\$ L�=f�L��L��H������H�����H�|$ � ����H��t�H�h� H������H��t��H�x�L��H�$�b�����u�H�$�y2�`H�yL���$��H��t��p�H�=�H�$�{��H���f���L���:���H�$H�=���4�V��H���A���H�|$0�H���;��H�|$ ����L�����H�=B�hH����L�����H�t$0H��H�� ����|$0��H�l$0H�����H;D$��H�|$H�PH��E1����H��$8dH+%(��H��HD��[]A\A]A^A_��H�� H���T���H9�s_H�|$H�PH�����@�� ���H�r�hH��H���K����(����H�|$�H���.������f�A������Q���������ATUH��H�$H��(dH�%(H��$1�H��H�t$H�=;��Ҷ����xVL�d$D�L$1��L���L�a����H��L���9���H��$dH+%(uH��(]A\���������<��f.�f���H���I��H�t$(H�T$0H�L$8L�D$@L�L$H��t7)D$P)L$`)T$p)�$�)�$�)�$�)�$�)�$�dH�%(H�D$1�H��$�L��H��H�D$H�D$ �H�D$H�e�g�$H�8�D$0�.���H�T$dH+%(uH�����a�����H���gH�8H���gH�0H���gH��f.���ATI��UH��S��������uL��[]A\��PA�ٺH��UL��L��1�H��������XL��Z[]A\�f���H���gH��H��H�8�b���f���H���H�T$0H�L$8L�D$@L�L$H��t7)D$P)L$`)T$p)�$�)�$�)�$�)�$�)�$�dH�%(H�D$H���gH��t+H��$��$H��H�L$H�L$ �D$0H�L$��H�D$dH+%(uH��������@��SH����~���H�[�hH�x�g�[�f.�f����+��AVAUATUS����Hc�I��L�4�L���x���I��H���|�XH�xI�V�L������H�
�����H��H�������~X1�1�@A�L�H��tA;L�tHc��A�L�H��H9�u�A�T$9�|&[L��]A\A]A^�fDE1�[]L��A\A]A^Ð1���H�
U���H�5S`H�=���ݰ��ff.�f����w����1���t&SHclj�H�<�胳��H��t	�X�[��ff.�@��H����N���H��tH��@����H�H�����H��t7H���������9�t&�J��tA��u�H��H���f�H���@���u�H�
���`H�5`�H�=�����H�
���mH�5A�H�=���ů��D��H��t/UH������J�9�rk����M9�t�ƒ��u�]�D����u�E��u	H��]�,������h��u�H��g��H�=J�H�������h�H�
����H�5��H�=��������AWAVAUATUSH��8dH�%(H�D$(1�H�D$ H���bI������I�I��H�H���Hf%����f���YE1�E1�L�l$ 1�1�L��L��L�$H�D$ ����H��H=���wiH�T$ �<-w]H��������H��rM<-L�$I���I�����Hc�I�IH�������u�L��I�t��
@H��H9�to9u�E1�L���ȸ��H�T$(dH+%(��H��8L��[]A\A]A^A_�f�f���)���E1�����L��H�5қ�1�������I���@D9���A�H����H��I9��N���L�|$ A���tI�GH�D$ A�WI��I�
H��DA��������{���L�������I���-���DH�D$ H�zL��1����I��H=��������H�D$ �<,���H�������H������L9�L�$���������@I�̸L��H�T$I)�L�L$I9�L�T$LB�H�$A�Ic�H���a��I��H�������H�T$L�L$I��L�T$H�$����腥��I���e������h��tI9���������E1��E���H��g��H�
�L�L$H�81�L�$販��L�L$L�$�_�h��Ƚ��I��������ff.���ATH�=g�USH�� dH�%(H�D$1�H�T$H�t$H�D$�5����xAH�|$����H�|$I���Z���M��t%H�D$dH+%(��H�� L��[]A\�D�T���H�ÉŅ�xq�S�����9�uE��t]Hc�H�<�����I��H��tE�X1��fDA�T�H��9���y���fD1�A�؉ڿH�5U��0�����u�E1��O�������ff.�@�������H��t9w�@Hc��D�����H��t�G�ff.����H��t�����f.���H��t1��t�fD����ø�f.���H�����D��H��t1�WE1��$�A������Hc�A��;t�t}��A9�|�A�����D���@D�@��f���H��t�W1���
�Lc�B;t�t}��9��1��fD��f.��H��ff.���H9�tGH����H�����uGHcO;Nu#��~*1��fDH��H9�tD�D�D9D�t��f.���f.�1��ff.�f���1�H��t�����ff.�f���AT1�I��U1�S輬���������u ��L���螬����L���t���9�|������[]A\�@���GA�������~
��H�D�D�D����H9���H�����uOH��tJD�N�OA9�>��~:��H�WL�D�1���H��L9�tHc��|�9:u�A9�u��@����AWAVAUATI��USH��(�v������^I��L��L���`������(A�NE�l$A�D
�L$Hc��D$H��親��I��H����E��L$�~���
�1�1��-f.�@�ƃ�A�T��@���D9�}2H��9���Hc�Hc�A��A�T�A�t�9�}Ɖ��A�T��D9�|�9�~9Ic�L�D$I�<��A��L$)�H��Hc�I�t��Y���L$L�D$A�A)�D9|$��L��D��L�D$�/���H�|$I��袱��L��躱��H��(L��[]A\A]A^A_��L��蘱��H��(L��[]A\A]A^A_�2���f�M���E1�1�1�@E�U�IcljL$A)�I�<�A9�L�D$D��D�T$H���HN�Hc�I�t����D�T$1�A9�L�D$�L$DN�G�|���E1�1�����H�
^���H�5|UH�=&�����E1��$���ff.���AVAUATI��UH��H��SH���C���L����H���0�������Mcl$�]A9�Hc�IM�H���}���I��H����E��~P1�1�1����D��9�}9D9�}4Hc�Hc�A�L��t�9���9�|�Hc�����A��9�|�f���u,E1�L����[L��]A\A]A^�DH��[]A\A]A^頵��L���H�I����E1���f.�@��Hc�ATUH��H��SH��1�H��t�_���I��H��t'Hc�)�1�H��Hc�H��H��H�����A�D$����L��[]A\�@��Hc�H���T>���Hc�H��H�D>�ff.�@��AVAULc�ATUH��SI�]H��H������I��H��tTH�S�H�x1�M��蔞��A�D$����E��~'1�@�޺����H��t�T�L��H���T���L9�u�E�t$A�$[L��]A\A]A^�fD��1���`����H��t7H���������9�t&�J��tA��u�H��H���f�H���@���u�H�
\��`H�5АH�=��T���H�
U��mH�5��H�=��5���D��H��t?UH��SH����"f.��J�9�������M9�t�ƒ��u�H��[]�@����u�E��u5�E��~ 1�@��H����;���H���s���;]|�H��H��[]�`�����h��u�H�O�g��H�=��H������Ēh�H�
3���H�5ǏH�=C��K���ff.����H��t�G�ff.���Hc�H���D>�H���H�P ��~qAULc�ATI��US1�H���L9hv7H9�s2H�H�PI��H��H�l(Hŋ}��x�:����E����I��$�H�P H��9��H��[]A\A]�@�ff.�@AVAUI��ATUSH���H�P ��~~Lc�L9pvaH��1�Lc�H��u
�RfDH9�vGH�H�PI��H��Hȋ|(��x-1�L��L���|�����u!H����KH�P 9�}H��L;pr������[]A\A]A^�fD[1�]A\A]A^�ff.�f����oH�?H�G�oNO �oV W0�o^0_@�of@gP�onPo`�ov`wp�o~p��H������H���H����ff.�@��SH�����N*H��tp�oH�H�@@�oKH �oS P0�o[0X@�oc@`P�okPh`�os`pp�o{p��H���ǀ�H���H���[�����w������ATI��U�պS������)A��I��$�H��tZ��~SLc�1�Lc�@��~:L�@1�fDI9�v H9P vH�H�xH��H��H��D(����H��L9�u�H��L9�u�E1�[D��]A\�ff.���H����@��~.UH��S1�H����H�������H���;X|�H��[]��ff.�@��SH��H����)Hǃ�[�ff.�@��UH��SH��H���H��t8�G��~!1�fD��H����K���H���;_|��*)HDž�H��[]����AWAVAUI��ATI��UH��SH��H���lM����H�����1�L��E1�踡��A��L��荠��D9���A�D$1ۅ���H���L;x��H9X ��H�H�PI��H��H�T(H�H�D$H���H9���H���H���vL;x��H9X ��H�H�PI��H��H�D�D(A���t}H��E1�H�uD��H���*A�T1�������H�|$H���A9\$�8���A�wL��I��轠��A������DA���H���R���H��D��[]A\A]A^A_�A��������A������h���DL�-�hM��������K���H��hI��H���l���A���E�t$L������H���D��������Y�����@L�%��hM���5���賥��H���hI��H�������fDE1��>����;���D� E��t�A���!���A���������ff.�f���H���t
���D����H���H����ATI��USH���H����H�H1�����H�P 1ۅ�~wfDH9�v\H9�sWH�H�HH��H��H‹T(��x=H��t#H9oviH9_ vcH�H�GH��H��H�D(H��I��$�I��$�H�P H��H�H9��H��9��s�����%IDŽ$�[]A\�D1�����AWAVAUATUSH��(dH�%(H�D$1�H��h�D$�0�F��D$H���H���zH���H���i�p �x�`�$H���H���PH���H�\$E1�H�\$H�H����H�P E��E1�������DL9���H�H�PD��I��I��H�T(H�违��A�Ƌ��xUH���H��t#L9gv|L9 vvH�H�WI��I��H�T(H�1�1�1�H�<$���H�t$D��H�<$���uYH���I��H�P H�HD9�~)H���I9��Q���D���1������1��@I��D9�����1���H��$�M����$H�T$dH+%(uH��([]A\A]A^A_ø���ڸ���������H���Hc�Hc�H;psQH;P sKL�H�HL��H��L��D(��x1H���H��t'H;psH;P sH�0H�PH�H�D(��1��D��H�O0�ȃ������PE�H�ʃ�H��҃�����rEփ�t
�������f.���AVA��A��AUATUSH��H��0dH�%(H�D$(1����Ic�A��H���H;p�#Ic�H;H �H�PI��H�8f�H��H��H�T(H�,I�D$0H�C CD�EE������I��$�H��tH;wsH;O sL�H�W���Ic�H��D���#H��~cH�$I�T$0H����tH�D$H�C���tHcȃ�H��H�K��tHcȃ�H��H�K��t
H�H��H�C 1��fD�#������H�T$(dH+%(��H��0[]A\A]A^�fDf�H�C ���C�fDI��$�H;r�H;J �H�BH�2I�l$0H��H�D�t(E����L������Lc�L���Ҹ��I��H���L��H��D����!H����@����I�D$��H�C�@��tI���H�SHcЃ�H��I�H�@��t
H�I��H�C�A��t
H�I��H�C L��肠��1�����H��I��H��H�D(H��H��]�������D�E����@������fD����W���@L������胰������[����5������L���ff.�����1Ҿ$��ff.���ATLc�1�UH��SH���1��˜���DL;` s[H�HH�I��H��Hȋ|(��xA1Ҿ$1�H��������u0H������|���H����P���9�~H���H;Xr������[]A\�D[1�]A\�f���H����@��~>UH��S1�H������u ��1Ҿ$H����H�����;Z|�H��[]�D1��ff.�f�����1Ҿ$�~�ff.���H����@��~>UH��S1�H������u ��1Ҿ$H���;�H�����;Z|�H��[]�D1��ff.�f���AUE1�ATE1�UH��SH��H���#@A9�}+D��H��$@H�����A��A��H������E��t�H��D��[]A\A]�f���H����@��H����@��H�G������tPATU��S��t<��I���@����+I��$�H��tI��Hc�H����I��$�H��t
1�[]A\�1��I��$��x��IDŽ$��ո���ff.���SH��H����<H���Hǃ��uǃ�[�f���H��H��1�@��tH�wH��uH������H��t��f�H�H9�s�H�H����f��H*�H��xuf���H*��Y�H��xCf���H*��^��
��f/�s�H,�H��뗐�\��H,�H�H�9?��DH���f��H��H	��H*��X��DH����f��H��H	��H*��X��v���f.�I�Ѓ�f�I��I	��I*��X��@���f���Hc�H��HwH�>H��t	�@�ff.�@���g���AWAVAUATU��SH��@��H��hHD�`�T$H��t)Hc\$L�$[I��I�H��L��[]A\A]A^A_��HcG0I��H�4$D��H�<@H���|I��H��t�A�G0��~.H�4$M��1��	�H�4$I��`1�D��L��L�4$�g��A;_0|�@��tM��hL���i���f.�M��`L���P������S�@H���GH�?H�H�@�,���H�{hH�C`1�HǃXH��)���`���H�[�ff.�@��H�H�FH�BH�H�6H�v�o�f���AT���I��H��tOI�$H�x@�@I�D$�@蚫��I�|$hD��I�D$`IDŽ$XH��)���`1����H�L��A\���H��HD�1�H�H9�HD���ATUH��S���H�����tk���H����蚙��H�}�A���H������H������Q�������H���H���H�} ���[H�E ]A\�@H���H�������twH���H9�������H���辞�����H���H���u���˅��H�}(�2����q���DH����̘�����H����1���H�}��������H������蓘��H�}H���賐��H���H���d��������H��H�����H��H��I���#���M�������L9�������A��$�M�$$L9�u����f�H����$���@H���H;�������m�������GH�W������H�wH�>H�VH�2�Gu�@���ff.���SH��H�蟗��H�{ 薗��H�{(�m���H�CH�{@H�C H�C([鋋��ff.���AUATI��UH��SH��H�H9�tI���;���L�����H�EH�}(L9�t����L���i���H�E(H�u�EH�H9�tDH���(���H��H�H9�u�H��[]A\A]�f���USH��H�_H9�tH��f.�H���X���H�[H9�u�H��[]����USH��H�H9�tNH���f�H�H9�t8H���H���H���ұ����y�H��D$�"����D$H��[]��H��1�[]����USH��H�H9�tH��H������H�H9�u�H��[]Ð��USH��H�H9�tH��H��萜��H�H9�u�H��[]Ð��H�H�@0�@��H�G`H��H�hHLJ�H��)���`1����H��ff.�f���H���Hc�Hc�H;PspH;H sjH�H�HH�T
(H�H�냵�F��aI��L�@H�pI��I��8J�L�`L��H�H��tH�AH�D�`J�T�`H�P���H����H���L������AWf�Hc�AVAUATUH��@dH�%(H�D$81�H���)D$)D$ H;P��Hc�H;H ��I��H��D��I�Ծ$�H�T$1�E��I�����L�D$��tS迤���8uzI��@0uq�E0tkH�t$� D���	���H���tSH�E0H�ƒ���H��H��H��L�D�L�D$D��D��H��L�����1�H�T$8dH+%(uH��@]A\A]A^A_ø��������@AWAVAUI��ATA��USH��HH� H�4$D��H�L$D�L$<袌��M�}�D$M9���HcD$<M��E��H�D$ I�E@H�D$(�D$;�A�^;����A���t�|$;��I����t$�@���D$����{H�$��D��L��PH��H�������L��$��H�t$Hc|$�I���H�|$H;x�����H�L$ H;H �����H�HE�H�8H�D�d(A�����H�$E�"�EH�H��tL�T$0H��D��L��L���L�T$0A�
1�D��H�5�v�L�T$0����H�$�L$H��L�T$0H�t$A��P��xTH��$��E��u����I��xf���A���H�|$(D�������谭����yH���H��H�����[]A\A]A^A_�H���D��H��IEXH�(1��?���A�F0u M�6M9��<���H��H1�[]A\A]A^A_��l$E��D�d$<L��L��D��������x�I���H�D$H;C�+���H�|$ H;{ ����H�{��H�H�D(I���H�D�{ �0���I���D��C$�N���M�6�C(M9�������^���f.�L��$�����fD1�D��H�5uD��L�T$0�:��L�T$01�D��$A��C����������H����	�~���fDI��p�l���@H��H��[]A\A]A^A_�@��USH��H��H� �:���H�{(�����H�H9�tO��1��
H��H9�t���u�H��H9�u�CD+C@9�|	1�H��[]�H�{@跑����y����@�CD1�+C@9�}���f.���AU��I��H��@ATU��SH����D��H���h���A�ą�x H�����H��IEXH�1�����H��D��[]A\A]�����H��@H�~�1�闟�����H��@�ã����USH��H��H��`H��t0�S0��~a1��@H��`H�DmH��H��H��,9k0�H��hH��t,�C0��~%1��H��hH�DmH��H��H���9k0�H��`�#H��H��h[]����H����ATUH��S�'���H���o���H�}H�H9�t(H�GI��H�CH�H�?H�薀��H��H�L9�u��EH�}�[���H�} �R���H�}(�){��H�EH�}@H�E H�E(�H���[H��]A\���@����AWAVAUATUSH��HL�o dH�%(H�D$81�H����H�~H����H�~��H��H�RshH�G8I��L��+���E�w���I� A��蛥������E�g0M�'M9�u�MDM�$$M9�t?A�D$0t�I��$�u�I��$�L��P �p�n���y�A����I�P�-L���%������]I�(贗��I� A�ʼn$�Մ��E��H�6nA�ƉD$��H�5�q1��D$,�7��E����H�D$,E1�H�D$H�D$0�D$0����E1�L�t$4H�D$�$�D$4��������D��E��A���f�A��D9$$tfH��E��E��H��t$D��H��L��AV�t$ �q���H�� ��t�L��A�����軉��H�D$8dH+%(�H��HD��[]A\A]A^A_�DE��A��D9d$�=����T$,A�O0E1�9�t�H�5�p1�1��?���DI�(�o���E�d��2���DI�(�W���I� A���{���E��H��l�D$��H�5�o1��D$,����E���\E1�H�D$0D�d$L�t$,H�$L�l$4E����A��D9d$t]H��E��E1�H���D$8����D��H��L���D$<����AVAU�t$�2���H�� ��t����DL���x������������D�d$�|$���D$L�t$,L�l$4A�D��D$H�D$0H�$�D)�D$H��D��I���
���;\$tD�D$0����E1�H����D$4����H��L��L��AVAU�t$�D$(D����H�� ��t��
���A�O0�T$,9�uE1�����H�5�n1�1������1��,���A������E1��0�������ff.���H��8��dH�%(H�D$(1�H��H�T$H�D$H�D$H���H�D$ H�Loh�0��H�t$H�G8���H�T$(dH+%(uH��8�蛛��ff.���H��tH�FX�f���tH��x�@H��p����H�1�H9�tf�H���H���H9�u������G��t-H�H9�t-H��1��H���H���H9�u���1���ff.����AT1�SH��H������H��tVH��E1��fDH������H��H��t(H9��u�1����H����A��ؒ��H��H��u�H��D��[A\�DE1�H��D��[A\�f������u
�Ɔ�u�D�[�f.��UH��H��H�cH�}H�EPH��t+H��mh��E�t譟��H�E�E�����EH�E@H��tH��]���]�ff.����G�����W0H�O@�GH��tH�~X�ff.�f���H�	mh��G�D�ff.���ATE1�A�̹U��H��lhS�FH��A��H�G�G1�D�T�L�����H�H���t�k1�D�c[]A\�H���ff.�f���H��t����f��ff.�@��H���W�����O9�t�J��t0��u�H������u�H�
Dh�`H�5iH�=gh�{��H�
=h�mH�5�hH�=Xh�}{��ff.�f���H��H�?t�G��tW�W�fD�J�9�r&����O9�t�ƒ��u�H���f���u�H�����H�
|g��H�5�hH�=�g�{��H�
ug�MH�57hH�=�j��z��D��H�H�����0uH�WH�H���G��t��H�H��H;Wu�H��8u�����f����G���aAWAVAUI��ATUSH��H�H��H��jhH�GH��L�'�
�W0����I�E(H)�I�] I;E8�Ic}E1�H�wH�|$I��H9�������I�H�4f1�I��L��H�5%j�A�����I�E H��L�=gH)�L9�r*�z�H�L��1��H������H��I+E L9�s_H�l$H!�L��Ef��u�H�5j����I�](E1�H��D��[]A\A]A^A_�f�H��H������H#D$I�,@1�H�5�i��e��H��I+E L9�v��EH)�랸�����A�����뗋lih��tI�mL��蜑��A������v���H���g�3�H�=�hH��1����'ih�D���G��tH�H��H�G�fD���G���1AWAVI��AUATUSH���0��H�W(H��hhI�.�I�F )�A�ǃ���H�IcNH��H!�H��L�D=A�XH�\$I��f����9���H�H�H!�H9�tkI�~HI;^Pv$H��谟��H��H����I�FHI�^P@A�vD����!��D9�AG�D!�H�A��A�L�����H��L�A)�u�M�FHH�D$IF I�F A�~0uI�F I�FH��L��[]A\A]A^A_�f�H�H��H�W(���DE1��1��ff.�f���I��H��I�0H���)�F(�AWE1�E1�E1�AVE1�AUE1�ATUSH�D$���@��ty��3D�f0A�҉ŋFA9�tuD�NH�FD�V(H�GH�V H�WA��t6H9�t11H�� ��D�v4D�^2H	�L�n8I��A�� tL�~PH�FXH�D$�A���NH�^u�[�����]A\A]A^A_�H�wL�OL9�t2M)�L#D$�D��K�8I��H��H��A��@HE�I�L�M�H�wL�O�@I�� 1�D)�I	�I��I�I�[]L�A\A]A^A_ø�����f.����H����_���ff.�@��SH��H�?耀��H�[����AWAVLc�AULc�L��M��ATI��L��USI��H��H��(���H��tL��L�`A��L�8L�pL�h Hc�H�XH��[]A\A]A^A_�ff.�f���H�WH�W1�H��(�o����������AWAVAUI��ATUSH��H�T$H��tr����I��I���f�t+M�M)�tFL��L�����t*�ew��I��M��y�����8t�H��L��[]A\A]A^A_Ð�+���I����fDL��H+L$L9�uM����H�
,a�H�5�aH�=De�_t��ff.�@��H��H���*���f.���AWAVI��AUATUSH��H��t^A��H��I��H����t.H�I�H)�t;L��H��H��D��袄��I��H��y������8t�H��L��[]A\A]A^A_�fDM������H��H��1����f.�SI��I�ѹ��H��dH�%(H��$1�H��H���{��H��gI��H�
�dH��d�H�81��p��H��$dH+%(u	H��[����ff.�f�PXH���I��H�t$(H�T$0H�L$8L�D$@L�L$H��t7)D$P)L$`)T$p)�$�)�$�)�$�)�$�)�$�dH�%(H�D$1�H��$�H��L��H�D$H�=�cH�D$ �$�D$0H�D$�������r��ff.�AVAUATUSH��H�$H��0dH�%(H��$(1�H���a�H�����SI��</�pL��$ �L���[���H����H�=c���I��H��tL��H���s������{L���|��H��H���|��I��H�DH=����I�<�H)�H�5���v����I�UH��H)�I�|L�-���v��I�,$M��L��H�
ϊ�L��H��HE�1��0}������IH����{��I�,$L��L��H�
��L�%�H��HE�1���|������
H��$(dH+%(��H��0H��[]A\A]A^�m{��DH��$(dH+%(��H��0[]A\A]A^ÐL�7H�
�L�-
�I��L��M��IE�1��f|������L���{���1���H�=1e�4���@H��L���|��H��$�L���|�����b���H�$H9�$��P���H�D$H9�$��=����L��L����k���(���辌��H�=�c1����H��<H�=a1����f.���H�ńgH�8H�=�`H�P�H�pH�H����D��H��I��dH�%(H�D$1��?/H�$tHH�q�gH��H�
ЈH��`�L�@1��0{�����t9H�$H�T$dH+%(u H���f�H�D$dH+%(u	H��闊���ҋ��H�=�b1�����@��ATSH��H��t@�?I��t8�y��L�H��s�f�H��I9�w
�;/u�I9�vH��L��[A\�f�E1�H��L��[A\�f�H��L��L)�肚��H��^hH��t�L�cH��L��[A\����H�e�gH�=�^hH���H�x魇��ff.�f���H�=�^hH��t髉��SH� �gH�{����H��t�8u
H�{[鳉��H��[��f.���AUL�-*_ATL��USH��dH�%(H�D$1�I��谇��H�$H��� o��L��H��H���2���H�5�]hL���#���H���x��H�,$H�
�L��^H��HE�H��tI��1�H�̤�L���4y�����t=H����w��H�4$L��薆��H�<$�w��H�D$dH+%(uH��[]A\A]�H�=�`1�����迉��ff.�@��AUATUSH��H�ہgH�L�(H��txH��1���H��H�|�u�zHc�H���[o���L�(I��I��H��H�\�H��u�H��I�L��L����k��L���v��H�������[]A\A]����n���L�(I��뷐��H��hI��H��$8H��$pH��$H�L��$PL�D$ L��$XL��$0H��$@dH�%(H��$(1�H��$p�D$H�D$�L�L$H�|$ �&��ʃ�L�H�I�T��H��t&H��H��!tQHc���/v�H��H��H�I�T��H��uڃ� t0H�D� L���?���H��$(dH+%(u3H��h�@H�ɁgL�ѾH�"`H�81��h���������ԇ��@��H�?H�6H��H��鉈��f�AWI��H�=M\AVAUATUSH��HdH�%(H�D$81��F�D$(謄��H��tOH�Ǻ
1��8��H�=\f�D$0臄��H��H��t'1��
���f�|$0f�D$2tf����D1�H�T$0�T��f�����X�D$0P�O�|$(9��0���Hc�H��I�G1�H�D�H��D$,�����D$��L�-w[H�D$ H�D$f�1�H�5����i�����~HcD$,D�d$E1�H�l$H�D$L��A��A�H���GA�O��D9�tD��H9�GT$(H�CL��A��H��1�H���Xi��Hl$Dd$E9�t	H�CH9�w�H��D��I��H�X�g�
H�0�S���H�t$H�FH;t$ tH�D$�9���f�H�D$8dH+%(uKH��H[]A\A]A^A_ù�����f�|$0������D$2f�������������������者��SI��I�ѹ��H��dH�%(H��$1�H��H���p��H�gI��H�
�YH��Y�H�81��e��H��$dH+%(u	H��[����ff.�f�PXH���I��H�t$(H�T$0H�L$8L�D$@L�L$H��t7)D$P)L$`)T$p)�$�)�$�)�$�)�$�)�$�dH�%(H�D$1�H��$�H��L��H�D$H�=�XH�D$ �$�D$0H�D$�������g��ff.���AUI��ATI��USH��H�z	H���j��H��tcL� L��H�xL��H���F���H�KH�B�D%H�QH9�vHH�t@0H�{H��H9�HB�H�3H��葎��H��t;H�KH�CH�QH�SH�,�H��[]A\A]�DH�CH�SH�,�H��[]A\A]�H�=6X���DAWAVAUATUH��SH��H���H�|$H��dH�%(H��$�1��ir��H�D$(H���QI��H���kH���q��H�D$�D$L�l$(1�I�ؾH�
�H��WL���r������IH�D$0L�5{�H�$@L���0���H����H��H�X�p��H��H��H���}h����u�L�|$(I��L��H�
+L��M��IE��q�������L���2p��H�4$H�|$(�Tq����u��D$H�������p����@�h���H���(p����+T$H��v
�|�.exeu��HcD$H�|$Hc�H�4�If��L���a���H���1���L���@���H�|$(�o��H��$�dH+%(u>H���[]A\A]A^A_Ð�D$H�-{VH�D$���H�=�X1����老����ATI��I�|$USH�t,1�1�DH�<��'o��I�|$H���]H��I;\$r��o��[]I�$I�D$I�D$A\�f���AWAVAUATUSH��L�wM����I��I����H�_���@�EH��L9�sBD�m�H�<�N�<�H��I�w�|�����u�L���pn��I�\$M�t$�EH��J��L9�r�M��tv1�1�1��fDH�É�QH��L9�s��H��H��t�p9�uݍQ��H��L9�r��I�T$9�v-)�H��H�<�1�L��[]L��A\A]A^A_�]���H��[]A\A]A^A_�f�H�_�i������AWAVAUATUH��SH��(L�oM��M����I��E1�E1�1��fDI����H��L��L9���M9w��H�UH��H�$H��H�T$H�xI�GJ�4�H���)�����y�H�CI�t$L9���H�T$H�D$J��H�t$J�<���l��H�UH�$L�mL�d$H�H�H�L��H�H�\$L9��_���L9�v*H�EL��@H�<��l��H�EH��H��H9]w�L�eH��([]A\A]A^A_�fDI��H������DL��L9�t�I9�v�H�U�J�<�N�,�I���;l��H�UJ�*H�H��H��H�H�EH9�w��S�����AWAVI��AUI��H�=�RATI��USH���I{��I���b��H��H��t/H��L��L�����I�~I�v�H�
vg�B|��L���w��M����L���|��H�D$I����L�{�:L����y��H��H��t�H��tH��L���\~����tL��L��L���
���H��u�H�|$�;k��I�|$I�t$�H�
�ug�{��L���(w��H���k��H��L��L��[]A\A]A^A_�p�����AWAVI��AUI��ATI��USH��H�vH����I�}1�1�1�fDH�lj�H�H9�B�BH��H9�r�I�|$H��t/M�D$1�1�I����H�H9�B�BH��H9�r�H�����@a��L��H�5gQH��I��H�D$1��_��H�5dQ�1��|_��L���Tj��L��H���Ij���H��vgD�x�tf.�H�3�-�sw��A��s�H�3�
�`w����L���V�H�3�
�Iw��H�|$�i��I�|$uH��[]A\A]A^A_��H�vgL��H�5T�1���^��H�5<T�1��^��L���i��A��t�H�3�-��v��A��u�H�3�
�v����L����H�3H���
[]A\A]A^A_�v��H�z1�H���u����M���@��AUATUSH��H�oH��tIL�gH��E1�1��A�EI��H9�s,I�4�H��H���{����u�H���[]A\A]��H��1�[]A\A]���H��H��tgH�8虊��H�rtgH�8芊���� Y����Y��H�=/NhH���&X��fD��H���E1�dH�%(H��$1�H�T$H��L��$�H��H��$�H�$HDŽ$��H�H�׹1��H��L���\��1�H�5OH�=O�v��H��$dH+%(uH�����y��f���H��sgU��H�8蚉��H�ssgH�8苉����!X����X��H�=0Mh�+W�����g����]�t��ff.����H��qgH�x ���H�=5Mh�@��USH��dH�%(H�D$H�aqgH�x �Hv��H�MhH��tL���Y��1�H��T��^X����tV�;�9H�D$dH+%(�WH��[]�fD�H���sY����t�H��T�1��X��H��u�D$��LhH��u�H�=�M�u��H��H��u�H��M�H��謁����tH�wM�H��蔁�����[���H��gH�=�Kh� LhH��KhH�����H�τg��Kh����H��Kh詇���������=�Kh��QT����X����uK�=�Kh�XV��H�=����c��H�=����� !����H�5�LH���ix����������P���@�=>Kh���S����qw������bKh�D��H��H�=L�|t��H��u�9Kh�ҍB��ND�H�����
1�H����n��H���@���tRATI��U��SH��fD��t9kt/H�{M��tH��tL���w����t�C`H��`��u�[1�]A\�H��[]A\�1��DAUATUSH��H���?�
�G0��u�u� tH��[]A\A]�L�%IpgL�-���1�L��I�<$�V���KHc���!I�<$L��1���V��H�H�H�KH��t+�CI�<$����I�<$H�[K�1��V��H�HŋI�<$�����P�����C0H�K��H����H�8K��tH�{H�K���1��KV��H�H�fDH�����)�L�K I�<$1��L�"rH��J�
V���C1����L�K(I�<$H��1�[L��q]�A\H��J�A]��U��f�I�<$H�fJ�1��U��H�H��������	���\���H�LJ�C0tH�{H�-J���1��qU��H�H�H���,���I�4$�
�b�������f.���
�����f�H�-Yng�
H�u��a��H�K �9���H�}H��H��j1�[�]A\A]��T��D�C0�������T����H����1��T��H�KH�H�H���������fDH�]I�=���@H�:I����@H�WI����H�{H�6I���H�0I������AUI��ATI��USH��Hc_���=w�x��H���Icm���=v_��t��{D��)�tH��[]A\A]û{��u�I�|$I�uH��oH��HD�H��HD�H��[]A\A]��s��f.��x��H��,��SI��I�ѹ��H��dH�%(H��$1�H��H���]��H��lgI��H�
GH�G�H�81��>S��H��$dH+%(u	H��[��}r��ff.�f�PXH���I��H�t$(H�T$0H�L$8L�D$@L�L$H��t7)D$P)L$`)T$p)�$�)�$�)�$�)�$�)�$�dH�%(H�D$1�H��$�H��L��H�D$H�=6FH�D$ �$�D$0H�D$�������T��ff.�AWAVAUI��ATUSH��(H�L$H�t$�T$�|T��L�5igH�NkgI�H�$H��t%H�8H�G�1��Q��I�>�$_��I�H�$I�MH��F�I�]H�81���Q��I�ML�-�FH��u-�~f�H�$H��L��H�81��Q��H�H��tS�9u�L�M��tFL�=4EL�5�L�-um@H�$A�8L��L��IEξH��H�81��CQ��L�M��u�H�D$���tH�$�
H�0��]��H�D$�H�\$1�E1�DI��A��tfDA�U`I��`A����u�I�}XD��L�����4@��Hc��z��I��H����Hc�H��H�<@D��)�H��H�L�H�@H���K~��I�]XH��t
�D��M���Mc��AoEK�H��L��AoMH�AoU P �Ao]0X0�Aoe@`@�AomPhPA�$���|M��L��L�5���1��
f�A�E����t&I��`��u�Hc�L��`�m��A�EL��1҅�u�Hc�L��`M���m��A�$��td@H�L$H�����A����H�	��H�YL�l�L�;A�A�O<-����uOA;D$tuH��I9�u�I��`A�$��u�H�$�
H�0�\��H��(L��[]A\A]A^A_�y\��f�I�|$H��tL���^o����tI�|$ H��t�L���K��H��t��t$L��I��`���뉐��A;D$t�-�e���I�|$H���W���I�w�o�����F����@L�d$M��ME�����L��1�L�5[������fDH��H��ggI��L���uJ��H�Ou!H��C�L��1��kN�������H��ÐH��C�L��1��JN�������H��ËOH�dC�1�L���'N�������H���ff.�f�H��H��H�pH��tH�@H��tH�11��fD�p�G0t��t*L�M�@A�8-t��~(H����pH�rH�0H�r�f�H�w@�f.�H�5C��@AWAVAUI��ATA��UH��S��H��dH�%(H��$�1�H�D$��t;H����F0�t,H��$�dH+%(��H�5C��fD�F0� �UE��A���@tI�U H9�t	H����I�m E����I�}���U��vx��u|��s��uH�UHH��t�����6��,�tI�}�H�L$D��H��L���k������1�H�t$H��U8��E���@����A��A���/��L�T$ L�M(��H��AL��AL�׺LD���1�L�T$�t��D�u0L�T$Ƅ$���D����������D���A���I���1�}�7���/1�D��H��L��L�T$���L�T$��A���
f.�H��$�dH+%(�H�ĸD��[]A\A]A^A_�@H��$�dH+%(��D��H�5AH�ĸH��[]A\A]A^A_�v���fD�M�Q�����H�
z>Hc�H�>���H��@L�׺�H�5�@�L$HE��q���L$I������H��$�dH+%(�DD��H�5�@�c���f�I�}����DH��cgH�8E����H�MM�Ѕ�t"H��@�1�E1��LJ�������H��@�1�E1��*J�����DH�U1���u�������f.�1���u�E@H�U��l���@H�UH�E@�
������!ȉ�K���H�E���H�EHH���2�����*���f.�����tI�}��H�L$D��H��L���f������H�|$�?-�<H�]H�t$�
�p��H�H�D$�8�����@D��H�5�?H�����A�����fD�����tI�}�@H�L$D��H��L���������~H�|$H�t$�
�?-���_��H�U�H�D$�8�5����x���H�M���LH�1�H�UHH��t���tH�UH�H��t	�8��A�����fD1���uH�E@H�UH�����f������tI�}�xH�L$D��H��L����������H�|$H�t$�
�<���f.����`�tI�}�nH�L$D��H��L���������fH�|$H�]H�t$�
��^��H�H�D$�8�%����h�������tI�}��H�L$D��H��L���f������H�|$H�]H�t$�
�&t��H�H�D$�8��������E���9D�JE���,L�T$ L�6?��1�L�׺��L�T$�o��L�T$D��L��H��A������G����b���f�D��L��H���2���A���J���f.��MM��H��<1��E1��F�������H�EH������H�E���1�1�H��U8��E�����D�tI�}��D��H��L���+����ËE0����	�E1��
���H��$�dH+%(��D��H�5;>����L�JM��tqL�T$ L��;��1�L�׺��L�T$�Kn��L�T$����H�EH�U@E1���/���H�EH�U@E1�H�����H�����H�U@H����D�J�Q���H�U@H�EE1�H����A����������d��H�=�=1��:�f.�D�E��t@H�W����N`H��`��t%9Fu�1��ztH�BH�G��n���fDH�vXH��u�������f���PXH��H��t	1�1��x����F��ff.���AUATI��UH��H���H�L$8L�D$@L�L$H��t7)D$P)L$`)T$p)�$�)�$�)�$�)�$�)�$�dH�%(H�D$1�H�=)[gH��H��$�H�D$�H�D$ �$L�/�D$0H�D$�r�����t&L���&Q��H��t1�1�L��H������F��H�=_:1���f.���AWAVAUATI��UH��SH���L$H����H��\gH�I��1�L�oH��8�H�;�tC��I�NL�5�8H��u �pf�I��L��1��MC��I�MH��tR�9H�;u�M�EM��tAL�=�6L�5�zDA�8H�;L���H�
_IE�1�I���B��M�EM��u�H�3�
�O���UL�5z9��t �|$t0A�$9E���E`H��`��u�H�������[]A\A]A^A_�L�}M��t�L����O��L��L��H��H���G����tj�L��L��H9�HF�H���G����u�M�|$L���O��H�}L��H���G�����v���1�H������g���f�1�H������Z����1�H����L���^O��H���|���fD��AWAVAUATUD��SH��H����|$H�T$H�L$ L�D$dH�%(H��$�1�H��t
I�8�XD�l$A���f�A��L�{��
HDŽ$�A��A��L��$�D��$�H��$�D��$���$��$���
�l��E��t/M�7A�>-tzH��$�Lc�$���$���D��$�Ic�Ic�L��H��H�<��^\��C�D%Hc�H��H��$�dH+%(�EH���[]A\A]A^A_�DA�F���y���<-�cI��L��$�<hu����L��$�H�t$L���.������������������L��$�M����L���M��I��H��vRA�>nuA�~ouA�~-�X
H�D$H���t1H�{H��tL��L���4E�����+
D�[`H��`E��u�H�\$��uA�<$h��	H��L��������������������L��$�M��u�H�\$��fDA�D$N�4㉄$���$�I��L��$�D�h�D��$�E������H��$�D��$��7���A�~�rM�n���IH�5�5L���B_������H�5�5L���+_�������=L���vZ��H�D$0H����H�D$0L�t$8�D$TH�\$H��H���D$PL)�H�D$`I�FH�D$(H�D$H�t$HH�D$XL�|$@M��A��D�3E��u7H�|$(��H�|$��H�[XH��u�L�t$8L�|$@D���@H�kH����H���K��H��L��H��I���ZC���…���K�'A���������<=��H��D��M��H��$�H��$�H���=��������������ZL��$��-���L���J��L��L��H����B�����oH�D$H�����t$PH�D$(H�\$�t$T�D$P�H��`�����H��TgL�L�l$xH�
;YH��3�L��H�D$xL�1��K������'H�D$ L�=wL�M����H�D$ L�=�vL�`�KDL�t$xL���L��H�
�XL�s3M��IE�1��,K�������L��I����I��M�D$�L�t$xL���L��H�
�XM��IE�1���J�������L���I��I�<$�y���L�d$xL���L��H�
CXL���eM��IE�1��J�����t@L���HI��H�D$xH�t$H��f���fD1�M�F�H��2�iJ�������H�=h21����f�L��$�M�7��$�tzHc�$�HDŽ$��P��$�H��$�L�4�����@H�t$H�|$�L���aX�����M���H���������H��$�L�0H�=*RgH�
<WH�/H��HE�A�~-����H��$�H�2�D�1��I���������H���H��H�t$H�|$��f���H�5�1L���Z�����CH�5WwL����Z���������H�|$H��tH�t$H��$�1��<����<����Y��DA�������}n�L�}o�B�}-�8L�uL���G��L��L��H��H�D$h�q?���������H�T$hI������fD�<=���������Hc�$�L�t$8D��L�|$@�P��$�H��$�L�4��U���H�\$M��H�t$H�|$L���V��H��$��P��uH�PH��tH�t$1�1��xV������@H�t$H�T$x1���D$x�SV������fDH�t$H�|$1�L���4V��H��$��f.�L���F��L�H�D$0�C���H�T$HL��H���H>�����<L���XF���L��H�=�/H9�HF��>�����N���A�?n�s���A�o�h���A�-�]���L�t$XL��H��L����=�����RK�&�����t$P�|$TL��M��H��TL�/��H�t$(H��IEЅ�L�NH�t$LD�H�|Qg�v�H�8R1�H�?1�28��AXAYfDH�t$H�|$1�L���U�����H�|$H�޹L��H�\$��T������H��$�L��$�H�\$H�x���U��I��-M�7���H�D$�t$PH��HDD$(Dt$TH�D$(H�D$0�t$T�8t@H�D$`�D$PH�\$H��$��#����T$PH�t$H��$�D��M���������D$PH�\$��L�t$XL���D��L��H��H���m<�����������D��M���8���H�D$H�l$H��-D�E��t#H�UH��tH�޿1��d9���M`H��`��u�H�zPg�
H�0�uQ������8��H��$�D��$���$������A��I�����H�\$�D$P�&���H�|$H�����H�t$H��$���	�����H�}OgH��M��L�",H��+�H�81��*6�����H�\$�q���H�|$ �6���H�D$ H�H���%���H��H�-epH��H��1�H���]8��H�S�H��u����H�=`.1���H��NgL��H�{.H�81��5������7��D��$�L��$��2���f���E��I��1���b��ff.�@��A��1�H�WE��u�v�
t�A��ɹ����O��1��@�ɍA�H��1��ff.���S���T��H��t	X0[�f.���UH��SD��H���+��H��t��H�h(��	X0���X0H��[]�f.�f�AVAUATUSH��H��D�g(dH�%(H��$�1��G, u4��L�l$��D���{D��L���P����u@��u��x@A����9Ct}�K, D�c(H��$�dH+%(��H�ĠD��[]A\A]A^��Q���8��t�H�t$��A�����=��H��*�H��H�MgH�81���3���D�T$A�����Ѓ���<�i���A�������Z�����A�Ā��K�����A��A����DE��6�����R��ff.�SI��L��*���H��dH�%(H��$1�H��H���a=��H�ZLgI��H�
�&H��&�H�81��
3��H��$dH+%(u	H��[��IR��f�PXH���H�t$(H�T$0H�L$8L�D$@L�L$H��t7)D$P)L$`)T$p)�$�)�$�)�$�)�$�)�$�dH�%(H�D$1�H��$�H�=&H��H�D$H�5{)H�D$ �$�D$0H�D$�������]4��ff.�f���AVAUATE1�UH��SH��dH�%(H��$�1��G,�u�O����1ۨ�E1�u�E����1��a���Z���E����E,��ZE�����}�����qE���}�����������}��~��#-���}�;/��H�}H��t
�,������H�E H����L� M��u+���+K��H�E L�`H�PH�U M����L��=�uL��L��H��u��W����fD�U�����H�|$�C�������D$��E�E,����fD�E���q�}�������6�}����E���4H��$�dH+%(��H�İD��[]A\A]A^��H�|$�C������D$A��E�G���@H�|$��B������D$A�1ۉE�E,���������fD�-���H���fD�-���K���fD�|$��r+���|$�-���|$�-���C���1��Q+���}�i-���E,���H�E0H��t��H�E8H����H��Љ��|1��@�H�=a�1��D��1�A�ĉ��*��D���-���E,�������H�=.�1���C���A�ĉ���*��D����,���E,��j����H�=��1��C���A�ĉ��*��D���,���h���H�}�E,��U�����0���+L��D�0E����}���[���<�}���RE���E1�A��A��A��'������|$�',��E�������|$E1��,�����D�|$�+�����f.��|$���)���|$��+���|$��+���E,�Z�����|$1��)���|$�+���|$�+���E,����f.����a)���D���@H��H�?�}/��������|$�W+���|$�N+�����f��|$�7+���|$�.+�����f��|$�+���|$�+������+������*�����E��uE�}��u7��uN�}��uA��������}��~���*����E��uF�}��t����"L���*���‹|$�*���|$�*��뮋|$A�����*���|$�*���(����|$A�����k*���|$�b*�������J��H�t$ ���8��5��H�UH�=r#H��H�EH�01��[���ff.���S1�H���q����C,[�����D����R���f���UH���[����t]�DH��]��(��ff.�@��H��X��f�I��dH�%(H�D$H1��������H��D$(����L�$��D$	�D$H�D$8	�D$,�1��H�T$HdH+%(uH��X��J��SI��I�ѹ��H��dH�%(H��$1�H��H���E5��H�>DgI��H�
�H���H�81���*��H��$dH+%(u	H��[��-J��ff.�f�PXH���I��H�t$(H�T$0H�L$8L�D$@L�L$H��t7)D$P)L$`)T$p)�$�)�$�)�$�)�$�)�$�dH�%(H�D$1�H��$�H��L��H�D$H�=�H�D$ �$�D$0H�D$�������>,��ff.�AUH�ghATI��USHc�H��H��H�H��HcC�SH�;9�|3�LR0�p�����9�L�sHc�H����S��H��H��t;H�HcCL�,�L����D��HcKH�I�EH�<�H��t���CH��[]A\A]�H�=1����ff.����ATH��hSHc�H��HÍG�H����w5HcCE1�~H�H�t��DD��H���t�kH��D��[A\�A�������1�H�=0 �'������UH��H�������H�����H�����H�����H��
]���fD��H�
%Ag@�׍G �E�@��t��@��w��	Ѓ����H�
�@g@�׍G�E�@��T��@��W��	��fD��AWAVAUATUSH��H�$H��H�$H��XH�t$ 1�H�T$(1�dH�%(H��$H"1��<�����UL��$@��1�L��M��1�A��fDH��Hc�H	�L��E1�I9�tSL�p������PЃ�	vԍP���w��WH��H�H	���fD�P�������7H��H�H	��fDH�L$���j� L����<,��H���`M�,L��H�L$L�p���y�@A���L�p���x=��
t8L��M9�u�E��u7� L�����+��H��~#M�,L��L�p���y�DL��E�����E1����$��H��$H"dH+%(�H��X"D��[]A\A]A^A_�fD�� �_���E���V���M9��$A�6I�F�t$I9��A�8 L�pA��u�Rf�L�p����i�����
�`���L��M9�u�E���[���� L����+��H���C���M�,L���fDE1�L�|$0�'fDL�p���xe��
t`C�I��I��tOL��M9�uׄ�uC� L���H�L$L�\$�*��L�\$H�L$H����M�,L��L�p���y�DL��B�D0�T$H�|$ H�D$(��A�Dž������A��L���|���H�L$����� L����%*��H��~mH�L$M�,M�����H�L$��uc� L����)��H��~<M�,H�L$L�����M��A�����M��A�������F���M��A���{���M��A��m�����C��A���������DUH�=:H��ATSH��dH�%(H�E�1��Z%��H��QH�5�91���&���
�V(��H�=�Q�*%��H��RgH��t-H��RgL�%�9H�KL��1�H���i&��H�H��u�H�E�dH+%(uH���
[A\]��'���C��ff.�UH��AWAVAUATSH��H��HL�odH�%(H�E�1�H�E�H�E�H�E�M�����I�}��H�H�591�I�����%��H�M�}�M�u�H��H�M�L�}��0��L��I���}0��A�tHc�H��H�u�良��H�M�H�u�M��H��8I��H��1��,��L��1��&��H�u��L�e�A��L����/���
��&��I�MH���D���H�E�dH+%(uH��H[A\A]A^A_]���A��f���U1ɺH��AWAVAUATA��SH��1�H��HdH�%(H�E�1�H��;gH�8�%��H�5@>��L��A���#D��A�H��H�
��fH�Yg�5��L�%uh�E�M����H�5�7L���B��A��tiH�5�7L���A����tPH�1;gL��E1�H�5wO�����1��"$��H�E�dH+%(��H��HD��[A\A]A^A_]�DA�H��:gD�(H�q:g������E����LL�;H�5G�L���jA��L�5�OgA���H��OgH�E�M���L��L���9A��A����}���H�SH�5��H��H�U��
A��H�U����IH�E�L�HM��t@M�M��t8L��H��L�M�L�E�H�U���@��L�E�L�M�����M�AI��H�U�M��uȀ:-�s�zh�i�z�_L��H�5�N�1���"��H�E�H�XH��t3H�H��t+L�-�5DH�KL��1�H���"��H�H��u�
�7$���`���f����E1��Q���E���/H�E�M�yL�E�L�0L���0-��L�E�I��L��L�E��-��E�dMc�L���-���L�E�L��L��I��H�g5H��1��_)��L��1��"���}�L�kH�s��A��L��A���,������H��MgM��������H��H�����H�;u����f�E1�����H�E�H�E�L�0M�����L���1�A�H�5�4�_!���8���f.�H�=�LE1���������@L��L��H�5�41��L�M��!��L�M�M�����H�}���������f.�L��H�5�L�1��� ��H�E�H�XH���8���H�L�-�3H���%���fDH�KL��1�H��� ��H�H��u����@H��H�5�3H�U��%>��H�U��������L���1�A�H�5:L�M ���&����#=����UH��ATSH���ChdH�%(H�E�1���t0H�!h��H�XL�$��@H���8�	�2��H��L9�u�H�E�dH+%(u	H��[A\]��<��ff.���UH��AWI��AVAUATSH��dH�%(H�E�1��=�h��A�G A��H�5�Dž\�����X�����H��H����H��X����������0������A�G��tKE1�L��`���A�d�1�fDD��Hc�A�)�L�Hc��!����xAÃ�c~�A��E9ww�H�E�dH+%(uGH�Ĉ1�[A\A]A^A_]Ë�)���+���H�5S5�1��14��H�5,5�1��4���i;��H�5�2�1��4��fD��U�H�5�H��AWAVI��AUATSH���dH�%(H�E�1�A�FDž\�����X����G��H���/H��X����������n/������fo�v��Gg�E�SSSS)�`���)�p���)E�)E�)E�)E�����DžH���A�FL��`�����tnDžL����df�D��L���E1�fDIc�H��C�|�H)�L���F����xVA�A��c~݃�L���A�F��L���9�w���H�����H���9
IGgw�H�E�dH+%(u'H�Ę1�[A\A]A^A_]�H�5�3�1��2����9��H�5(1�1��u2��H�5p3�1��b2��f���UE1�H�
ѤfH��fH��AWAVAUATSH��dH�%(H�E�1��-����Fg�<���H���N��H��hH���z�=�hH����1Ҿ�H��8����/�������=Th�	H��@����,�������=2h��Fg�h���6Dž���L�5Igfn�<���H�ÿpf:"�@���H�����f֍����I��H���JH��HgH�����L��P���H�L�:H���L�xI�I�GH�����H������H��H���I�WH����H�����(�"��I��H����I��=YhM�&L�`I�$M�t$��H�����+����uz�$Egfn�H���f:"�L����������=hA�D$AD$um�(A������������L�����1��f��fDH��8����+�����^����=�
hH�V1�H�5C1HD�1��0���L���X>�����3�K�F&��L��H���2�����)L��H�y���L��H���$�����L���>"����L���H�������=&
h���H������H��H9��������H�����H����L�%d���H@H������H����=�hA�GI�G�7��?�������W���qL������1��0��H����1Ҿ��,�����J��������H���H�����H������H9��������F����1Ҿ�H��@����`,�����l����=h���L�%�����L���1��L���1����Bg��h������1�L�������t1f���8����L���-��H���@��9�hw�1�H������J����D����L����A��H�����oh1�L�� �����u*�Q�L���(���� ����g��9:hv)�=5hH�"htЉ�L���H�<�����9hw�L��1����L�� ���H��(���L+����H+����yI��H��@BH��.g����L�5q@��=�
h�L�-��L��H�5BHIE�1������AgM��=�
hME�H�5�.�1������R��H��H��?L��H��S㥛� H�5�.�H��1�H��H)�I��H��.���H�= 
hL�-IDg�!��H�==DgH�L9�t#DH�GI��H�CH��!��H��H�M9�u�H�=�CgL�-�CgH�L9�t"@H�GI��H�CH��U!��H��H�M9�u�H�E�dH+%(��H��1�[A\A]A^A_]�L�m�L���T:�����/�K�B"��L��H���.�����%L��L��L��H��� �����L���>��H��H;������F����="	hu$H�����I��lf��;H���U��I9�u��h������������(9
�?g��h��H��h���@����H��H��?�H��S㥛� H��H�5-1�H��H��L��H)��k���O���H�59,�1���*��H�5y,�1���*��H�5,�1��*��H�5,�1��*��H�5,�1��*��H�5,�1��*��H�5�+�1��q*��H��+gH�t,�H�81��D����j��H�5�+�1��7*���1��f�UH��AWAVAUATSH��H�$H��HL�=�hdH�%(H�E�1�Hc�I�<�H����L�%vhM�,�M���I�=�h�����A��L������ 1�L�43� L���
:��L�������=>hA�}(�I����H�5�+1��^���Dž���L��L�������C=������H���%��A9���1�H�U�dH+%(��H��H[A\A]A^A_]�D��1��4#��A���8���@H�5F+1������Dž��|����k.��D�A���~I��H�5�D�1�D����� ��D����A��
u	������u2������V�����f��I��I��H��������.��D��N@H�=�*�\���f.�A�}(H����1��H�5�*�"���Dž�����H����D�I��H�5D�1�D�����p��D����A���F���L����H�e*�L��������2���I��L��H�5�C�$������I�U H�5*�1����������T�����.��f���UH��AUATSH��H���?dH�%(H�E�1��E�����������;gE1�L�mԅ�~9fD���uJ�{�L��A������{�L���:��D;%�;g|�H�E�dH+%(uDH��1�[A\A]]Ð�{�L��A���:���{�L�����D9%=;g�봐�C��-��D��UH��H��AUATSH��dH�%(H�E�1��,��H�����,H��I����)��H��H��t_�L���\,��H�{H��hI���I,��H��hM��t]H��tXE1�L���=��H�E�dH+%(��H��D��[A\A]]�@H�'gL��H�JBA�����H�81���
���H��&g��H�=a(A��H��|7���H��&g��H�=;(A��H��V7���d�����,��ff.����Uf�E1�H�
ݙfH���fH��AUATSH��dH�%(H�E�1�)E�)E�)E�� ��H�}����H�}����1�H��p����G���E��=�h�E��E��E��E��E��E��E��E��E��4L�%M���H�M�1�H�}�L�����H�M�H�}�L��1����H�}�1����H�}�1�����H�}�1�����L�e�H�]�L+�p���H+�x�����H�=�h���H�=�h�t��H�=�h�h��H�=�h�\���}�uT�}�uNH��%g���������H��H��?�H��S㥛� H��H�5�%1�H��H)�H��L���R��H�E�dH+%(��H�Ę1�[A\A]]��I��H��@B�<�����[4��A�ą��H�}�����1��
���=)hH��6H�
����7gHD�H�5�?�1�Mi�@B�
��H��L��H��S㥛� H�5�%H��H��H��?H��H)�1�I��H��$�y
��I���f���I*�f��f(����*e7gH�5�%��X����^��5
��f�1��*:7g��X����^
�fH�5�%�^��,�������f�H�}����H��l���1�D���%�������L��A��f��H��L	��H*��X��J���H�Z#gH�D$�H�81��
����:���e)��D��UE1�H�
!�fH�:�fH��AUATSH��HdH�%(H�E�1��Z��H�}�1��
���U6g��~1��2����;>6g|�H�}�1�����L�e�H�]�L+e�H+]�yI��H��@BH��"g���tc���YH��H��?�H��S㥛� H��H�5X#1�H��H��L��H)����H�E�dH+%(�9H��H1�[A\A]]�fD��5gH�
3$H�56$1��Mi�@B�a��H��L��H��S㥛� H�5�#H��H��H��?H��H)�1�I��H��"�#��I�xvf���I*�f��f(����*5gH�5a#�M��^���
��f��M�1��*�4g�^
;d�H�5�#�^��,��
�����DL��A��f��H��L	��H*��X��t���H�2!gH�"�H�81�������
���='��ff.�f���UE1�H�
�fH�
�fH��AUATSH��HdH�%(H�E�1��*��H�}�1���
���%4g��~1�1����� ��;4g|�H�}�1��
��L�e�H�]�L+e�H+]�yI��H��@BH�� g���ta���WH��H��?�H��S㥛� H��H�5&!1�H��H��L��H)��|	��H�E�dH+%(�7H��H1�[A\A]]�@�n3gH�
6"H�5"1��Mi�@B�1	��H��L��H��S㥛� H�5i!H��H��H��?H��H)�1�I��H�k ����I�xvf���I*�f��f(����*�2gH�51!�M��^����f��M�1��*�2g�^
b�H�5j!�^��,��������DL��A��f��H��L	��H*��X��t���H�gH���H�81����������
%��ff.�f���UE1�H�
��fH�ړfH��AUATSH��HdH�%(H�E�1����H�}�1������1g��~O1��.���Dž���u
1��k��1�1��� ��������'t��;�1g|��f���1g'H�}�1��K��L�e�H�]�L+e�H+]��H�Ng�����j1gH�
*H�5 1��Mi�@B�-��H��L��H��S㥛� H�5eH��H��H��?H��H)�1�I��H�g����I���f���I*�f��f(����*�0gH�5)�M��^����f��M�1��*�0g�^
`�H�5b�^��,��z��H�E�dH+%(�H��H1�[A\A]]�f.�I��H��@B��������H��H��?�H��S㥛� H��H�5�1�H��H��L��H)����뇐L��A��f��H��L	��H*��X�����H��g��H�=�H��!-����g��H�`g��H�=�H��,����=��H�6gH� �H�81����������A"�����UE1�H�
�fH��fH��AUATSH��XdH�%(H�E�1��:��H�}�1������5/g��~d1�L�%�f.���;/g}DL�e�H�E��4+���Dž��#��1�1�����������'u���.g'H�}�1��v��L�e�H�]�L+e�H+]�yI��H��@BH�rg���td����H��H��?�H��S㥛� H��H�5�1�H��H��L��H)��?��H�E�dH+%(��H��X1�[A\A]]���..gH�
@H�5�1��Mi�@B����H��L��H��S㥛� H�5)H��H��H��?H��H)�1�I��H�+���I�xvf���I*�f��f(����*�-gH�5��M��^��v��f��M�1��*-g�^
�\�H�5*�^��,��B�����DL��A��f��H��L	��H*��X��t���H��g��H�=H��Y*������H�u�1�L���1����H��gH�=�H��!*����g��H�`g��H�=�H��)����=��H�6gH� �H�81��������A�����UH��H��AVAUI��ATI����SH��0L�wH��dH�%(H�E�1�A��H�}�1������5,g��~1����L��L���A��9,g�H�}�1����H�U�H�E�H+U�H+E�y
H��H@Bf��*�+gM��xRf���I*��Y�f��f���H*��H*��^
[�X��^�H�E�dH+%(u0H��0[A\A]A^]��L��A��f��H��L	��H*��X���'�����UH��AWAVI��AUI��H��ATI��1�SH��8L�L��dH�%(H�E�1��b���L��L��L��A��H�}�1�����+g��~ 1��L��L��L����A��9�*g�H�}�1����H�U�H�E�H+U�H+E�y
H��H@Bf��*�*gM��xQf���I*��Y�f��f���H*��H*��^
�Y�X��^�H�E�dH+%(u/H��8[A\A]A^A_]�@L��A��f��H��L	��H*��X��������UH��AWAVAUI��ATI��H��SH��1�H��(L�wL��dH�%(H�E�1��2���H��L��L��A��H�E��=��g�H��H�E��;��H�E�H�E���)g��~"E1�fDH��L��L��A��A��D9=�)g�H�u��=E�g�����H�E�H+E�H�U�dH+%(uH��([A\A]A^A_]�������UH��H��AWAVL�u�AUATI��SH����H��(L�oH��dH�%(H�E�1�A�Ջ=��g�L���m��H�E�H�E���(g��~$E1��D��H��L��A��A��D9=�(g�=y�g�L���$��H�E�H+E�H�U�dH+%(uH��([A\A]A^A_]��I��f�UHc�H��AWAVAUI��H�vATSH��H��(�E�dH�%(H�E�1�H�L��L�<�����I�OI�H�5lI�Ŀ1�����M���mE1��{ �LH��g����{�=��gL��L��L��L�����SH��g1��������u?�=|�g�ZH����f��H*����^E�H�5P�m���DL�����H�E�dH+%(�H��(L��[A\A]A^A_]������SH�gf����l����=��g���
{Vf/���yV�iVf(��Y�f/����_V�Y�f/��!f(���H�5l����:���f�L��蘆��I��H�������H��)gH�5N/�1�E1��o�������f.�H��)gH�5��1��F����f����H�5���*������DH����f��H*����^E�H�5������fDH�5�������e���DH�ƒ�f�H��H	��H*��X��"���f.��Yʿ�H�5Zf(��������DH�ƒ�f�H��H	��H*��X��L���f.�f(���H�5��6����������ff.��UE1�H��AWAVAUI��ATSH��(H�JH���fdH�%(H�E�1�����=��gtA����H�����I��A������������H�5
�g�*1��q
�����gA���H�=�'g�4��f���*
�$gI��H����f��H*��Y��M�M����L�5�'gA�>a� I�]�E�H�;H��H����DH�5F4L��������t-A�>huA�~t L��H�5��1����f.�H�=��L�%��h���I�EH�H��t*@H�HL��1����I�EH�H��H�H��u�A��@f.�H�;H���E�H���L���L���$��A�ą�u��E��u�L��L�����H�E�dH+%(�	H��(D��[A\A]A^A_]�DA�~l����E�fA��l����I�EH�8t�E1��@�E�D��L��L��A�����I�EH�H��H��u��y��������8&��H�kg��H�=�+H�� ���C���DL��H��f��H	��H*��X��Y��M�M������H�g�H�
�%gH�6A�H�81�����������H�
gH�;+�A���01��)���R���@��UH�
�fHn�H��H��0dH�%(H�E�H�gH�U��E�H�E�H�����fH:"�H�c�fH�E�E����H�U�dH+%(u�������UH�
d���fHn�H��H��0dH�%(H�E�1�H�b�fH�U�H�E�H�E�H���fH:"�H�݃fH�E�E�� ���H�U�dH+%(u���
��f.���U1�H��SH��H�ZgdH�%(H�E�1���gH�����H�M
g�oH�g�of��fo
�OfH:�f��H��y
H�E�dH+%(uH�]����z��f.���UH��AWAVI��AUL�-h�gATSH��L�gL��dH�%(H�E�1�����-'�g��L��H�=o�gL�=@,��	��L����e gD����1�L�-? g�H��I�V��E1�jE1�����H�4����g1����A�}ZYu��t
�����8t
L��1��!����g��I��9�w��=��gt�M�fH�E�dH+%(u.H�e�1�[A\A]A^A_]À=��gu��H�=��g�T�������:��f.���UE1�H��AWAVAUL�-�fATL�%��fL��SL��H��dH�%(H�E�1����������H����H����L��0���1��L���H�H��8������1�L��H�����H��0��������=�g�D�%�gE����D�� ���H����H���7�=�gD�5�gH���FD�-�gL����+���H��I��D��AV��H�5^*E��1��'�f�_AX)��gfo1MH�=��gH���g��g����H�=\�g���H�=�g����gL�����g���H�=(
g1�������Hc�H��H��?�u��D�
�gH��L�,�I��1�E����@A�܋=�g�I��L���A�$����I�D$H���L��1�L����H��������1�H����A����A���������H�H��H��L9�sH���H��I	�L��L��L���A�������I�|$L��H�J���L��������R��9�g�;���L�����L������H�=��g�����5q�g��t"DH�5y�gH�=��g�-���
O�g��u�H�=��g�g��H�=P�g���=�g����H��g1��\�gH����H��g�oH�Ng�of��fo
&KfH:�f��H��y�6gL����E1��t,@D��1�H��I�|�������A��D9%�gw�H�=)�gE1�L�-_�g�j���H�=��g�^���H�=��g�����g��u>���M��H�5(1�L���D���`�I�~A���s{��D9%�gvzH�E��E1�I��L���H��~I�F1�H��I��L��L���Z��H�Cg�8u��HgI�NE����x���M��D��H�5k'1������z���f.�L���H����
��f/����L,�L���Y��H�M����f���I*��/���A��L���=�gH��	H��	HD�H�5D'��Z�H�������H������H�E�dH+%(�H�e�1�[A\A]A^A_]�fD�&�g�H�p
���f��\�L���L,����H�I��?M���A���L��L��f��H���H	��H*��X��)���H�����Q��gA���
�����i�������H�5�	�1����fDL���`��H�5�	�1�����L���E��H�5�	�1�����H�5�	�1����H�5�	�1����L��L���������f.�f���UH��H��dH�%(H�E�1��V�gH�E�dH+%(u��������UH��ATL�%��gSL��H��dH�%(H�E�1������-��gtXL��H�=P�gH��g�$��L�����D�%��gDH��1�D��E1�jE1�H�޿�1�����ZY��t�1�����H�=;�g����@��UE1�H��AWAVAUL�-xfATL�%OfL��SL��H��dH�%(H�E�1�����������I��H����L��0���1��L���H�H��8�����1�L��H�����H��0�������=�g��D�%�gE����D������H���gH����=�gD�-�gH���Q�
�H��I��D��AU��H�5%�L�
��g1���foFf��AYH�=@�gAZ)
7�gH�g)
i�gz�gH�'�gH�\�g���H�=��g�[���H�=t�g�O���H�(gDž���D�E����f��=��g��H���gL���H������y����gHc�H�߉{�gH��?���D��gH��L�,�I��1�E����DL�����L��1�L���c�L����1�L��A����A�������H�H��H��I9�vH���H��I	�L��L��L���	����TH�������H���L��H�<�1��s������L�����;�g�\���L���\��H�=��g����=��g��t#fDH�5��gH�=R�g����5k�g��u�H�=��gE1��$���H�=m�g�x�����>���H�����1����D�5UgE��tL��gE1����H���
:gE1�E1�j��1�H�5K�g���y���ZYA�D95	gu�E��1�H����L�-��g�c�H����L��L��H+����L�%�gH��@BHH����L��H������=�g�0��g1ۅ�t+@H���g��1�H�<���������9wgw�H�
�g����������9�}���H�=�g��H�=��g��H�=.�g�I���L���Q�L��fH~��t��L��������4������fHn�fI~�f(��9�fIn܋
�gH�5�!�H,�f(ȿ�fHn��^�B�b�H�=��g��L����H�E�dH+%(�*H�e�1�[A\A]A^A_]�f.������f�D��D�cgH�5�!�H*ÍP��^�B��������g�H�9���L�%T�gL�-
�g���L���P��gA���8�����h��������H�5��1��
���DL���`��H�5��1����L���E��H�5��1�����H�5��1����H�5��1����L��L���������f.�f���UH��H��dH�%(H�E�1����gH�E�dH+%(u��������UH��ATL�%��gSL��H��dH�%(H�E�1���
���-<�gtXL��H�=��gH���g�$���L�����D�%�gDH��1�D��E1�jE1�H�޿�1�����ZY��t�1�����H�=��g����@��UH��ATSH��H�=[�gH��0dH�%(H�E�1��3���H�}�1���H��E1�E1�jD�%Ag��1��t�gH�5��gD����v���ZY�CA9�t��D��H�=�1���H�}�1��^�foE�f�E�fo
�?fH:�Cf��H��yK1����ff.�f���UE1�H��AWAVAUL�-zfATL�%�yfL��SL��H��hdH�%(H�E�1���������L��0���1��L���H�H��8�����1�L��H����H��0�������=S�g�����I��H����D�%?�gE�����8�g��t	D9���D�%$�gD��A�D��D�-�g����H�|�gH���S�=��gL�5���<��AUM��D��S��L�
P�g�H�5D1���fo5->f�AZH�=��gA[)��g5��g)��g5
�gH���gH���g����H�=p�g�k���H�=$�g�_���H�8�fDž|�������@�=��g��=!�g� ����H������H���cH�|�gL���H�������i	�����gHc�H�߉��gH��?���D�
��gH��L�,�I��1�E����DL������L��1�L���S��L����1�L��A����A�������H�H��H��I9�vH���H��I	�L��L��L�������H��������H����L��H�<�1��c����L�����;�g�\���L���L��H�=E�g���D���gE��t!@H�5)�gH�=��g�����5��g��u�H�=��g1���H�=��g�i���@
�/���L������1�L���M�����gH�=0�g1��P�f���=t�gL��������t3D��H�����L��H��L�H�������O��;8�gr�H�=��g1������
"�gL��������t(��1�H��I�|�N��������;��gr�H�=��g1��-���L���U����g��t*H�A�g��1�H�<�������u��9��gwً��gL������E1�L�%��g���zf�D��H�=��gA��H��L�H�s����sL������U�gA9�r̃�|����=.�g�H�������<�H��f��|���9���H�= �gL�%y�g�$�H�=��g��H�=A�g���L�����L��fH~���
��H�=�g��������������fHn�fI~�f(���fIn�
��gH�5��H,�f(ȿ�fHn��^n:���H�=��g�u�L����H�E�dH+%(��H�e�1�[A\A]A^A_]�f�fo%�9f�HDž���HDž����)���������)�������������-E1�L������L������L������E��L�������D��L��A��H��L�H�s�F����sL���;���D;=��gr�L������L���c�L��fH~��	��L���������F�������fHn�fI~�f(��K�fIn�D�;�g��|����H,�f(ȿ�H�5WfHn��^
9�m��������|����=��g�����fo=l8f�HDž���L������)�����L������HDž��������)�����������������g�L�5[����������� ���H�5E��1��X��D��1���uzA���6���H���E��;�gA�����H�55��1���L���l	��H�5��1���L���Q	��H�5��1����H�5��1����H�52��1����L��L�����L������L�������������DH�5���1��}�f.���UH��H��dH�%(H�E�1��6�gH�E�dH+%(u��������UH��AVAUATL�%��gSL��H��dH�%(H�E�1�����-��g��L��H�=(�gL�5�g��L��H���gL�-��g���fD�=�gD�%R�guhH��E1���E1�j1�D��H��1���_AX��t�����8t�A�>��DH�E�dH+%(��H�e�1�[A\A]A^]�fDH��D��1�H��j��M��E1���1��U�Y^��t?����8�Q���A�>u�H�=��1������H�=q�g�L�������H��D��E1�E1�j��1�H�5��g�����XZ�D���H�=?�1�����1������ff.����UE1�H��AWAVAUL�-rfATL�%�qfL��SL��H��dH�%(H�E�1�������8���I��H���L��0���1��L���H�H��8������1�L��H�����H��0������=�g� D�%�gE����D�����H���gH������g��u
�H�g�D�5�gE9�v
D�%�gE��=pg�|�=bgH�p�L�-#�LD�H�7���H�5�HD���H��D����H�6�gAVI��PL�
-�gH�51�AU���f��H�� fo�3H�=��g)
��g��g)
��g�gH���gH���g�C��H�=L�g���H�=�g���H���fDž����������=��g��H�|�gL���H������	����_gHc�H�߉�gH��?���H��I�Ƌ=gL�,�1ۅ����L�����L��1�L������L���+��1�L��A����A����7����H�H��H��I9�vH���H��I	�L��L��L��������1H�������H�����L��H�<�1������&L����0�;�g�\���L������H�=E�g�0���D�%�gE��t!@H�5)�gH�=��g�}����g��u�H�=��gE1�E1���H�=��g���������H�����1��]��D�5gE�������gE1�E1�A��A���G�H����L�
��g1�j��H�5��g1�����AZA[���SA�D95�gvI�=�gD��gt�H��L�
��gD��1�j���A��H�5v�g��AXAY��E��1�H�������H����L��H+����H��@BH��H���gHH�H��L�-'�gH���������L��H������=�guG������f�=�g�H*�D��g�^�0�P��D���H�5e�0���=�g�3�5�gE1�t0fDH���gD��1�H�<��������:A��D9%dgw�H�o�f������������9�����H�=��g�~�H�=��g�r�H�=��g��L����L��fH~��A��H��������������������fHn�fI~�f(����fIn܋
�gH�5(�H,�f(ȿ�fHn��^�/�+��H�=��g���L�����H�E�dH+%(�oH�e�1�[A\A]A^A_]�H����gD��E1�jH�5_�g��1���E1����;gY��_9������H�=�1��%���}���E)�E��D��E��H�5���~���I���D�%��fE���u���H���gL�-��gH��������L���������fA�������������H�5���1���DL�����H�5{��1���L������H�5|��1��j�H�
[�gH�X�g�1�H�5��I�H�5W��1��6�H�5Q���%��p�H�5���1��
�L��L������f���U1�H��SH��H���fdH�%(H�E�1����gH�����H���f�oH�R�f�of��fo
*-fH:�f��H��y
H�E�dH+%(uH�]������f.���UH��AVAUL�-�gATSH��H��L�gL��dH�%(H�E�1�����-��g�L��H�=!�gL�5��5�L��L�-������@�=��f���=��guH�����gH�s1�jE1���E1Ƀ�1����_AX����u���P�H��H�s1�j���gE1�E1���1�����Z^����t	�=z�ftPI���=Y�gt�L�cH�E�dH+%(uYH�e�1�[A\A]A^]�f�H�S�3L��1��p����8���H�S�3L��1��X����fDH�=I�g�d������J�f.���UE1�H��AWAVAUL�-�kfATL�%okfL��SL��H��dH�%(H�E�1��-����&���H����H���tL��0���1��L���H�H��8�������1�L��H�_���H��0�������=N�f�SD�%K�fE���)D�� �	�H��gH�����=�f����fL����J��D��H�5�
��A��1��O��f�H�=��gH�Y�g)B�gfoJ*K�g����H�=��g���H�=��g��H�=g�f���f1��)�g���H�e�gH��������o�fHc�H�߉��gH��?����=T�fH��L�4�I��1ۅ����*f.�H��gH�BL��1�L������H�������1�H����A����A��������H�H��H��I9�vH���H��I	�L��L��L���v�����H�
��gH����H�)���L��L�J�| ��������L������;|�fsnL��A�����H�L�gI��L�=Q�f��&�����H�������H�����H�BH������H�57��1��	�f�L���X���H�=��g��5��g��tf�H�5��gH�=�g����
o�g��u�H�=��g�'�H�=p�g�{���=��f���H�Y�f1��|�gH������H�Y�f�oH��f�of��fo
�'fH:�f��H��y�V�fE1��t1D��1�H��H �gH�x�������A��D9% �fw�H�=K�gE1�L�%��g�,��H�=��gL�-�f���H�=��g����f��tXH�E1�H��~D��1�H��H��gH�@H��I��L��L���P�A�}��=��f��A��D95��fw�L���c���
Ӿf/���L,�L���t���H�M���f���I*��J��A��L���=(�fH��H��HD�H�5_��u��H�=��g���H�����-��H�E�dH+%(�fH��1�[A\A]A^A_]�fDD��A��H��H=��gH���5W��D;5��f��������D��M���H��Hj�gH�5�	H�H�1�����������>�g��E�����\�L���L,��_���H�I��?M�����L��L��f��H���H	��H*��X�����H���������fA�������)��������H�5���1����fDL��� ���H�5���1���L������H�5���1���H�5���1���L��L���t�����f.�D��UH��H��dH�%(H�E��9��H�U�dH+%(u�����w����U1�H��SH��H���fdH�%(H�E�1����gH���,��H���f�oH�r�f�of��fo
J$fH:�f��H��y
H�E�dH+%(uH�]������f.�UH��AWAVAUATSH��(D�-x�gdH�%(H�E�1��=J�gtD�o�B�g�����߾�!�H�2�gI��H����E1���t1E1�@D��I���O����������gA��D9�w֍C�L�u�L�}��E�H��M�4��)�C�A�L���A�<�H��I�����������u�A�$L���D���z����xfH�E�dH+%(u}H��([A\A]A^A_]�����H�g�7�H�H�gI��H������H�5_��1��1���H�5D��1����H�5$��1��
���U�D��UH��AWAVAUATSH��HH�}�H�OD�=��gD�-��gdH�%(H�E�1�H�M��=��gA�O��M�tD�oL�%f�gL���N��-?�g�@L��H�=��gL�u����L��H�]��B��f��M��L��D����D�e���y���y'����8u�H��D���2��D�=1�gE��tՀ=�g���=��guwH�E�E��t��=��g��H�E�H�M�H�HH�E�dH+%(��H��H1�[A\A]A^A_]���K��8�B���H�5���1�����L��D��D��M�@�^���j���f�L��D��D���E���6���9���H�=��g�����H�E��x����7����z�f.���UH��AWAVAUATSH��XH�}�fo� dH�%(H�E�1��=��gH�E�)E����=��gH���H���HD�H�5J1����H�<�fH�8���=��g�_H�E��=��g���g���Ã���� ��:�I��H���(A���H��1�Hc�L)���H��1�H�pH��H�M�H��L��H��H��oL�I��H��H��o�o@�oZAMXAE
BI9�u�L��������g���H�M����fE1�L�af��=��g���Ã�v{��M�4$�i�H�E�H���VA�fD�+�H��1�Hc�L)���H��1�H�xH��C��H��L�I����C��I��I9�u�H�}��G���=�f��t[E1�L�m�fDJ���fDH��y'���8uI�$�L��<����=�guԋ��fI��H��L9�w���gI��I�� L9������1�H�}��`�H�E��=��g�����=��gt&H�U��H�5=1�����H���fH�8��H�E�dH+%(u=H��X1�[A\A]A^A_]À=@�g�������H�E��H�5A��1�����^�ff.���UE1�H��AWAVAUL�-h`fATL�%?`fL��SL��H��XdH�%(H�E�1��=�����G	L��0���1��L���H�H��8������1�L��H�d���H��0���������H������H���N	�=D�g���=U�g���O�g���P�Ǿ ��H������H���
	H��������������f��g�=��g�L2fHn���)������H�������7�������=��gD�=��fH���L�%��LD�D�5��fD�-��g����H���AW��D��M��H�5�E��1�����f�AYAZ)ٽgfo�H�=
�gH�Ͻgнg��H�=l�g�G��H�= �g�;���!�gH�=��f1����g�A���=�g�%�-���?�=нgɁ����@�=��gD��=νg��������=��g�N�:�Hc�H�����D���gH������H�C?H��H��H������E���H���L������HDž����Dž����H������D������I��L������=�gt$��c���A�E�����=��g���E�������=�f�A�EI�����I�EH��H����E����E1��
fDI�]�=��gD�5��gtE�uD��1�H��H�H�������s��H������IU�����g�����������L��D�����������8�����(A��D;%a�f�w����=(�g��H������L������1�H��L�����L������L�����1�L��A��������A���� ����H�H��H��H9�vH���H��I	�H������H������H�������v�����NH������H������H������I�}L��H�������������������������;w�g����H���������=B�g�~H�=�g����5�g��tH�5�gH�=h�g�C���
źg��u�H�=�g�}��H�=ƺg�Ѻ��H������1�H������H�����5������=��f����H���f1��ɺgH�����H���f�oL�%@�f�of��fo
fH:�A$f��H��yA$�=}�g����w��H������1��_�g買������H�=��g�n��H�=7�g�b��H�=�g����=$�g�\��g1�L�-��g��toI�$A��E1�I��L�����H��~I�F1�H��I��L��L��������fI�N������A�M��H�5�L���1�������9��gw�L������
�f/��{�L,�L����E�$$M���tf���I*��e��D��L��H�5��1�觽���=1�g����H�������`���&�gL��������t�؃�H��I�|���9�gw�H���������H�E�dH+%(�H�e�1�[A\A]A^A_]�L���,�����=��gH�
�H�	��HDˀ=��gH�5�HE�H��1����H��fH�8���=W�g�����H��������H�<�H�5���1�覼��H���fH�;�g��=&�g������gH�5���1��q���H�;�9����g�������H�������������g���A�1�M��H�5������%���9��g������
�����ڹ�����g���dD�q�gE����={�g�&����=P�g����H����
���H����H�5@��1�謻��H���fH�8�m��P���H�5<��1����H�5���1�����\��L,�I��?�w���L��L��f��H���H	��H*��X��t���H����R���q����5ögH������� H�
D��O�����H�=P�莹��H��fH�8���	���1������L��L���c�H��������H�57��1��A��H�5���1��.��fDH�5/��1����H�50��1����H�5���1������H�������<�H�5���1��������H�5���1����H�5���1����f.�@��U1�H��SH��H�:�fdH�%(H�E�1��¶gH��蜺��H�-�f�oH���f�of��fo
�fH:�f��H��y
H�E�dH+%(uH�]����Z��f.���UH��AWI��AVAUATL�%�gSL��H��Hfo�dH�%(H�E�1�)E��n���-ϵg�L��H�=Ǵg����L���j���H�E�H�E��D���H������
��f1ҋ=ŵgH��H��H��I�G(D���E�H���������H��Ic�H�E�H��H��H��H�H)�Hc�H���4���1�D�ʾ������uI�D�H�}�1�����=N�g��=?�g�U������f��t�D�%$�g1�L�m�fDI�G(A��L��D��Jc��E�H��H�E��w����uI�GI�G(L��D��Jc��E�H��H�E��H����uI�GI�G(1ɾD��B���(����uI�G ��;Z�f�p����)�����9C�f�Y�������@L�m�D�ʾL����������L�m�D�ʾ�E�L���������H�E�dH+%(u"H��H1�[A\A]A^A_]�H�=	�g����������f.���UE1�H��AWAVAUL�-�WfATL�%oWfL��SL��H��hdH�%(H�E�1��������x
L��0���1��L���H�H��8���襹��1�L��H�����H��0�������s���H��x���H���
��)����;�g����	��g�����'�g�����Ǿ0���H������H����	H��������������	���f�ݲg�=Ӳg�L2fHn���)������
H��������������	D�-d�f�Z�fD�%��g�z���E��A�ؿ��D��H�5�1��|���f��fo�H�=�g)
��g��g
��g)��g)
h����H��]eH�V�1��01��Y��ƃ�����e�����UH��AVAUATSH��H�$H��H�$H��dH�%(H�E�1��O���������������n��H����H���L�%s��1�H��L��I���ab��I��H=��RL��H�ǻo�N��L�%A�����1�1�1�L���S�����C����u�1�1�1�L����@������I���H�@H�xtRH�x tKL�@(I��ouKE1�L���¨��L���jd��L���?��H�E�dH+%(�!H�Đ D��[A\A]A^]�H�%�o�H�q\eH��A������01��W����b����H��P��A������8�kN��H�l�H��H�%\e�01��LW���U���H�\eH�+��A������01��%W���&����+b��L��I�غ �8L�����A�����L���`��H��[eL��H��Y��01���V�����H��[eH����A������01��V�������yc��f���UH��AWAVAUATSH��H�$H��H�$H��8dH�%(H�E�1��,M���������������+l��H�����H�����&?��I��H���VL�����1��1�L��H��L�%���H�H�޹L����_��H�����H=��6H�����H��L���!Q������1�L��o�jI��L�%��A��L���8H��A���C�A9���A�����Ic�H�����H=�wH���D��H��I	�1�L�����(q������E1�D1�L��1�1��P��A�����@��A9�u�H�����H���wH�ʸH��H��H��I!׍s�L����H��A���>���fDH��YeD��H���01���T�����L��1�E1��#U��H�����1�L��H����KH��A���dfDH�x ��H�A�L$oH��L�L(H�CoI9�t%H�4YeE��H�A������01��FT��A�t$L��H����G��A��L��A���F��9�~hA����H�����1�1�D���<����x&H�����H���H�@H9X�W���H�%H��XeH����A������01��S��H������[��H������s���H������`��L���ON��H������#;��H�E�dH+%(�5H��8!D��[A\A]A^A_]��f^����H��@��A������8�=J��D��H��I��H��We�01��S���g����!^����H��@��A������8�I��H���H��H��We�01���R���1���H��WeH����A������01��R�������]��L�����I��L��8� L��A������6\��H�OWeL��H��U��01��gR������H�+WeH� ��A������01��@R������_��fD��U�H��AWAVL�����AUL��ATSH��dH�%(H�E�1��H�H�����H������H�H�����H��\H��8����2Q��H���;�1�I��H�5��H�=~��|[��I��H=���H��L���BF��L��L����P��H�
��H���A�ą��1�L��L���1_���H��I�}(1����Z��L����m��A�ą��������L���i��A�ą���L���O���H�5��1�������dL��Dž����Dž���A�E0���D�����1�Dž��������M���I�L���:c�����������H��������A9E0̋����9�����������������u�H�
��H�+�f�H�Ue��01��;P��L����M��H�E�dH+%(��H��D��[A\A]A^A_]�L���_��H��H��t$�>	t3L��������tX��L���l_��H��H��u�L���,C���%����L�����L��L���>Z��A���H�h�L��L���f��=�K���A��H�LTeA�H�
��H����01��ZO������D�[Z����H��@����8�8F��H���H������f��+Z����H��@����8�F��H�A^H�����f���H��SeH�����01���N�����DH��Se�H�
՘H��A������01��N���_���f.��
L����Y������H�GSe�H�
��H���A������01��UN�������[��f.��ULc�f�H��AWAVAUL�����ATL��SH���dH�%(H�E�1�H�E�)E�)E��H�L������HG��H����H��1�1�I���W��L���f��I��H���k1�L��H����c������1�L����e��A�Dž��"1�1�L���7��H�/H�����@(�l�PH�
]����a�@0f����H�]�1�1�L��H���\��H�}���D������H��������H��Qe������01��L��H��1�1�L���=\��L�]�f�D�����A�E������E��u�H��1�1�L��L������A���\��H�yQeH�M�H�%�L��������01�L)��L��A���N�������A��������N��H���1�H��H�Qe�01��EL��L���S��L���9��L����3��H�E�dH+%(�H���D��[A\A]A^A_]�DH�
�H��PeH��1�A������01���K��L����Z����A���������M��H��1�H��H�~Pe�01��K���c���H��H�bPe1�A������01��K������H�>Pe1�H�]A������01��VK���w���L�
y�A��H�
Pe�H�
M�H���A������01��K�����L�
�A������W��f���UH��H��dH�%(H�E�1�H�E�dH+%(u�1�����W��f���UH��H��dH�%(H�E�1�H�E�dH+%(uɿ�K����VW��fD��UH�
	�fHn�H��AWAVAUATSH��HdH�%(H�E�H���fH:"�H��OH����H�Pe)�����~MOefH:"�PeH�����)����@����������������_��H������H������2��I��H����L�����1��1�L���H�L���_=����H�H=���1�L�⾀�`e��1�L�⾀�Qe��������H��I��H���(H������H��L��1�L������M2��H�����H������H������L���@L���@L�X1��_���1�L��H�=���R��I��H=����@@1�H���K��L��L���=��H������L��L���D������DŽ������`��H�������H�Hi�	����H�� ��)���)�)�������H��H���'�����L��H���V`��1Ʌ���L������I��L������I��C����tN����E1�DA��A��D9�u�I��I��u�L������L������H������M��$�L���uZ����yV�l��>��	�H��L���"6�����5H������L���+/��H���@Hc��L�������P��L���W��H��H��u�L����:��I�$I9�u�DH�I9���Hc��D�������D;�����t�H��A�������2��E��H��I��Hc��������H��Ke�01��F��DL���D���jfDH���H��I	��4�����Q����H��@���A������8�=��1�L��H���U:��I�ؿH�.
��H�]Ke�01��F��L���<A��H�������.��H�E�dH+%(��H��HD��[A\A]A^A_]��H�	KeH����A������01��F���@H��JeH�A��A������01��E���u����H��JeL��H����01���E��I���A��E��A������E1�����_5��H�i�H��H�nJe�A������01��E�����A�lj�H�IJe1�H���01��gE���j���H������H��
��]P����H��@���A������8H���1<����H�c
I��H��Ie�01��E�������P����H��@����8��;��H��
H���?���H��IeH����A������01���D���M����Q�����U�H��AWAVAUATL�����S1�L��H��H�~7�ddH�%(H�E�H���fH:"�H���H�)�����H�����H������ƅ���Dž<����0,��fo������H���HDž��I��H���H�)����M����L��L���=C��H�V}�Å��H�����E1�1�L��L���9���Å���M�/���L���fY��L���YY��L��E1��IY��L��1�L����G��E���L��P����L��L��L���H���D���`I�������RN���8u
��������H�=f�H�������-[��H�������8��H��@����:��H�,H��H��Ge�A������01���B��L���@���6@H��H��Ge��01��B��L��E1��Y@�����A��A��H�E�dH+%(�	H��HD��[A\A]A^A_]�fD�[7��I��H���g���H�(GeH�i{�A������01��=B���Dž��������1���H�‰�H����?I��H�2H��s'��������"H��H��H��H��H!�H�
�H��H=u���������LA���L�⾀�]������L���S^���Å�����<���L���Y���Å���L��E1��@��L���+��ƅ����Dž����ƅ����ƅ����ƅ����ƅ����HDž����L������H�������@0����HDž����H������L������H��H��H)�H��I��H��I���tS�����DL������D��M��H�������SP��I��H���E�'D���\0��I��A��wD����H�����L��L����.������H�GEe�0��~2D��(���H����1�1�H�չ�S@��H��Fe1�L��H��?��H����H������H;������,D��(���������A9�t%A��H��DeL���H��	��01���?��D�����E���E9�t)H��DeL����H�y	�01��?��E���D�����E9�t"H�nDeL����H�t	�01��?��D�����A��
w6��L���/A��
tiA��
wH���D��Hc�H�>���H�DeD����H�7	�01��?��H�������G��H������H�������(����A�GA9G��I��H�/L���a>��I��H���U������uH�xH�5]O�LL����������������uI�|$H�5I��)L����������������uI�|$�H�5+��1�����������������5���A�D$��l��������������fDI�H�5�N�K���������H��BeL����H����01���=������f�I��(����A��M����������������9�������������u
�������d�������/���������������MD���E����A�GA;��t&H�BeL����H�'��01��1=��A�GA9G����H��AeL����H����01��=�����@I��H��AeL���L������H�2��01���<��H����H���������f�L������A��L���/��fDH������H������H������9F0�g�����<���������3���H�+AeH�%��L�������01��?<���p���f.�A���A9G�����H��@eL����H���01��;�������F��H��@�����E1�8��2��H���H��H��@e�01��;��������F����H��@����8�2��H�(�H��H�W@e��01��y;�����@�������M���H�5&�L����H�����������0���fDA�D$��d���f�H��?eH�
�K�H���01��
;��D��A���k����H��?eH�
�K�H�N�01���:��D��A���������E����H��@����8�1��H��IH������f�H�Y?eH�
#b�H��01��m:��D��A������H�!?eH�
��H���01��5:��D��A������H�
�H������DH��>eH�:��01���9��D��A���4���fD���������D��L��M��I��H��>e�0��~H�I@e1�L��H��l8��H�e>e�0H���1��9�������D���[����<F��f.�f���UH��AWAVE1�AUL�-x�ATSH��H��?edH�%(H�E�1�H�CPH��8���D�8��I��H����H�3H���E������M�<$M9�tH�3L����#�����uM�?M9�u�L���u6��H��H;�8���u�D��0���H�r>eE1�L�5��H�CPH��8���D�7��I��H���GH�3H���tE������M�<$M9�tH�3L���E#������M�?M9�u�L����5��H��H;�8���u�E�����D�0�����0���Dž4���E1�E1�D��D���%������A��A��u�A��A��u֋�4�����D�0�����0���H�E�dH+%(����0���H�Ĩ[A\A]A^A_]�@L��L�+�U#��L��H��H�S<eM��A������01��q7��M�?M9���������L��L�3�#��L��H��H�<eM��A������01��17��M�?M9��=����S���Dž8���H��@����z6��I��H��t}��8���D��D��H��A���"��H��L����C�����=M�4$M9�t@H��L���!����tiM�6M9�u�L���Y4����8��������Dž8����5��I��H��u�H�N;eH�����01��i6��Dž0����������f.�L���"��I�ؿH�i�H��H��:e�01��&6��M�6Dž4�������M9��H����Z���A��H��:eH�H����A������01���5��L���3������A��H��:eH�H�l��A������01��5��L���P3���^���H�a:eH�ٿH�r�01��y5��L���!3������@H�1:eH�o���01��L5��Dž0����������DH�:eH�?���01��5��Dž0�����������A���UH��AWI��AVA��AUI��ATA��SH��dH�%(H�E�1��?M��H����H��@8uE��u01�D�K,E9�u]H�U�dH+%(��H�e�[A\A]A^A_]�H��E1�M���H�B9ejH���I���01��Y4��Y�����^�H��M���H�
9eATI��H����01��$4��X�����Z�k���H��8eM��I��H�B���01���3��������;����@����U�1�H�5e�H��AVL�5b7AUL��ATSH��dH�%(H�E�1��=��I��H=���1ɺH��A�����H�5���v�����L��H�5O�c���[������sL��������H�5���5���L�����H�5��DE�����L����H�5��DE�1����L�����H�5��DE�������L��H�5r�DE����L��DE��.?���1�L��H�5T��x<��I��H=���1ɺH�5��H���s�����L��H�5g�DE��T�����L��H�5�DE��5���L�����H�5�DE�����L��DE��>��H�E�dH+%(uPH��D��[A\A]A^]�f���E����fDH��6eL��H��A������01��1����t>��@��U��H��AWAVAUATSH��dH�%(H�E�1��-��H����	�PI�ĉ����H�x�B����w	��fn�fo�<fo-=��fo%�<fp�H�G��H��H�LDfDfo�fo��H��@fo�f��fo��@�f��f~H�f:H�f:H�f��f:H��@�f��f~@�f:@�f:@�f:@��@��@��@��@��@�H9��h����Ѓ����Lc���I��)�J��1�A�H9���A��J�tA)ɍH�FD�9���A��J�tA)ɍH�FD�9�~dA��J�tA)ɍH�FD�9�~HA��J�t A)ɍH�FD�9�~,A��J�t(��A)��FD�9�~J�L0)‰�A1�1ҾL���n:��E�D$��A9��A�$�����@�I�T$������fn�fo;fo-+;��fo%;fp�H�B��H��H�t2Dfo�fo��H��@fo�f��fo��@�f��f~H�f:H�f:H�f��f:H��@�f��f~@�f:@�f:@�f:@��@��@��@��@��@�H9��h����ȃ����Hc�A��H��A)�H�<2D��G�x9���A��L�D2A)��xA�@E�9���A��L�D2A)��xA�@E�9�~gA��L�D2A)��xA�@E�9�~JA��L�D2 A)��xA�@E�9�~-A��L�D2(��A)�A�@E�9�~H�T20)��
�B1�1ҾL���8��A�Ņ���A�T$I�L$A�$�����B�������fn�fo9fo-C9��fo%(9fp�H�A��H��H�t1Dfo�fo��H��@fo�f��fo��@�f��f~H�f:H�f:H�f��f:H��@�f��f~@�f:@�f:@�f:@��@��@��@��@��@�H9��h����Ѓ����Hc�A��H��A)�H�<1D��G�x9���A��L�D1A)��xA�@E�9���A��L�D1A)��xA�@E�9�~gA��L�D1A)��xA�@E�9�~JA��L�D1 A)��xA�@E�9�~-A��L�D1(��A)�A�@E�9�~H�t10)‰�FH��0e��1�f�qH���3�+���;��~2H�=2e�H�
��H���L�01�L������L��L���R"��1�1ҾL���A6��D�A��E��~5H��1eH�
V��H�D�L�81�L�����L��L���"��D�A���A�T$I�L$A�$�����B����=��fn�fo�6fo-�6��fo%�6fp�H�A��H��H�t1D�fo�fo��H��@fo�f��fo��@�f��f~H�f:H�f:H�f��f:H��@�f��f~@�f:@�f:@�f:@��@��@��@��@��@�H9��h����Ѓ����Hc�A��H��A)�H�<1D��G�x9���A��L�L1A)��xA�AE�9���A��L�L1A)��xA�AE�9�~gA��L�L1A)��xA�AE�9�~JA��L�L1 A)��xA�AE�9�~-A��L�L1(��A)�A�AE�9�~H�t10)‰�F��D�ƿf�A1�f�QH���)�����~2H��/e�H�
�H��L�01�L���b��L��L�������1�1�L���3���3A�ƅ�~4H�f/eH�
ͣ�H���L�81�L�����L��L���{���3A��tsD��1�A�����H�l��w(���'DH�9�H�2-e�A������01��N(��L���f6��H�E�dH+%(��H��D��[A\A]A^A_]ÐH��1�1��
(���D��H�v�D��1��A�������'����E�D$��H����d���1��d���1��=���1�����1����H�z,eH����A������01��'���D����U4��D��U��H��AVAUATH��dH�%(H�E�1��#��H���g1ɺ�H��I���=������A�$���#I�D$�8��f�x�r1ɺ�L���N=�����VL�-�+eA�$A�u����I�D$D�HA���f�x����~2H�E-e�H�
��H���L�01�L�����L��L���Z��1ɺ�#L����<�����/A�$A�u����I�D$�x#��f�x�7��~2H��,e�H�
F�H��L�01�L���x��L��L������1ɺ�XL���I<�����iA�$A�u���I�D$�xX�(f�x�T��~;H�K,eH�
ܠ�H���L�01�L�����L��L���`��I�D$A�u�8�(f�x�;D�HA���Df�x�Y�x#�if�x��xX��f�x��QA�XA���j@H����+�jA�A��PH��)eH�.��0�1�A�������$��XZL����2��H�E�dH+%(��H�e�D��A\A]A^]�fDH�Y)eH��A�E1�j�H���0�f.�D�HH��E1��H�)ejH����0�e���f�A�A��H��(eH�A��A������01��#���8����jA�A���PH�-����H��A�A���j���@H��A���jH�����A�A����[���f.�jA�A�#��PH����|���H��D�HA���j#H����[���A�A�#��A�uH�C��1�A������#���<���H��A�#A���j����H��'eH����A������01��"������jA�A�X��PH�������H��D�HA���jXH������A�A�X���D���H��A�XA���j�>���H��D�HE1���jH����T���A�AUE1���j�	���A�AS��H�}�j�"����.��A�ARA���j����A�AQ��A�j#H�;����A�#AP��A�j���H�
+�1�E1��!�����A�WA���jXH���������U�H��AWAVH�����AUL�-��ATL�����L��SH���dH�%(H�E�1�L������H�L����=������L�5��L��L���=�����L�5��L��L���=�����H���L�5��L��H��L������~=�����L�=��L��L���d=�����L��L���Q=�����/L��L���>=�����?L�5u�L��L���$=�����HL�=p�L��L���
=�����QH�g�H��L��H�������<�����SH�5d�L����<��H�5U����XH�5(�L���<�����aL��L���<�����uL��L���<������L��L���{<������H�P�H�5S�L��H������V<������H�5J�L���?<��H�5;�����H�5
�L���!<������L��L���<������L��L���;������L��L����;������H��H�5��L��H�������;������H�5�L���;��H�5�����H�5؛L���;������L��L���{;������L��L���h;�����
L��L���U;�����H���H��L��H������4;�����BH���L��H���;�����H�5��L���;�����4L��L����:�����HL��L����:�����XL��L����:�����hL�-��H��L��L��H������:�����cL�-ܛL��L���:�����lL�-қL��L���n:�����uL�-ΛL��L���T:�����~L�-ʛL��L���::������L�-��L��L��� :������1�H�U�dH+%(��H�e�[A\A]A^A_]�@L��L����9��jL�
w�A����Pf�H��!eH�u��H�
]��01�����X�����Z�@L��L���9��jL�
g�A����P�L��L���u9��jL�
T�A����P�H��L���U9��jL�
W�A����P�m���L��L���29��jL�
K�A����P�J���L��L���9��jL�
4�A����P�'���L��L����8��jL�
�A����P����L��L����8��jL�
�A����P���L��L���8��jL�
��A����P���H��L���8��jL�
�A����P���L���c8��jL�
�A����P�{���H�5��L���<8��jL�
ؗA����P�T���L��L���8��jL�
ƗA����P�1���L��L���7��jL�
��A����P����L��L����7��jL�
��A����P���H�5��L���7��jL�
��A����P����L���7��jL�
��A����P���H�5Q�L���e7��jL�
s�A����P�}���L��L���B7��jL�
\�A����P�Z���L��L���7��jL�
C�A����P�7���L��L���6��jL�
+�A����P����H�5˖L����6��jL�
�A����P���L���6��jL�
�A����P����H�5ؖL���6��jL�
��A����P���L��L���k6��jL�
�A����P���L��L���H6��jL�
ԖA����P�`���L��L���%6��jL�
��A����P�=���H��L���6��jL�
ӖA���P����H��L����5��jL�
��A����P���H�5r�L���5��jL�
��A���P����L��L���5��jL�
��A���P���L��L���r5��jL�
y�A���P���L��L���O5��jL�
h�A���P�g���L��L���,5��jL�
e�A�
��P�D���L��L���	5��jL�
R�A���P�!���L��L����4��jL�
F�A���P���L��L����4��jL�
9�A�
��P����L��L���4��jL�
&�A���P���L��L���}4��jL�
�A���P����`$����UH��AWAVAUATL�%&�SL��L��H��dH�%(H�E�1��3�����"H���H��H���3������L�-��L��L���v3��A�Dž���H��L���`3�����jL��H���M3������H�5m�L���63������H�5^�L���3������H��dL����'�L��L���2������H��L9���L�cL�+L��L����2����x�L�
��A��H�6e�H�
ёH��zA������01��D��H�E�dH+%(�DH�e�D��[A\A]A^A_]��L�
a�A���f�H���dL�sx�L��L���52����~1H��I9�t�L�cL�+L��L���2����x�L�
+�A���C���L�
A�A���1���I��A���#���L��L����1��jM��A��PH�Be�H��H�
֐A������01��P��XZ����H��H���1��jI��A��P�L��L���u1��jM��A��P�I��A�����M��A�����M��A������!�����UH��ATSH�{�H��H��dH�%(H�E�1�����H����H��H�����H����H��H�����H����H���dL����H��L9�tgH�;�_��H��
t�L�
ђA��H�eH�
���H��x�01���������H�U�dH+%(�}H��[A\]��H���dL���f�H�;����H��uH��L9�u�1��DL�
a�A���r���I��A���d���I��A���V���I��A���H����O ��ff.�@��UH��AWAVAUATSH��H�$H��XdH�%(H�E�1�����H����H����L����I���1�L�y��L����(��L�����I��H����Dž��L���/��H���'�x.L�huA�}tހx.uA�}.uA�}t�f.�H��H����L��1�AU��L�=�I�پ�`(��YL��^����������x���H��I��L�2�1�AU��L����(��XL��Z�%��I��H����L����L����L���@.��H���"�x.L�puA�~tހx.uA�~.u	A�~t�f�L��L�����ƅ��L�ƅ��H����H;���t����D�;H�I���p��t+����uZ��������H��H;���u��[�����t����u*������������u�A�G�<v�A��_t�D1�H����L���������H��e1�M��L��H�1��01�����L��Dž�������-��H������L����L��L������%��L���,��H������fDL����%��H�E�dH+%(u_����H�e�[A\A]A^A_]�@H�AeL��H���1��01��\���O���H� eH��1��01��>��Dž����������H��eL���H�'��01�����)���H��eL��H�Ύ1��01�����Dž�������>���ff.�UH��AWAVAUATSH��H�$H��H�$H���H�
��H����fHn�H�
��H�4�dH�%(H�E�H�\�I��fH:"�H�e�)� ��fHn�H�r�fH:"�H�Z�)�0��fHn�H�
r�fH:"�H�K�)�@��fHn�H�e�fH:"�H�K�)�P��fHn�H�
[�fH:"�H�>�)�`��fHn�H�B�fH:"�H�t�)�p��fHn�H�
=�fH:"�H��)����fHn�fH:"�H��)����fHn�fH:"�1�)�������L���@ ��H�����L��1��x��A�Ņ�����H�5������������AD��1�H�5��g	��A�ƅ��L�ǺH�5��I'��D��H���L�'�����H�5̍D�����H�� ��L���������DL�;1�H���L���L������������AL��D��1�����A�Dž���H�sH��H�������H����D��H���&��H����D��H���x���I9��w�����H�5_�D���������;���AD��1�H�5��H��A�ƅ��-�ǺaH�5��*&��D��H��`������H�5��D���9(��I��H����D������PH�Ie1�L��1�H�c�L�����3�]��1�L��H�,�L��������%L���t�����uH�%�fD�3L��H���1�1������f�H��eH��1�1��3����L����1�L��H���L�������y��3L��H���1�1�E1��������fDH�ieH���1�1��3����D����H�De1�1�H����3�b���v���DI��H�eH��1��01��<��A�$�H�E�dH+%(�AH��� L��[A\A]A^A_]��H��eH�z�1�1��3��
�����f��;���H��e1�1�H�y��3��
������DH��eH���1�1��3�
�����f.�H�YeL��1�1�H�+��3�t
������H�1eL��1�1�H�+��3�L
���`����H�	e1�L��1�H�i��3�$
��D���|����0����3L��H���1�1�E1��	������H��eH�]�1�1��3��	�����������UH��AWAVAUATH��H�$H��H�$H��dH�%(H�E�1�L�����L�����H����L�� ��I��L���
��1�H�5_�L���������/H��0��1�E1�L��H����H�L������A�ƅ��RH��8��H��H�#��H9�u:H��h��H�_�H�E@�H9�uH��p��H� H9�tH�S�H��
e1��01����L��L�����j���1�L���H�y�L����������L���������I��$�I��$�L��fo�H�BH�A�$��)��H�E�dH+%(��H�� D��A\A]A^A_]�f�H��eH���1�A������01�����C���@H��eL��H�r�1��01������^����H��eH�s�1��01�������A������^���H�peL��H�n�1��01�����=����Q�����UH��AWAVAUATSH��H�$H��H�$H��8dH�%(H�E�1�H�����H���e���H��� I�����I��H���xL�����L�����H��1�E1�jL��L��L��L��A����L�ɇA��XZE���^H�#��I�EH�HH9�u7H�E@�H�HHH9���H�HPH� H9�t&H���@H�H�H�Be1��01��g��L������L��L�������1�H�پH��L���w������.L���g�����u[I��$�I��$�L��fohH�BH�A�$�����H�E�dH+%(�H�e�D��[A\A]A^A_]�fDH��
eL��H�B�1��01�����f.�H�i
eH��k1�A�����L������01��z������DH�9
eE��L��H����01��N��L�4�L��L�����H�5rkL�������������DE����DH�y����A���������H��	eH��H���1��01�����������f.����UH��AUATL�m�L�%�L��L��H��dH�%(H�E�1��E����L��L������E���H�U�dH+%(u
H��A\A]]��4��@UI��H��H�V�H��AWAVAUATSH��hH��p����H��x���dH�%(H�E�1�H��e�01��������H���2�@H��x���1�H��I���������>I�$A�1�H�P H����f�H�HL�,�M��t8I�}E��A���0��f�L���A*�H���|��M�mM��u�I�$H�P H��H9�w�H���0H�M�H�E�L�}�H�M�L�e�H�HH�]�H��H���G�G��I���f�L;e�t:L���
��H�[H���L��L�#���A�H�u�L���'��A���t��[��I��H����L���g��I��H�����@H�����H��tf��/H�x�@���H��u�H��A�L��L��jA�1�L�����L��E��
���XL��Z���L�������E����.���L�e�H�eH���1�A������01��-��L�����H�E�dH+%(�iH�e�D��[A\A]A^A_]�H�E�H�H�P H�E�H�M�H9������L�e�H��x���L��L���������I�$A�1�H�P H��tYL�}�I��H�HJ��H��t8@H�;E��A���
��f�L���A*�H���e��H�[H��u�I�$H�P I��I9�r�L�}�H��x���L��L���%����u"E1�����L�}��Q���L��L�e��������H��eH��p���1�H�e��0H��1����H�5m�H�������t�H�5n�H���z����E����H��eH�څ�A������01��������t
��H�meH�Å1�A������01�����[�����UH��H��H�w dH�%(H�E�1�H��tH�E�dH+%(u H�����H�E�dH+%(u�1��������UH��H��dH�%(H�E�1�H�E�dH+%(uH�wH����������Uf�E1�H��AWAVAUATSH��hH�u�dH�%(H�E�1�H�)E�)E�)E��H�ceH�OI��H��H�'���01��t���A�E����I��H����H�=ä���I��H����H��1�L�}�L���5�H�SH�u�L��L���r���A������H�K���Mu3�y1����Mu$�y2�z��Mu�y3u�y��DH��eH����D�E��01����D�E�L��D�E��	��L������L���X���L���@���D�E�H�E�dH+%(�.H��hD��[A\A]A^A_]��1�L��1����A����tEH�K�l���@�y�5���H��eA�m�H����01�����E1��Y���D�L���K�I�$I9�tw�DH���H��H�5��H�M�H�U�H�@0H���H�U�H�M���uoH�H��I9�u�I�$H�E�I9�t#H�u�1�L���p�H��u{H�E�H�H�E�I9�u�H�DeH�K�H�*��01��[���H�KA��t���@H�=�eH��H�U�H�M��a��H�U�H�M�H�H��I9��1����i���@L�H(H��(H��p���L9��m���H�SI�yH��L�M�H��x����}
��L�M���tM�	L;�p���H��x���u��2���DL��1��E����H�te�H�{��0����A�mD�E�����fD�y�8����w���A������L������A������	��fD��UH�=4�H��AUATH��dH�%(H�E�1���H�5'�H�=(�I���O���E�H����M����L�m�1�H��H��L������tH�U�dH+%(ubH��A\A]]Ð1�L��H�TL�������u΋M�t�H�YeA�H�t�1��01��q���������f.��������$��@��U1�H��ATL�%����L��H��dH�%(H�E�1��2�����tH�U�dH+%(u-L�e���f�H�E�dH+%(uL��L�e�1���"����ff.�f���UH��AUATL�%6SH���dL�k(H��dH�%(H�E�1�H�3L���3�����uH��L9�u�L�%_���1�L�������t!H�U�dH+%(u5H��[A\A]]��H�E�dH+%(uH��L��1�[A\A]]�m�������UH��AWL�=�AVH�����AUfHn�H��`���ATSH��(dH�%(H�E�1�H����HDž����fH:"�H����)����fHn�fH:"�)�����DH���������H������H����Dž��H��L�(H�@8H9A8��H�5xH�=y���I��H�������-s��I��H����H�@8L��fHn�I�D$HfHn�fl�I�D$xAL$8fl�fHn�I��$�AD$HfHn�fl�fl�AL$xA�$��d��M�t$XL��L��I�$A�D$��A�E1�1�fE�L$lH��~L��L�����������AH�B�eL�5;�eH���H�����oH��L��L��fo����I��)�����oXH�����)�����o` )� ����oh0)�0����op@L��0���)�@����oxP)��)�P��������I���uaH���x���L�����D�����E���aH����������H�H���d�0����H�P��1��������H���7���������@H�9�dL��H��}�01��Q������@H��dH�b��1��3�,���H�������3�H�g�H�1��
���D�����H�U�dH+%(��H��([A\A]A^A_]�fDL��������L�%<�e@I�D$M�$$H�5�|M�t$�I�D$H�=�|M�l$�I�D$I�D$@fHn�M�d$Dž��fl�AD$@���H��H���Y���I�D$ L��I��$��<��L��fA�T$4�I�L����H����������I��$�H����H��E1�DH��A��H�:u�D9����t?�Z@H��`���H���H�
�L���������H�CH��H�����ofo�����L��)�`����oP)�p����oX )]��o`0)e��oh@)m��opP)��)u�L�}�L��������i���H��dM��M��L��H�b���01��,����������������f�D����D9������I��H�n�eI9��%���1�����fDH���dL��L��`���H�0���01������H���1������E1�D�����L��H�Y�H�J�d��01��l��_���H�0�d�/���D�����L��H�����L��� �H�	�d�������ff.�@��UH��AWAVAUI��ATSH��H��(L�'L�~dH�%(H�E�H�M�4$H�H�E�L��L	�tCM��u&M���]H�M�M��M��H�����M���7L��L���<����u�M�L$L�s L��L	�tGM��u"M���1H�M�M��H����^fDM���L��L��L�M�����L�M���u�M�L$PL�s(L��L	�tGM��u"M����H�M�M��H����fDM����L��L��L�M����L�M���u�M�L$ L�s8L��L	�tGM��u"M����H�M�M��H�c��fDM����L��L��L�M��8��L�M���u�M�L$HL�sHL��L	�t?M��uM���aH�M�M��H�C��Y�M���GL��L��L�M����L�M���u�M�t$0L�c@L��L	�tnM��uBM��uQM��t=H�M�M��M��H�'��H��d��01���������V@M��t�L��L���x�����tH�5'xL���e�����u�I�EM��L��H����H�q�d�01���1�H�U�dH+%(��H��([A\A]A^A_]�@M�������L��L����������������M���������f.�M�������#���f�M���N����k���f�M��������������ff.�f�UH��AWAVAUATI��SH��H��L�.L�7dH�%(H�E�1�L��L	�tdM��uGM����M��M��H�!�L��fDH�a�d��01��������{f�M����L��L�������u�L�kM�|$L��L	�t?M��u"M����M��M��H���L����M���gL��L�������u�L�kM�|$L��L	�t?M��u"M���aM��M��H���L���<���@M���?L��L���\�����u�L�kM�|$L��L	�t?M��u"M���9M��M��H���L�����@M���L��L��������u�L�k M�|$ L��L	�t?M��u"M���M��M��H���L�����@M����L��L�������u�L�k(M�|$(L��L	�t?M��u"M����M��M��H���L���L���@M����L��L���l�����u�L�k0M�|$0L��L	�t?M��u"M����M��M��H�l�L�����@M����L��L��������u�L�k8M�|$8L��L	�t?M��u"M����M��M��H�T�L�����@M���wL��L��������u�E�D$@D�K@E8��|E�D$AD�KAE8���1�H�U�dH+%(��H��[A\A]A^A_]�@M���!���L��L���d������d����	����M���n���L��L���<�����������V����M�������L��L��������������~����M�������L��L���������������M�����L��L�����������������M������L��L��������,�������M���6���L��L���t������T��������M���^���L��L���L������|����F����L��H���H�W�d��01��y�������q���L��H������.���ff.���UH��AWAVAUATI��SH��(L�5��edH�%(H�E�1�M�����E�I��H�k�eL�=��@I�6I�<$�����u4A�EL��L������u\H���dI�$L����01����E�L�sH��M��u��}�t01�H�U�dH+%(uFH��([A\A]A^A_]�f��������H�B�dI�$�H�2��01��Y�����������ff.�f���UH�5�qH��AWAVAUI��ATI��SH��(H�0dH�%(H�E�1��������L�5w�eH�p�eM�����E�L�=Cq@I�6I�<$�l�����u4A�EL��L��������utH�}�dI�$L����01����E�L�sH��M��u��}�tH1�H�U�dH+%(u[H��([A\A]A^A_]ÐL�5�eM��tH�
�e�a����������H��dI�$H�/�1��01��������������UH��AWAVAUATSH��H�$H��������H���dL�s0dH�%(H�E�1��X�I��H�����3L����@��I��H���gH�s1�1�H��H����L���H��L9�u�H�\�dL���L����L�����1�L����fn�H�L��	H�s�H�H�Cfp��L��L����L���fօ��ƅ��H��H����HDž����L��L��L����H���dH9��y���H�R�dH���H�0L��H���m���H���H��ttL�kƀ�M����H���1�L�xH���L�p0�L��H��I���P�L9�t{I�wM�G��I�?���H��H��u�H����
��H��dH�w���01��!�L��E1����H�E�dH+%(uaH���L��[A\A]A^A_]�f�H������H���H���H���dH9�t����H���dH�����01�����q���UH��AWAVAUATSH�_8H��dH�%(H�E�H��H�H$��uH�_ H�1�dH�
�1H��r1��01��H�L�cM����1���L�����I��H��tvA��$u�I��$�I��$�M�t$@L�x-I��$�H�@ L������AV��H�F�I��H���dAWM��1����01���XL��Z�/�I��H��u�H�E�dH+%(uH�e�[A\A]A^A_]��T�@��UH�
�0H��qH��AWAVAUATSH��1�H��(dH�%(H�E�1�H��d�01��9�L�c0M����1�f.�A��$uI��$�1�H�H I�D$pH��tH�I�t$(I��$�H�U�I��$�L���D�oH�u�L�x-�i��H�U�H�u�E��I��H���d��1�RH�H�V�01�AWAV��H�� L�����I��H���_���H�E�dH+%(uH�e�[A\A]A^A_]��,�f.�f�UH��ATH��dH�%(H�E�H��H�@$����H��8H�E1�H����H�G0H9G0��H���H;DPeH���H���ttH;UPe��H;pPe��H;�Pe��H;�Pe�H�y�dH�Vp��01��������H�U�dH+%(�$L�e���fDH;�Oeu�H;
�Oe�v���I�����H��H���5���I����1���H;�Oe�L���H;
�Oet�H;�Oe�C���DH;�Oe�1���H;
�Oet�H;�Oe�(���DH;�Oe����H;
�Oe�l���H;�Oe�	���f�H;�Oe��H;
�Oe����7����H� �d���H�P�dA�L��H�H���01��b�����������#�UH��AWAVA��AUATSH��dH�%(H�E�H��H�@$����H��8H�H���Ic�E1�E1�E1�H��H��H��LeH���!E�����Z�I��H��H���H�G0H9G0t�H���H;NeH���H����~H;Ne��H;*Ne��H;ENe��H;`Ne��H;C�H;C8�H;C`�0H;���DH;���^I��I���F���DH;qMe�u���H;
lMet�H;{Me�l���DH;qMe�Z���H;
lMet�H;{Me�Q���DH;qMe�?���H;
lMet�H;{Me�6���DH;qMe�$���H;
lMe�`���H;wMe�����H;qMe�	���H;
lMe�8���H;C�����@H;S���H;K ����H;C8���H;S@����H;KH�����H;C`����H;Sh����H;Kp����H;�������H;�������H;�������H;�������H;�������H;����������DE��uDI����M�EM9���1�H�U�dH+%(��H��[A\A]A^A_]�H� �N���M9���M��t�H�"�dL���H���01��:�������H���dH�*���01���������x���E���m���E1�A�L��H�&�H���d��01�����������=�����M��L��H�]���L��H���ff.�@��UH��AWAVAUATSH���
dH�%(H�E�1�����H����H�56kH��I���Y�H��IeL�-SKeA��H��p���H��(���E����L��L�%gJe�����M��H��(����A�I�|$I��(���M9�u�L�%�HeM��I�}I��(�w��L9�u�I��$�I��(�a��L9�u�H�E�dH+%(�UH�e�D��[A\A]A^A_]��H�5ujL����A�ƅ��U���1��/�����GH��(���H�����H����H��I��H���!H��d�8~H���dH�0��1�fo��H��`�����L��0���H�����H�sHe�H�L��)�����L�-�Ie���I�$H�����H�����I9��#L�����L�� ���H����H�����H�����L�-�HeL���fAnMI�EH��L��L��ƅ����fp��H��`���fօh���������TjL��L��E1�SE1�1�1��S��^_H���1I�}��H��0�������I�}I�E���H��@���H��t"�Q(�D����q(9�t
�B��r���v�fHn�I��(fH:"�H���AE�L9� ����,���H�����L��8���fAnUI�EH��L��L��fp��H��`���fօh����������yj1�1�E1�SE1�L��L���x��ZYH���VI�}�$�H��0������I�}I�E����H��@���H��t'�Q(�f.�����q(9�t
�B��r���v�fHn�I��(fH:"�H���AE�L;�����.���H�����H������H�H�����H9��������L�����L�� ���H����L�����M�4$M9�t0M���1�L���_��H�X�d�8��M�6M9�u�M�4$I�D$I�ƐL��L���L����L���������A��	���f.�A�����H��EeL�-$Ge���L��L�����L�� ���A�����H�������H���d�H���01��������L�����M�6M9������@���L���_������b���L��L������1�L���B������E����L��E1��*�����A���>���A������/�f.�D��UH��AWAVAUATSH��(dH�%(H�E�1����H���H�5�eH��I���	�A��H��p���H����E��tkL��H�!Fe���L������H�������@H�;H��0�L��L9�u�H�E�dH+%(�\	H��(D��[A\A]A^A_]��H�5]eL���y�A�Dž��{���1��������H����H�����H���h�H����H��H����H���d�8~H���dH�0���H��`���1���L����I���H�L��HDž����d���I�$I9��^D����H�����L����I��L�=�De1��L��L���H�f�H�]�dL����fAnH����HǃH��P���I�Gfp��H�����L�� ���ƅ����H��`���fօh����{���cA�G(1�L��L�牅���H�2�d��C�����;I����H�������I�I�G���H����H��t%�Q(������q(9�t
�B��r���v�fHn�I��0H��EefH:"���AG�I9�����H�H9�������L����D����L�����M�4$M9��y���H��bM���1�L���$��1�L�����H��d�0����A��P
�I���	��I��@���A��T
��I���	��I��H��hH��DeL��I����x��H���d�0���)A��P
�#I���	�I��@���A��T��I�����I��H���Idž�L�����H�zBeL��H�@ I�����H���d�0����A��P
��I���	�uI��@���A��T�vI����VI��H,�3IdžL�����I��L���1�H�j�d�0���<A��P
��I���	��I��@���A��T��I�����I��H,��IdžL����L��Adž��_��H���d�0����A��P
�-I���	�
I��@���A��T��I�����I��H��_Adž�����L������H��@eL���~�@efH:"@ A�����L���A��H�*�d�0���6A��P
��I���	�II��@��&A��T�I�����I��H���M�6M9��������@H���d�H�ղL����01����L������A��������1�H��_1����L������U���1�H�K`1��}��L���ż�����1�H�K`1��`��L��証���6���1�H�H`1��C��L��苼�����1�H�H`1��&��L���n�������1�H�I`1��	��L���Q������L�
_A�|H���d�H�
�^H�`7A������01������
���L�
�A����L�
�A���L�
_A���L�
�^A���L�
�^A���L�
_A���u���L�
�^A���c���L�
�^A���Q���L�
�^A���?���L�
{^A���-���L�
V^A������L�
��A�E�	���L�
��A�"���L�
K^A����L�
!^A�����L�
�]A����L�
�A����L�
ױA����L�
�]A�����L�
�]A���y���L�
��A�	�g���L�
�]A�9�U���L�
]�A�&�C���L�
�A�$�1���L�
��A�C����L�
W�A�A�
���L�
a]A�=���L�
7]A�;���L�
a�A������L�
]A������L�
]A����L�
�\A�����L�
��A�����L�
_�A���}���L�
%�A���k���L�
�\A���Y��������UH��AWAVAUATSH��HdH�%(H�E�1����H��H�
�
I��fHn�H��fH:"�H�Y)�����fHn�fH:"�H�?H������)�����M���H�5�[L��L��������A�ą�tkL��H��=e�<��L��L������DH�;H��(���H�C�L9�u�H�E�dH+%(��H��HD��[A\A]A^A_]��L���h���L���@��H��H��thH���d�8~H���dH��H�0����I�EL������H������H�����H������H������H��A�A�ą��'���I��L;�����u�����A������
���H�n�dL�
�ZA�_H�
�[H�3�A������01��o�������5��DU��H��AWI��AVL�5I<eAUL������ATL�����SH��`���H��(dH�%(H�E�1�H��p���H��H������1��H�L��HDž����d���1��L���A~�H�H������H��L��fp��L��H�����H�����H�l�dƅ����H��@���A�fօh���������I�FH��`���������H�~�d1�L��L��臸������H������I�~I�F����H�����H��t'�Q(�f.�����q(9�t
�B��r���v�fHn�I��(H��<efH:"����AF�I9������L�����1�H�U�dH+%(uAH��([A\A]A^A_]�@H���dH�����01����L��謺���������`����UH��AWAVAUL���ATI��SH��1�H��(L�=��dL�5��ddH�%(H�E�1�H��YI�H��YI����L��L�������E�����1�L�����1�H�����H���d�0����L���A�������I����d���n���L�-�?H��L���\������I���L�=�VL��H�@ H����3������I���H�5�VH�x-������lI�~(d�aL���������I���H��������д��L��H����������I���L��H�@ H����������I���H�5PVH�x-�������I�~(d��L���I�������H���H�E��,���?���L�5�UH��L���-������L�E�L��I���H�@ H���������nL�E�H�5�UI���H�x-�������KL�E�I�x(d�<L��������I�Dž��PH����,�<蛳��L��H��������$I���L��H�@ H����n�����I���H�5GUH�x-�O������I�(d��L��������I����H����,������L��H���������I���L��H�@ H�����������I���L�5�TL��H�x-������bI�(d�WL��������I���QH����d�@�{���L��H���p�����(I���L�%BL��H�@ H����G������I���H�5!TH�x-�(������I�(d��L���������I����H����d�����L��H����������I���L��H�@ H����������I���H�5�SH�x-������eI�(d�ZL���`�����I���TH����d�C�Z���L��H���O�����+I���L��H�@ H����-�����	I���H�5 SH�x-�������I�(d��L���������I���aH����d�P�Ͱ��L��H���������8I���L��H�@ H���������I���L��H�x-�������I�(d��L���J�����I����H��������A���L��H���6������I���L��H�@ H����������I���L��H�x-������~I�(d�sH���L����@$����H���L���H��u)�h�L��L�����L�����H���H��tAL���L��M�l$��]��L��L��H����[��M;nu�L���=��I�F���ۺ��H�E�dH+%(�.�E�H��([A\A]A^A_]�fDL����G���@M�I�1�H�S1��Z��L��袮���S���DL�
SA�H��dH�
�R�H��)�01�����E������^���f�L�
�RA��L�
�RA��L�
�RA�=�L�
�RA�D�L�
~RA��z���L�
lRA�!�h���L�
ZRA�(�V���L�
HRA�/�D���L�
6RA�6�2����5��D��UH��AWAVAUL���ATI��SH��1�H��(L�5��dL�=��ddH�%(H�E�1�H��QI�H�_I�趰��L��L������E�����1�L��趷��1�H���\��H���d�0���:H�������IH����xd�8H�(,�*���������SH����zd�BH�x(d�7H���L����@$����H���L���H��u+�jf.�L��L�����L�����H���H��tAL���L��M�l$��}��L��L��H����{��M;nu�L���]��I�F�����H�E�dH+%(���E�H��([A\A]A^A_]�fDL����E���@M�I�1�H�(P1��z��L���«�����L�
'PA�H�)�dH�
�O�H��&�01��=���E������f���L�
�OA�������ff.�f���UH��AWAVAUL���ATI��SH��1�H��(L�5U�dL�=6�ddH�%(H�E�1�H��OI�I��i���L��L���N�E�����1�L���i���1�H�����H�X�d�0����L���L�5�LI�������L��H���������I���L��H�@ H�����������I�}(���L�����H���I��詪��L��H���������I���L�59LL��H�@ H����u�����rI�}(d�gL��L�-�4�3��H���I���D���L��H���9������I���L��H�@ H����������I�(,��L������H���I�����L��H����������I���L��H�@ H����������I�(���L�����H���I��萩��L��H��������dI���H�5�9H�@ H����_�����>I�~(��0H���L����@$����H���L���H��u$�cL��L�����L���e��H���H��tAL���L��M�l$�����L��L��H�������M;nu�L�����I�F���C���H�E�dH+%(���E�H��([A\A]A^A_]�fDL����L���@M�I�1�H�pL1��¾��L���
����8���L�
oLA�[H�q�dH�
L�H�#�01�腾���E������f���L�
3LA�g��L�
$LA�O�L�
LA�U�L�
LA�a�������UH��AWAVAUL���ATI��SH��1�H��(L�=u�dL�5V�ddH�%(H�E�1�I�I�茪��L��L���q��E����t1�L��茱��1�H���2��H�{�d�0��� L���L�-�1I����&���L��H��������aI���L��H�@ H���������?I���H�5�HH�x-������� I�~(��L��L�5fH���H���I��覦��L��H���������I���L�%6HL��H�@ H����r������I���H�5HH�x-�S������I�(d��L�����H���I���)���L��H��������UI���L��H�@ H���������3I���H�5�GH�x-�������I�(d�	L�����H���H�E�貥��L��H��������L�E�L��I���H�@ H����������L�E�H�5]GI���H�x-�^������L�E�I�x(d��L�����H���I���0���L��H���%������I���L��H�@ H���������vI���H�5�FH�x-�������WI�~(d�LL�����H���I��躤��L��H��������1I���L��H�@ H���������I���H�51FH�x-�n������I�~(d��L���3���H���I���D���L��H���9������I���L�5H4L��H�@ H����������I���H�5�EH�x-��������I�(d�zL�����H���I���ǣ��L��H��������bI���L��H�@ H���������@I���H�5tEH�x-�{�����!I�(d�L���@���H���I���Q���L��H���F�����AI���L��H�@ H����$�����I���H�5EH�x-������I�~(d��H���L����@$����H���L���H��u%�d@L��L�����L���
��H���H��tAL���L��M�l$��m���L��L��H����k��M;nu�L���M���I�F�����H�E�dH+%(��E�H��([A\A]A^A_]�fDL����K���@M�I�1�H�F1��j���L��財�����L�
FA��H��dH�
�E�H���01��-����E������f���L�
�EA����L�
�EA���L�
�EA���L�
�EA���L�
�EA���L�
�EA���t���L�
~EA���b���L�
lEA���P����k��ff.���UH��AWAVAUL���ATI��SH��1�H��(L�=ջdL�5��ddH�%(H�E�1�H�)EI�H��DI����L��L������E�����1�L�����1�H�����H�ջd�0���:L���L�-+L��I���H�@ H����r�����xI���H�5eBH�x-�S�����YI����?���L��H���4�����:I�~(d�/L��L�%<0��L��I��H���H�@ H����������I���H�5�AH�x-��������I����Ÿ��L��H��������rI�~(d�gL��L�5FA�u���L��I��H���H�@ H����x������I���H�5LAH�x-�Y�����nI����E���L��H���:�����OI�(d�DL�����L��I��H���H�@ H���������&I���H�5�@H�x-�������I����Ϟ��L��H����������I�(���L��膻��L��I��H���H�@ H����������I���H�5c@H�x-�j������I����V���L��H���K�����~I�(d�sL��L�%�?�	���L��I��H���H�@ H���������NI���H�5�?H�x-������/I����ٝ��L��H���������I�(d�L��蓺��L��I��H���L�E�H�@ H������L�E�����I���H�5;?L�E�H�x-�k�������L�E�I����S���L��H���H�������L�E�I�x(d��L���	���L��I��H���L�E�H�@ H�������L�E����jI���H�5�>L�E�H�x-������GL�E�I����ɜ��L��H��������$L�E�I�x(d�L������L��I��H���H�@ H���������yI���H�5[>H�x-�c������ZI����O���L��H���D������;I�}(d�0H���L����@$����H���L���H��u$�cL��L������L���M��H���H��tAL���L��M�l$�譸��L��L��H������M;nu�L��荸��I�F���+���H�E�dH+%(��E�H��([A\A]A^A_]�fDL����L���@M�I�1�H�X?1�誱��L�������L�
W?A��H�Y�dH�
�>�H��01��m����E������f���L�
?A����L�
?A���L�
�>A���L�
�>A���L�
�>A���L�
�>A���t���L�
�>A���b���L�
�>A���P���諽��f.����UH��AWAVAUATSH��8dH�%(H�E�1�����H��H�
VI��fHn�H�7fH:"�H�	)�����fHn�fH:"�)�����M���H�5!<L��L������@���A�Ņ�tyL��H�?(e�ڭ��L��L����K���H�{H��(蛫��H�C�H�{����H�C�L9�u�H�E�dH+%(��H��8D��[A\A]A^A_]�DL�����L���м��I��H��tUH�q�d�8~H�5�dL��H�0�j���I�$H������H������H������L���A�Ņ��(���H��L9�u�����A���������H��dL�
J;A��H�
Q=H���A������01������ ����ػ���U��H��AWL�=�&eAVL������AUL��`���ATL�����SH��#eH��(H������dH�%(H�E�1�H��p���L��H������1��H�L��HDž�����蕪��D1��L���H�H������L�����H�����H�ߴd�x�H���dfAnH��@���L��L��I�GH������ƅ����fp��H�����H��`���fօh����2������H���d1�L��L����������I����H�������g���I�I�G�J���H�����H��t&�Q(�f�����q(9�t
�B��r���v�fHn�I��(H�'eH��PfH:"����AG�I9�����L���c���1�H�U�dH+%(uMH��([A\A]A^A_]�H�I�d����@H��dH����01�����L���������������UH��AWAVAUI��ATI��SH��1�H��xH��h���dH�%(H�E�1�艠��1�I��p����+���H�t�d�0���yI�}(萛��I��H���#L�}�1�H��:1�L��� 訣��H�]�1�L�}�I��H�E�H�@I9E(��H�]�I���L�s�Օ��H��L���ʹ�����bI���H�sH�@ H���觹�����?I���H�s H�x-苹����|������H�^�d�xtI�EpH�H9�PH�A�d�xucL��I���'���I��H����H�}�L��H��91�� I�Ĩ躢��H�E�(I������L�
d�A����f�I���I���H�����4���L�p�H��8H�E�I9��l���L��p���1�L�}��SI�F I��I��H�@ K�t<H���萸������I�F(K�t<H�x-�v�������M�6H��L;u���H�M�H�}�I�ؾ H�ƌ1��ߡ��M�,$I9�w�L�
׌A��H�R�dH�
�8�H���01��f���Dž|�������H�E�dH+%(���|���H��x[A\A]A^A_]�@L�M�A���@H���d1�H���HD�@1�����L���H����Z���L��L�}�L��p���H9��&���L�
�A���>����L�M�A���(���I��uJH��h����A���H��d�x�0���L�
͋A���f.�L�M�A������L�
Z�A������L�
�7A����������UH��AWAVAUI��ATI��SH��1�H��xH��h���dH�%(H�E�1��ɜ��1�I��p����k���H���d�0���yI�}(�З��I��H���#L�}�1�H�71�L��� ���H�]�1�L�}�I��H�E�H�@I9E(��H�]�I���L�s����H��L���
������bI���H�sH�@ H���������?I���H�s H�x-�˵����|������H���d�xtI�EpH�H9�PH���d�xucL��I���g���I��H����H�}�L��H�61�� I�Ĩ���H�E�(I��	����L�
��A����f�I���I���H�����t���L�p�H��8H�E�I9��l���L��p���1�L�}��SI�F I��I��H�@ K�t<H����д������I�F(K�t<H�x-趴������M�6H��L;u���H�M�H�}�I�ؾ H��1�����M�,$I9�w�L�
�A��H���dH�
�4�H�:�01�覦��Dž|�������H�E�dH+%(���|���H��x[A\A]A^A_]�@L�M�A���@H��d1�H� ��HD�@1��@���L��舏���Z���L��L�}�L��p���H9��&���L�
Z�A���>����L�M�A���(���I��	uJH��h���	�A���H�^�d�x�0���L�

�A���f.�L�M�A������L�
��A������L�
�3A������H������UH�
�0H��0fHn�fLn�fLn�H�
�0fL:"�H��0H��AWfHn�fHn�AVL�����fL:"�H��0AUI��H�5�efH:"�ATL���H����SH��L��H���	dH�%(H�E�H��H���fHn�H�K0fo�fH:"�H��fHn�H�F0fo�fo�fH:"�H�$0fH:"�H�0fH:"�fH:"��<H���H�H�׺� H�ȹ��H�H��/H��)�����fH:"�H��/�8���)���fH:"�)����������������������D)�0���)�@������)�����)����HDž��HDž����HDž0���HDž���HDž����HDž(���)�P���H��d����f�P�����������������(���D)�����)�����D�(���D�8����H���)�������������)� ���)�0���)�@���)������X���)����)���x��������)�P���HDž���HDžx���HDž ���HDž���HDžp���HDž����G���1��0���H�i�dH�="�d�o(�o`�o@0/�oh go G0�[���L��L���@��������xH����L��L���/���������H���L����@$��uL���H���L���H��u)�h�L��L���e���L���ͺ��H���H��tAL���L��M�g��.���L��L��H����,���M;eu�L������I�E��論��H�E�dH+%(u������H���	[A\A]A^A_]�����ff.�@��U��<H��AWAVL����AUL���ATI��H�5�eSH��L��H��dH�%(H�E�1�H�r�d�H�� H��f�P趓��1�����H�=x�d�ӳ��L��L����������x1�1�L��L��������H���L����@$��uL���H���L���H��u)�h�L��L�����L���M���H���H��tAL���L��M�g�讦��L��L��H���謶��M;eu�L��莦��I�E���+���H�E�dH+%(u�����H��[A\A]A^A_]�蟬��ff.�@��U1ҹ-H��AWAVL��`���AUL���ATI��H�5�eSH��L��H��dH�%(H�E�1�H���d�H�� H��f�P�9���1�����H�=��d�V���L��L���;��\�����x1�1�L��L������\���H���L����@$��uL���H���L���H��u$�[L��L���m���L���շ��H���H��t9L���L��M�g��6���L��L��H����4���M;eu�L������I�E�軓��H�E�dH+%(u��\���H�Ĉ[A\A]A^A_]��/���ff.�@��UH�
	H��)fHn�fHn�H�
})fHn�fLn�H�
g)H��AWfHn�fHn�AVL��p���fH:"�H�X)AUI��H�5IefL:"�ATL���H�a)SH��L��fH:"�H����H��xdH�%(H�E�H��H��`���fHn�H�8)fo�fo�fH:"�H�)fH:"�H��(fH:"�fH:"��-H���H�H�׺� H�ȹ��H�)�����H��H��(�8���)�0���fo�fH:"�H�c(���fH:"�H��(D)�����fH:"�)�����)���)��)����)����)� ���D���������������HDž��HDž����HDž0���HDž���HDž���������H��d)�0���f�P)�@���)�P�������������������)�����)�����)�����)������(����8����H���HDž(���HDž���HDžx���HDž ���艱��1��r���H�=k�d�Ʈ��L��L������l�����xH��`����	L��L���Z��l���H���L����@$��uL���H���L���H��u$�cL��L���ձ��L���=���H���H��tAL���L��M�g�螡��L��L��H���蜱��M;eu�L���~���I�E������H�E�dH+%(u��l���H��x[A\A]A^A_]�菧��f.�D��UL�
�L�<)�H�
m)H��|H��ATSH�}�H��H�E�ddH�%(H�E�1����H��(LN�1��@�����xLH�M�3H��(1���3���H�}��j���H�}��E�輔��H�E�dH+%(uH��D��[A\]ÐA��������æ��H��ff.����UH��H��dH�%(H�E��!�e����e��
H�E�dH+%(uP��f��=
�e1Ҿ$1��̅���=�e1�1��$踅��H�E�dH+%(u�=ˇe1Ҿ$1��锅������ff.�@UA���H��AUA��ATH��h���H��H��dH�%(H�E�1��H�H���H��`���E�����E�H�u��s���foK��M�aH�E�)�p����E�谤��H��`���A�����1�I��������*1��M���A�ą����Ǻ(�1�迟��D��
D��1�譟���8����D���1�藟��1Ҿ$D��1�膄��H�E�dH+%(uWH�ĐD��A\A]]���E��H�u��(���H�Ԝd�H��h���H��&A������01�����諤��ff.�UH��SH��H��ddH�%(H�E�1�H������H�H��H�H�E�dH+%(uH�]�1�������O���ff.�@��UH��H��dH�%(H�E����e�����e��
H�E�dH+%(uP��f��=��e1Ҿ$1��\����=��e1�1��$�H���H�E�dH+%(u�=[�e1Ҿ$1���$���诣��ff.�@��U�H��AVAUH��H���L�-A���ATH��L��@���1�SL��H��dH�%(H�E�1�L��@����H���E��~������MH����1�L��
H��@����Y������NH�5қd��L��8�������
L�����e����H�5-�d�1��d�e����=]�e1Ҿ$A�ĉE�e1������=<�e1�1��$����D��1Ҿ$1�������D�%�e1�1��$D���ԁ���=��e1�1��$����=�e1�1��$謁���L��D���܇��������H��8���L�%�d�=��e�L��谇��������L��8����=��e�L��苇��������L��8����=c�e�N����=T�e�C����=E�e�8���M��M��H�ً)�eH��w�P��ePA�4$1�誔��XZH��t$A�4$H����H��H�
x�1��~����
܂e��tA�4$H�x�1��\����
��e��tA�4$H�&x�1��:���I��tA�4$L��H�6x1������I��tA�4$L��H�Fx1�����H��ud�=Q�eu[I��uU�=>�e��1�I����!Ѓ�H�U�dH+%(��H�e�[A\A]A^]�DH��v�1�蕓������������L�%R�dH��"�1�H�����A�4$�d�������A�4$H�]"�1�I������A����C���A�4$H�:"�1�I������������H��uH�ۗd��01����������&���H��u��赟��D��UH��H��dH�%(H�E�1��2�eH�E�dH+%(u���x������U1�H��H��dH�%(H�E�1�����H�U�dH+%(u���8������U�H��AUATH��H���H��@���SH��1�H��HdH�%(H�E�1��H�H�;�����E�H��@����������~H������1۹H��H��L�������H�H�f�Dž��H������H����H�����B{��fo*������aH���)�����Dž���v���L��A�����1�I��������*1�����A�ą����Ǻ(�1�艘���D��1��
�'�p�������D���1��Z���1Ҿ$D��1��I}��1Ҿ$D��1��8}����K�����u�1Ҿ$D��1��}��H�������D���A�������utD��L�������|��H�t�d1�D�eL��H����3腐��I��'���
�~e1���d��H�U�dH+%(��H��H[A\A]]�H��dH�A�1�I������3�#���D���{{��D��~e�31�H�����H�)�����3L��A�'�H�yt1��ڏ���
H~e��dt�3A�d�1�H�yt贏��������C���H�s�dH�������H�~�01�臏�����������H�F�dH�'r��01��a�����������"���f���UH��H��dH�%(H�E�1�H�E�dH+%(u�1�����ff.�f�UH��AWAVAUATSH��HdH�%(H�E�1�@���tfoL�L�-e�E1�L������H������)�����D1��H��E���H�H�Q���L������Dž���H������x��fo����������`H����)������R���1�A�����L��I�������1��*����xgB���0���I��I��d�q���H��dH�s�E1�01��
���f.�H�E�dH+%(�DH��HD��[A\A]A^A_]�H���dH�������H���01�赍��Ic�DŽ�0�������E��t�A�D$�H��0���L���4���fD�;H����x��L9�u��s���foؙL�-��E1�L������H������)�����f�1��H��E���H�fo�����H��{eL������Dž��������`H�����HDž��)������ۘ��1�A�����L��I�������1��*�|��������B���0���I��I��d�n�������t���@UH��AWAVAUATSH�������������H��dH�%(H�E�1�Hc�H�����H��H��H%�H)�H���H9�tH��H��$�H9�u��H)�H����H�D$�����H��H��H��H�����H���������-H��(���H���dE1�L�� ���H�����L�=ffDH�����1��fo
"��H�H�8�DžT���HDž`���H�� ���H��yeH��X���ƅH���`)�0����[���1�A�����L��I�������1��*������yH�����D��L���B���31�I���!���L;�����L���fo� ���H������fo�0���fo�@���fo�P���fo�`���)�����fo�p���fo�����H��0���H�p���fo�����)���)�����)����)��)����)����)� ���Dž��H�����t��H�������$@H���H�������<�1��v��A�ą����3H���1��)�������������������~!H�����L�,�fD�;H���Uu��I9�u�H�E�dH+%(�bH�e�D��[A\A]A^A_]�H��(����3�1�H��诉��L�
A�Y�3H�
�1�H��A�����耉���fDH�L��M���DH��H���1��fo•H��L��@����H�H���Džt���H�E�H��@���H��weH��x���ƅh���`)�P�����1�A�����L��I�������1��*蕈����x8�3H���1��̈�����H���d���L�
[A�c����H��H����3�1�H��荈��L�
WA�i�����F���fD��U�H��AWAVH��H���L�5�veAUATSH��@���H���fo��dH�%(H�E�1�H��8����H�H���Džt���L��x���H��@���H�E�ƅh���`)�P����ӓ��1�A�����H��I�������1��*�t������u�H��dH��(���1�����A�������0���D�4����rƅ'���Dž ���Hc�4���H���z��H�����H���nE����E1�E��I���
f.�A��H��8���1��fo���H�H���Džt���L��x���H��@���H�E�ƅh���`)�P����ʒ��1�A�����H��I�������1��*�k������
A�EA�GI��D9��n���D;�4�����E��H�����Ic�D�����E��L��8���L�<�M��D��4����A��1��L���H�H���Džt���H��@���H���H��x����o��fo
����h���`H�E�)�P������1�A�����H��I�������1��*藅�����UA�A�D$I��D9��j���E��D�����A��1�D;�4�������4����PfD�$@H��A�ĉ�1���q��D��A����p��1�E����1��� ����\����A���O�����0�����D�4����5E��������	�E����	Ј�'���H���dH��(�����4���H��D��D�� ���D��0���H�j�PH��(����01������'���Y^��H�E�dH+%(����4���H�e�D��[A\A]A^A_]�	���H��(���L��E��H��H���I�ǿE�u�1��3H�l�y���A������31�H��1��`���Dž4�������E��t3H�����Ic�E��H��I��H�\�L�,L)�A�}I���o��L9�u�H������~�����H��E1�D��H��(���jH�iD��0����01��ރ��XZH�E�dH+%(��H�eظ����[A\A]A^A_]Ë�4���E1�������Dž4����w���E��E������H��(���M��E��H��H���H�]�D�����E���01�A���S���A��������H��dH��H����H��0H��(���1������p������E������0�����	�E����	Ј�'���H���dH��(����r���Dž4�������������UH��H��dH�%(H�E�1�H�E�dH+%(uɸ������j���f.�UA���H��ATH��h���L��`���H��H��foҎdH�%(H�E�1��H�H���D�E�)�p���H��`���H���dH�E�H�E��E�`����L��A�����1�I��������*1�謁��A�ą�xH�E�dH+%(u:D��L�e���D����M��H��D� H���d�01�A��讁����w������UH��AWAVAUATSH��dH�%(H�E�1�跙���Hc�����A�ą���L�-��dL�}����L��M�u�s��������H�}���L�L��D��I�]�s��������H�}�uBD���Zl��1�H�U�dH+%(��H��[A\A]A^A_]�fD1�A���������f�L�
�A�uH�|�dH�
��H�$��01�萀��������L�
�A�r��H�C�dH�u��01��^����#���H�"�dH�T��01��=����/���������UH��AWAVAUATSH��dH�%(H�E�1��G����Hc��j���A���L�-8�dL�}����L��M�u�Qr��������H�}�uoL�L��D��I�]�(r��������H�}���D����j��1�H�U�dH+%(��H��[A\A]A^A_]�fD1�A���������f�L�
�A��H��dH�
z�H����01�� ��������f�L�
YA���H�ʃdH��
��01���~������H���dH��
��01���~���"���芋��f.���UH��AWAVI��AULc�ATSH��dH�%(H�E�1�辖���Hc����A�ą���H���dL��@����D��L��H���p�������RH��@�����H��H���1��fo��H�׾$@L���H�H���D��Džt���H�E�H��@���H��dƅh���aH��x���1�)�P����j������,�����H�ՂdL��8����D��L��X�p��������H��8����e1Ҿ$D��1��i���L��D��H��d�X��o��������H��8����0H�W�d�L��D��X�o��������H��8����D���[h��E1�H�E�dH+%(��H�ĸD��[A\A]A^A_]��ۇ��D��,����8tOH���dH�b�D��,����01��|��D����g��D��,����E1����A��A����I��MnH��A�����I�E�DL�
�A��H��dH�
z�H����01�� |��A���������L�
lA����L�
]A���L�
NA���H���dH��
��01���{�����H���dH��
��01��{������H�o�dH��
��01��{���A���H�N�dH��
��01��i{���S����/���f.�D��UH��H��dH�%(H�E�1���ieH�E�dH+%(u�����D��UH��H��dH�%(H�E�1��ie�qie����H�E�dH+%(u��蛇��ff.���Uf�H�5[���H��AWAVAUATSH��dH�%(H�E�1�)����H�����H�����H�q�)��)����)� ���H��0���HDž8����ނ���9b��H����I����{�������I���ێ��H����M���H���H��L��L����b��H��0���H���1�L����L���o��A�Ņ��iI�$L���B8H�B �B@f%ݟf
  f�B8�O���A�Ņ��T��L��臑�����xL����c���.��ge��t�
�ge���������L��軄������M��$�L���‹����x�L��趈��H��t"��8u��geL��蜁��L��蔈��H��u�L���Wl���DH��}dH�R��A���01���x��L���s��H�����Z`��L���bv��H�E�dH+%(�-H��D��[A\A]A^A_]Ã�t�H�N}dH��	�A������01��cx���H�*}dH�t���01��Ex���r�����H��@��������+o��H�H	�H��H��|d�01��x���9���������H��@���A������8H����n����H�@I��H��|d�01���w����H��|dH�B]�A������01��w������H�b|dH���A������01��ww������=���f.�UA���H��AWAVAUATL��@���SL��H���dH�%(H�E�1�Dž<����H�D��Dž@���H��H����!HDžX���f��h���HDžP�����Sv��H����L��1�I������I��H����H��L���vk���Qx��I���ym�����b���H��M����H����H��L��L���_��L���'�����,������3��L���\���A�ƅ���L����t������@��<�������<�����u�L��贋��M���L��襈�����oH�����HDž ���H������
f�L���x~��L���p���H����8	u�H�����H��L��� d��A�ƅ���H�������,���H� ����fDH�azdH����A���01��vu��L���.p��H���]��L���s��H�E�dH+%(�dH���D��[A\A]A^A_]��A���H��@�����L��P����8A���l��M���A��H��H��ydL��ZH�	[�01���t���m���H��ydH�;�1�A������01��t���D���L���h��Hc�,���H;� ����(���H�gyd��,����H�[A������01��vt������|��H��@������8I���Vk��A��H��ZI��H�
yd�01��4t�����H��xdH����01��t�����H��xdH�_��A������01���s�����貀��f���U1�H��H��dH�%(H�E�1��^�����tH�U�dH+%(u%��f�H�E�dH+%(uɿ�&����Q������UE1�A������!�H��AVAUATSH��1�H��L�-�SedH�%(H�E�1��E�A�u��]��H�����H�CI�Ŀ�1���r��L�5�wd�M��C��H��1�A�6��r���{H�uԺ�L���H��uN��ae��u�d��t����ae��t�A�uH�{�+���H�E�dH+%(u7H��1�[A\A]A^]�DA�6H�91�1��ur����H�=�w�����0����UE1�1�A������!�H��AWAVAUATSH��xdH�%(H�E�1�H�tRe�6ae�0��\��H�����H��H��`e1����q��L�5�vdI��L�=�`e��`e���1�A�6H�lL��A��q��L��x���H��L�c��r����u'1�L��H����L���\k����tH�;��\���{��\��L�
u�A��L�-2A�6�1�L�%��L��L���Iq���@�;L�}��A��L��H�� �c���{�I���z\���{��r\��I��u�A���R���L��M��L��x���H���r�����~��A�>H�udA�H�5vud�H��x���H�[_e�H��E1�1��o��H�=T_eA���p���H��td�H��Pe�0����H��H��h���D����_eI��I�} 1���I�� �4Y��������~�H��h���������p�������L��p���L�-�^eH��h���H��x���L���l��E�}��f��H��D�����[��I�MA�6H��I�ǿ1���o��I�EL��L���H�P�r��L���s���H�M�H���&L�A�6H��1��I�� �o��L���]��H��^eL9��c���H��x���L��p���H��h�����t��E1�A�����1�H��Oe�!��Q^e�0��Y��I��H����PH��]e��1��n��A�6M����]e��H��1���n���H��x�����L��M�o��/o����u)1�L��H�?���L���h����tKA�?�Z��A��Z��L�
��A��L�-[A�6�1�L�%�L��L���rn���DA�?�L���I�� ��`��A��I���Y��A���Y��I��u����R���H��x���I���z����|��A�>I��H�1rd���d�����|��E1�A�1�H��rdI��L��H���k��L��A���x����fU��H��qdH�=h\e�H�Ne�0�'���H�@\e��\eH��p���D��I��I� 1���I�� �WV��������~�H��p���������x�������L���Ui��D�{�d��L��D�����X��H�KA�6H�=�I�ǿ1��m��H�CL��L���H�P�Jo��L��貄��H�M�H����L�A�6H�#�1��H�� ��l��L����Z��H��[eH9��c���L���r��H�E�dH+%(�G��x���H��x[A\A]A^A_]�A�6H����1�L�-5��cl��L��L�%���dZ��H��x����q��L�
��A��A�6L��L��1��&l��Džx��������o����A�6H�6��1��k��L��L�%{��Y��L��L�-���Lq��L�
�RA���H�=a������L�
��A������L�
��A���,����L�
q�A�����fDL�
Y�A�����H�=���j���L�5#pd�����x��f���UH��AWAVAUATL�����SL��H��(dH�%(H�E�1��JY��1�1�L���V���1�L��H�����I���uV���1�L��H���cV���1�L��I���QV����L��I���<V��H������M����H����M��H����M�������H����H�����L�(L���a�������H�3L���u������I�7L���u������I�6L���uu��������L��苂��H������H����H�����L���<^��H��L���1^��L��L���&^��L��L���^��H������L���^��H������L���]��H������H�H��H�������~������iH������H������H�0��t�����$L���#���L���[~������PL������L���@~������UH�����L���%~������ZH������΀��H�������€��H�������}������KH������蟀��L���n��1�H�U�dH+%(��H�e�[A\A]A^A_]�L�
��A�*H�lmdH�
4��H���01��h��������f�L���h}��L�
�A�.�jPfDH�mdH����H�
��01��-h��X�����Z�L����H�������}��L�
�A�J�jP�L���|��L�
��A�P�jP�@L����|��L�
��A�S�jP�q����L���|��L�
j�A�V�jP�Q����H�������|��L�
F�A�\�jP�-���L�
<�A�1����L�
*�A�2���L�
�A�3���L�
�A�L���L�
�A�;�����s��f.�@��UH��H��dH�%(H�E�H�G+FH�U�dH+%(u���s��f.�U1�H��SH�}�H��HdH�%(H�E�1��mW����udH�]�����u�1�H���QW����uHH�U�H�E�H+U�H+E�y
H��H@BH��u<H=P�~�fo�r1�H�}�)E��v��1҃�D�H�U�dH+%(u
H�]���~����r�����U�f�H��AWAVAUATSH�� ���H��H��(dH�%(H�E�1�H�����)� ���fHn�H�����1��H�H��p��)�0���)���)�@���HDžP���ƅM���H������Dž��������HDž���������D\���������������C{��H������H����>N��H���H����
�d��I��H����
H���H������H���N��H�5p�L���r�����`I�FL�-a�L��L��H�������q��A�ą��+	L�=`�L��M�nL���2���H�]������L����|��fHn�fI:"�)����H=���ƀ�M;.�wL��L���R��M;.����L���z���L���	z��H�5.�L���:q��A�ą���M�~L��L���PX��A�g9��L��I�G ��y��1�H��L���ih��I�I9��K	A�W9�����M	M9�t�I9�t
�@9�H�I9�u�L���a������y�����L���{������
L���b��H������p�������������E1�1�1�1�H�5�����S������L���p������E1�1�1�1�H�5����S�����y�B������BE1�1�1�1�H�5q���S�����JL���a�����#E1�1�1�1�H�5K���^S�����������������L���+x��fo����A�F0fo���)� ���)��������H��`���E1�Dž��HDž���H�����L�����M��L����t��������������H������A;F0|�E��� L���_t��H��H����H�����H�����1�H;����t.f.����o"��H��H� �ojH�hH9�u�H�
����� L��H����l��E���1	A�G�L�cH�����H��D����L�l0H�6fdL��H������f.�H�� I9���L�;A���	����u�E�g�W��A9��CDž���Dž����A�?�������t�D����H�����H��H�������01���`��H�����D���H�/��01��`��H���E[��H�����H;������Dž�������H�����H�H�GH�BH��
[��H�����H9�u�H��0����e��D�����E��� 	D��<���E������@���������D���������H���������L�����tD��P���E���Z��T�������H�>�H������� A���R��I��H���+H�XH�����H��L��H�����L�����L�hI�EH�����I�E��M�����dH��p���H����I�EL����g��L����n��H��H���w���L���R���
���f.�H�:�H��cd��01��_��L���t��L���\��H����Y��H�������F��H�E�dH+%(��H��(D��[A\A]A^A_]�@H��cdH�F�0�1�A������^���@L��H�A�H�Wcd�A������01��s^���Z���L��`���L��L��L���L������H������L����E��I��H;� �����L;�(�������H�����H����01��	^��D��<���E��u
DžL���D��@���E��tD��D���E��u
DžP���D��H���E���q���DžT����b���D����H�����H�����D���H���01��]��H���X��H�����H;����uNH��0����q������H�%bd�H���0H�����1��9]��H�����Dž�������H;����tQH��������H��adH���H������0�1���\��L���xe��H�����Dž�������H;����u�H��0���A������������H��adH�d�H������H�iadH��C����H�VadH��C����E�g�AS��A9���A�?�_���E�g�%S��A9��XA�?�C���E�g�	S��A9��1���E�g�R��A9�����M�gH�5��L���i��������H�������H����0��A�L��1�H���DžH�����[����������E1�E1����L��H������H�g`dH�������H������L��H��H���r��L��H�P�bL��H�������r��Lc������D������H������A��H�����H��BD��01��[��A�D$�����E����L��0���M��� D;�8�����K������t;�����������������H��_dH�������H�~_dH�'B��H�k_dH��A����H�X_dH�u�����H���H�>_d�A������01��ZZ���Q���H�~���H�_dH�U��A������01��*Z��HDž�����H���1�D����H������Y���(���E�g�P��A9������M�gH�5N�L���og�����{���H�������D����0u�A�L��1�H���DžD����Y��������7���E�g�GP��A9�����M�gH�5��L���f�������H�������@����0�#���A�L��1�H�E�Dž@����Y����������������fDH��]dH����;���H��]dH����(���E�g�O��A9������M�gH�5�L���Uf���������H�������<����0�}���E1�L��H���1�Dž<�����tX��Dž����Dž����R���H�$]dH��������Hc����L��H����o��I��H����H��0���Hc�8���9����~)D��I�<���L������)�H���AB��L�������������8������Hc���,j��H��0���I��H��tk�������~ɋ����L�Ǿ�H����A��I���H�h?���H�T\dH��H������s���D����H�����H����v����"d��D����H������s���D����H�����H���D���A������f.�f�UH��AWAVAUATSH��(H�u�dH�%(H�E�1��G0�����E�I��E1�1����I���A9^0~HM���M�L���>i����x�L���2f��I��H��uzf.�L���I�����I��A9^0�H�E�dH+%(���E�H��([A\A]A^A_]�A�G�E��M��9E�t/�L���^��L���e��I��H��t�A�?u����A�G�E���L��9E�u�H�u�I��c�����U���E��e����b�����U�H��AWAVL����AUL��ATH��dH�%(H�E�1��H�ƅ
���DžL�������DžT�������HDžx��������=L���������������<k��I��H���E�;>��I��H���+�T��I��H����L��H��L��� >��H�5�L���b������H�5n�L���a�����u1�L��L���UY��I�$L���B8f%��f
f�B8�yq�����M�����L���l�����_L��L�=���-S��E1�1�1�1�L����'E������L���j��L��L���\�������L����R��I�|$�a�����L�=��E1�1�1�1�L�����D������L���i��L��L��������u}E1�L���i��L���Q��L���N��L���;��H�E�dH+%(��H��D��A\A]A^A_]�f�H��<H��Xd�A������01��S���@H�A<��H��<��H��;��H��;�H�=XdH��;�A������01��RS���D���H��;�H�]<�H��H��Wd�A������01��S������H�����H��WdH����E1�A������01���R������_��f.�f���UH��H��dH�%(H�E��+H�U�dH+%(u���m_��ff.�f�UH��SL��$���H��H�$L9�u�H��fo^fo%.^fo�^dH�%(H�E�1�H�����H�U�H��f�fo�fo�f��H��f��)H�H9�u�H�
<������H���*]��1��fDH��H=�t!9�t�H��VdH�����01���Q��H�E�dH+%(uH�]����y^��f�UH��AWAVAUATSL��$���H��H�$L9�u�H��8�� 1���L���H���H��dH�%(H�E�1��H�H�F\ƅ���Džd�������H��X���H������H��H��8���HDž���������U;��H�
�H�c�HDž���fHn�H�
��fH:"�H�2�)��fHn�fH:"�)�����G��A���^��H��H��`���H�2WdH��H���l��H�$:����H��`�����L���L��h�9L���yL������I�D$ ����P�����K�������K�������!Ȉ�#���D��D����d��H��P���H����H��`���E1�1�H��H��TdA��M�����8H��`���D��D���Y;��H�����H�����8��H��@���H����N����M����N����L�%����N��I��H���H��P���H��@���H���]8���L��L��H�Td�01��BO��L��L���7\������1�H��L���S��I�I9�t"f.��B8f%��f
f�B8H�I9�u�L���k������L�������L��X���L��������f�����,H��X�����L�-��L�%9�6M��fDL��L���ee��H��H��t
��a��L���B����uۻ
fD������u�L�� �����L��ȫ������$����9���� ����9����t(L���N����y�H��RdH�����01��
N��H��X���H���c���C0HDž���HDž����Dž$�����~PH��X���H�����H��H��H��p����C`������H��X�����$���H��������$���;A0|���#���t��L���A����L���t��K���A���H������H�5���O��H��E�A����DH��SdH�
�ZL��H�H8�.I����x:I�D$ A������������L���ƅ#���E1䈅K������H���H��Qd�A������01��L��HDžX���HDž@���HDžP���H��X����1J��H��@����5G��H��P����	4��H��`�����Q��H�E�dH+%(��H�e�D��[A\A]A^A_]�H�Qd��"����0�gH��6�1��L��H������:��H��(����c���H��p����tT��H��p����h[��I��H����A�$��	tl�P���v˃�w�H��`���1�L���L����y�H�jPdA�$�H�z7A������01��{K��H��p����S���H������c������H��ȫ��H��X���L���9�����Y��,�����(���H��`����6��H��(���H����1��~H�����H�� ����H��~f�H������H�H��OdD������H��H�M5L�������)�����01�)����D��"����J��L��L������F��A��L��H��H��(�����L��H���=���H������H�@ H��ث��H���"���H���I�ǿH�OdH����01��4J��A���uA��������BH��NdH��ج���H����01���I��H������H���L��ج��H�@H��H)�H9���HG�H��0���H��ث�����t-L��H��4L9��sH�gNd��01��I���u���H��(���H�8��:��H��0���H��ث��L��H��H�����I��H��H�������(c��H9��G
H�������SE������
H��ث�����������H������H������H���H������H��8���1��
H��H9��hH9�u�H���H��Md��01��H��H��MdH����0���H�lMdH�}2�A������01��H��HDžX���HDž@�������H�/MdH� 2�A������01��DH��HDžX���HDž@������H��p����};���m���M�uI��M���GH��@����kH��H��P����d��L��1�1���0��L���E���R���H��LdH��1�A������01��G��HDžX���HDž@����1���H���H�[Ld�A������01��wG���
���H��������
H������H��Ő���H��H������L��ث��M���L��L��Ы���jG����!�������H������L��L��H���2a�����n
H��ج��H������L������@/��H���L��H��SL�3� H������H��0���H�PH��MdQ� L�1��\��H��Ы��H�� ��=���H�KKdL��H����01��cF��� H�5��L����>��H�5��L���0`��H�����H����	H��0���HDž������	H������HDž����H������H������H����H������H���H�����H����@L�����H����
H����L���^/��L��H���sX������H���SH�������:�N��H���oH�H�@���nH�5qKd��A�F���QI��H�I� u�H����A�HDžh���M��H��x���I��f�1�M��L��H�������}@A�L�-�JdH��A�DD�nA�FI��I��A�DDtP@��H������I���6E��A�����+E����H������H������	�A�A�_�H��A�D �/H��u�H������M��I��H�h���I��H����H��x���L�F�J�<H9��`L�^�H��M��I��M�JM)�M9��I����M��H�t1�I���I�f.��o�oH��H��f8
�Pf8�PH�FL9�u�H��x���L��H��H���H�H)�L9����2�8@�:H�x�@�0H�rH9����r�x�@�zH�z@�p�H�p�H9����r�x�@�zH�z@�p�H�p�H9��r�r�x�@�zH�z@�p�H�p�H9��Q�r�x�@�zH�z@�p�H�p�H9��0�r�x�@�zH�z@�p�H�p�H9���r�x�@�zH�z@�p�H�p�H9����r�x�@�zH�z@�p�H�p�H9����r�x�@�zH�z	@�p�H�p�H9����r	�x�@�z	H�z
@�p�H�p�H9����r
�x�@�z
H�z@�p�H�p�H9�sn�r�x�@�zH�z@�p�H�p�H9�sQ�r�x�@�zH�z
@�p�H�p�H9�s4�r
�x�@�z
H�z@�p�H�p�H9�s�r�x�@�z@�p��L��x�������H��FdH��,�0����H��u�H��h�����H������1�H������H�5�h�0����u{L������L9������oH������L��H��0���H)�H9�vbH��H��h���H��
�z�q�H9�r(H�����w1��ztI���H��p���H9�H���t'H�U�dH+%(��H��([A\A]A^A_]�DLc�P���A��~̾L���N��L��I���>�I��M����H����I�G0H��0I9�t6@H���H��H�����X���+��H�H`H�@0Hc�I��H��0I9�u�H�C0H��0H9�t;f�H���H��H�����X���+��H�H`H�@0Hc�I��H��0H9�uθ��wvH��L9�t}I�<�I9<�s�L��H�E����L�����H�E�����f�1�H9�I��������H9�C�H��p���H9�H���t$Hc����@H�������1��@Lc�P���A��~�E��~�E9�}žL���U�����L��I�����M��U�I���H����I�G0H��0I9�t7@H���H��H�����X���+��H�H`H�@0Hc�I�L�H��0I9�u�H�C0H��0H9�t2H���H��H�����X���+��H�H`H�@0Hc�I��H��0H9�u�IcԸI�|�I9<�r+H�����w"�A9�t
I�|�I9<�r/w4H��I9�u�1�L��H�E�L�E�� ��H�}����H�E��?������H��������)��L��L�E�E1�U�����H�}�����HcE�E1��L��E1�����L��E1����1��������UH��AWI��AVAUATSH��H��(H�v�bH�NpdH�%(H�E�1�H�spD��(E��t
�z�q�H9�r(H�����w1��ztI���H��p���H9�H���t'H�U�dH+%(��H��([A\A]A^A_]�DLc�P���A��~̾L�����L��I����I��M����H����I�G0H��0I9�t6@H���H��H�����X���+��H�HpH�@0Hc�I��H��0I9�u�H�C0H��0H9�t;f�H���H��H�����X���+��H�HpH�@0Hc�I��H��0H9�uθ��wvH��L9�t}I�<�I9<�s�L��H�E��
��L�����H�E�����f�1�H9�I��������H9�C�H��p���H9�H���t$Hc����@H�������1��@Lc�P���A��~�E��~�E9�}žL���U��9��L��I���)�M��U�I���H����I�G0H��0I9�t7@H���H��H�����X���+��H�HpH�@0Hc�I�L�H��0I9�u�H�C0H��0H9�t2H���H��H�����X���+��H�HpH�@0Hc�I��H��0H9�u�IcԸI�|�I9<�r+H�����w"�A9�t
I�|�I9<�r/w4H��I9�u�1�L��H�E�L�E����H�}��w��H�E��?������H����������L��L�E�E1�U��G��H�}��>��HcE�E1��L��E1��*��L��E1����1��������UH��AWI��AVAUATSH��H��(H���bH�s@dH�%(H�E�1�I�O@D��(E��t
�z�q�H9�r(H�����w1��ztI���H��p���H9�H���t'H�U�dH+%(��H��([A\A]A^A_]�DLc�P���A��~̾L������L��I�����I��M����H����I�G0H��0I9�t6@H���H��H�����X���+��H�H@H�@0Hc�I��H��0I9�u�H�C0H��0H9�t;f�H���H��H�����X���+��H�H@H�@0Hc�I��H��0H9�uθ��wvH��L9�t}I�<�I9<�s�L��H�E��j��L���b��H�E�����f�1�H9�I��������H9�C�H��p���H9�H���t$Hc����@H�������1��@Lc�P���A��~�E��~�E9�}žL���U�����L��I�����M��U�I���H����I�G0H��0I9�t7@H���H��H�����X���+��H�H@H�@0Hc�I�L�H��0I9�u�H�C0H��0H9�t2H���H��H�����X���+��H�H@H�@0Hc�I��H��0H9�u�IcԸI�|�I9<�r+H�����w"�A9�t
I�|�I9<�r/w4H��I9�u�1�L��H�E�L�E�����H�}�����H�E��?������H�����������L��L�E�E1�U����H�}����HcE�E1��L��E1����L��E1����1��������UH��AWI��AVAUATSH��H��(H�6�bH�NhdH�%(H�E�1�H�shD��(E��t
�z�q�H9�r(H�����w1��ztI���H��p���H9�H���t'H�U�dH+%(��H��([A\A]A^A_]�DLc�P���A��~̾L���n���L��I���^��I��M����H����I�G0H��0I9�t6@H���H��H�����X���+��H�HhH�@0Hc�I��H��0I9�u�H�C0H��0H9�t;f�H���H��H�����X���+��H�HhH�@0Hc�I��H��0H9�uθ��wvH��L9�t}I�<�I9<�s�L��H�E�����L������H�E�����f�1�H9�I��������H9�C�H��p���H9�H���t$Hc����@H�������1��@Lc�P���A��~�E��~�E9�}žL���U�����L��I������M��U�I���H����I�G0H��0I9�t7@H���H��H�����X���+��H�HhH�@0Hc�I�L�H��0I9�u�H�C0H��0H9�t2H���H��H�����X���+��H�HhH�@0Hc�I��H��0H9�u�IcԸI�|�I9<�r+H�����w"�A9�t
I�|�I9<�r/w4H��I9�u�1�L��H�E�L�E��@���H�}��7���H�E��?������H��������I��L��L�E�E1�U�����H�}����HcE�E1��L��E1����L��E1�����1��������UH��AWI��AVAUATSH��H��(������dH�%(H�E�1�H�{�bD��(E��t
�z�m�H9�w(H�����r1��ztI���H��p���H9�H���t+H�U�dH+%(��H��([A\A]A^A_]�f�Lc�P���A��~ȾL�������L��I�����I��M����H����I�G0H��0I9�t8@H������H�@0H��H�����X���H��0+��Hc�I�<�I9�u�H�C0H��0H9�t;�H������H�@0H��H�����X���H��0+��Hc�I�<�H9�u̸�DwnH��L9�tuI��I9�s�L��H�E��"���L������H�E������H9�I����H9ιF�H��p�����H9�H���t'Hc�����H�������1��@Lc�P���A��~�E��~�E9�}¾L���U��Y���L��I���I��M��U�I���H���I�G0H��0I9�t9@H������H�@0H��H�����X���H��0+��Hc�I�t�I9�u�H�C0H��0H9�t:fDH������H�@0H��H�����X���H��0+��Hc�I�4�H9�u�IcԸI�t�I94�r+H�����w"�A9�t
I�t�I94�r/w4H��I9�u�1�L��H�E�L�E�薽��H�}�荽��H�E��9������H����������L��L�E�E1�U��]���H�}��T���HcE�E1��L��E1��@���L��E1��5���1�����ff.���UH��AWI��AVAUATSH��H��(H���bH�N@dH�%(H�E�1�H�s@D��(E��t
�z�q�H9�r(H�����w1��ztI���H��p���H9�H���t'H�U�dH+%(��H��([A\A]A^A_]�DLc�P���A��~̾L������L��I�����I��M����H����I�G0H��0I9�t6@H���H��H�����X���+��H�H@H�@0Hc�I��H��0I9�u�H�C0H��0H9�t;f�H���H��H�����X���+��H�H@H�@0Hc�I��H��0H9�uθ��wvH��L9�t}I�<�I9<�s�L��H�E��z���L���r���H�E�����f�1�H9�I��������H9�C�H��p���H9�H���t$Hc����@H�������1��@Lc�P���A��~�E��~�E9�}žL���U�����L��I�����M��U�I���H����I�G0H��0I9�t7@H���H��H�����X���+��H�H@H�@0Hc�I�L�H��0I9�u�H�C0H��0H9�t2H���H��H�����X���+��H�H@H�@0Hc�I��H��0H9�u�IcԸI�|�I9<�r+H�����w"�A9�t
I�|�I9<�r/w4H��I9�u�1�L��H�E�L�E���H�}����H�E��?������H����������L��L�E�E1�U�跹��H�}�讹��HcE�E1��L��E1�蚹��L��E1�菹��1��������UH��AWI��AVAUATSH��H��(H�F�bH�NHdH�%(H�E�1�H�sHD��(E��t
�z�q�H9�r(H�����w1��ztI���H��p���H9�H���t'H�U�dH+%(��H��([A\A]A^A_]�DLc�P���A��~̾L���~���L��I���n��I��M����H����I�G0H��0I9�t6@H���H��H�����X���+��H�HHH�@0Hc�I��H��0I9�u�H�C0H��0H9�t;f�H���H��H�����X���+��H�HHH�@0Hc�I��H��0H9�uθ��wvH��L9�t}I�<�I9<�s�L��H�E��ڷ��L���ҷ��H�E�����f�1�H9�I��������H9�C�H��p���H9�H���t$Hc����@H�������1��@Lc�P���A��~�E��~�E9�}žL���U��	���L��I�����M��U�I���H����I�G0H��0I9�t7@H���H��H�����X���+��H�HHH�@0Hc�I�L�H��0I9�u�H�C0H��0H9�t2H���H��H�����X���+��H�HHH�@0Hc�I��H��0H9�u�IcԸI�|�I9<�r+H�����w"�A9�t
I�|�I9<�r/w4H��I9�u�1�L��H�E�L�E��P���H�}��G���H�E��?������H��������Y��L��L�E�E1�U�����H�}�����HcE�E1��L��E1����L��E1����1��������UH��AWI��AVAUATSH��H��(H���bH�NPdH�%(H�E�1�H�sPD��(E��t
�z�q�H9�r(H�����w1��ztI���H��p���H9�H���t'H�U�dH+%(��H��([A\A]A^A_]�DLc�P���A��~̾L�������L��I������I��M����H����I�G0H��0I9�t6@H���H��H�����X���+��H�HPH�@0Hc�I��H��0I9�u�H�C0H��0H9�t;f�H���H��H�����X���+��H�HPH�@0Hc�I��H��0H9�uθ��wvH��L9�t}I�<�I9<�s�L��H�E��:���L���2���H�E�����f�1�H9�I��������H9�C�H��p���H9�H���t$Hc����@H�������1��@Lc�P���A��~�E��~�E9�}žL���U��i���L��I���Y��M��U�I���H����I�G0H��0I9�t7@H���H��H�����X���+��H�HPH�@0Hc�I�L�H��0I9�u�H�C0H��0H9�t2H���H��H�����X���+��H�HPH�@0Hc�I��H��0H9�u�IcԸI�|�I9<�r+H�����w"�A9�t
I�|�I9<�r/w4H��I9�u�1�L��H�E�L�E�谲��H�}�觲��H�E��?������H����������L��L�E�E1�U��w���H�}��n���HcE�E1��L��E1��Z���L��E1��O���1��������UH��H��H��H��M��H��H��dH�%(H�M����D�U��u�H|H��bH�xHu1E1�A��A��E�D�H�E�dH+%(u4D�UL�������DH�E�dH+%(uD�UA�L��������������UL�
��L�@�H�
v��H��H��dH�%(H�E�1�j詞��H�U�dH+%(u�������UL�
���L���H�
���H��H��dH�%(H�E�1�j�Y���H�U�dH+%(u���3����UL�
t��L���H�
V��H��H��dH�%(H�E�1�j�	���H�U�dH+%(u��������UL�
$��L�W�H�
���H��H��dH�%(H�E�1�j蹝��H�U�dH+%(u�������UL�
���L��H�
v��H��H��dH�%(H�E�1�j�i���H�U�dH+%(u���C����UL�
���L���H�
���H��H��dH�%(H�E�1�j����H�U�dH+%(u������UL�t�H�
]��H��H��L�
��bdH�%(H�E�1�j�ɜ��H�U�dH+%(u�������UL�
���L��H�
���H��H��dH�%(H�E�1�j�y���H�U�dH+%(u���S�����UL���H�
}��H��H��L�
�bdH�%(H�E�1�j�)���H�U�dH+%(u��������UL�
D��L�}�H�
���H��H��dH�%(H�E�1�j�ٛ��H�U�dH+%(u�������UL�4�H�
���H��H��L�
�bdH�%(H�E�1�j艛��H�U�dH+%(u���c�����UL�
���L���H�
��H��H��dH�%(H�E�1�j�9���H�U�dH+%(u��������UL���H�
���H��H��L�
߷bdH�%(H�E�1�j���H�U�dH+%(u���ÿ����UL�
��L�=�H�
&��H��H��dH�%(H�E�1�j虚��H�U�dH+%(u���s�����UL���H�
���H��H��L�
?�bdH�%(H�E�1�j�I���H�U�dH+%(u���#�����UH��H��dH�%(H�E�H���bD�U�xuV���L�H�v��DG|H�U�dH+%(uMH�
k�D�H��L��H�ML�O�1�H�����������H�E�dH+%(u
D�U�闙��肾��f���UL�
���L���H�
&��H��H��dH�%(H�E�1�j�)���H�U�dH+%(u���3�����UL���H�
���H��H��L�
��bdH�%(H�E�1�j�ټ��H�U�dH+%(u�������UH��ATSH��H��bfDoQ�dH�%(H�E�1�fo�fDo
!�H��fHn�fDo<�H�����fLn�H���fHn�fD��f�H��f�-��fEl�fl�fo�fo��o`hH fb�fj�fA��fA8(�fA8(�fo�fDo�f��f��@�fo�fA��fl��8����oX�fE��f8)�fDo�fDl�f8�fAo�f8)��H���fA8�X�H9��i���H����tpH�A�bH�8�����u=L�%F�bA�|$��1�諮��A�|$��A�|$ugA�|$
u7�H�E�dH+%(��H��[A\]�@D��fDH�E�dH+%(��H���[A\]�3�����&���A�|$
t���@����������H�ŵb�8�U������������<�����έ��H�9�1�H�轭��A�|$������z���f.���UfHn�H�VXH��H��dH�%(H�E�1�H�GH�WfH:"�FXH�H�E�dH+%(u��� �����UH�OH�VhfHn�H��H��dH�%(H�E�1�H�GH�WfH:"�FhH�H�E�dH+%(u���̺��ff.����UH�OH�VhH��H��dH�%(H�E�1�H�GfHn�H�PfH:"�FhH�WH�E�dH+%(u���k���ff.���UH��AWAVAUATSH��(dH�%(H�E�H�ױbH�8菺������L�-��bI�EH�L�`�L�y�L9���H�)�bL�5���0�L��H��SH��tM�4$I�GXM��H�P�I�GXL9�t[I��fHn�L��H���fl�)E�����t�I�D$`I�T$XfoM�H�BH�I�D$PAL$XH��t�L����f.�H�E�dH+%(uH��([A\A]A^A_]��M���ff.�f���UH��AVAUATL�gSH��dH�%(H�E�H�GI9��|I��H�X��fDH�ChH�X�I9�t`H�{(uH�{ t�I�EL�p�I9�t"H�CHH��t_L��H��Є�u�I�FXL�p�I9�u�H�=O�bH��觘���H�ChH�X�I9�u�H�E�dH+%(u?H��[A\A]A^]��I�FXH�P�I9�t�H�BXH�P�I9�u�H�=�bH���G�����@�����UH��AVAUATSH��dH�%(H�E�H�H9�t;I��H�X�L�oI�D$L�p�I9�tYH�CHH��t6L��H��Є�tZH�CXH�X�I9�u�H�E�dH+%(u\H��[A\A]A^]ÐI�FhH�P�I9�t
H�BhH�P�I9�u�H�=?�bH���G����DI�FhL�p�I9�u�H�=�bH���$�����m���ff.�f���UH��ATI��SH��dH�%(H�E�H�H�H�x�H�Z�L9�u�^@H��H��H�O`fHn�H�Ghfl�H�JH�H�WhGXfHn�H�Gpfl�H�BH�H�GPGhH��t��H�SXH�CXH�J�L9�u�I�D$I��H�0H�x�H�^�I9�u�aH��H��H�OXH�W`H�wXfHn�H�Qfl�H�
H�OhH�WpGXfHn�fl�H�QH�
H�GPGhH��t��H�ChH�P�H�ChL9�u�H�E�dH+%(u	H��[A\]��5���D��UH��AWAVAUATSH��8H��HdH�%(H�E�1�H�H9���I��H�X�A�E1�L�u��5E��A�D$L��L��DD�H��E1��SI��HA�H�CXH�X�H9�t>�{xu�H�������t�L��H���a����u�H�CXI��HH�X�H9�u�f�H�i�b���~�B,��u$H�E�dH+%(u"H��8D��[A\A]A^A_]�A����f�E1���&���fD��UH��AWAVAUATSH��8H��HdH�%(H�E�1�H�H9���I��H�X�A�E1�L�u��>H��������u;E��A�D$H��L��DD�L���SA�H�CXH�X�I9�HtE1�H���=�����t�D��H�U�dH+%(uH��8[A\A]A^A_]�1����_���ff.�@��UH��AUI��ATI��H��dH�%(H�E�1��ֺ����uZL���Z�����u/A��$�
w$A��$�H��Hc�H�>��f�A�D$|H�E�dH+%(uSH��A\A]]�fDH�E�dH+%(u4H��L��L��A\A]]���DA�D$|�DA�D$|	�耳����UH��AWAVAUATI��SH��H��HdH�%(H�E�1�H�H9�t'H�X�fDH��L���u���H�CXH�X�I9�$Hu�M��$PM��$PM9�t>fDI�FM�nH�X�I9�t�H��L���%���H�CXH�X�I9�u�M�6M9�u�H�E�dH+%(uH��[A\A]A^A_]�譲��ff.�f���UH��AUATSH��L�%(�bdH�%(H�E�1�I�$L9�t=H�X�L�m��fDH�CXH��H�X�L9�t�
L������H�}Љ���?,t�H�E�dH+%(uH��[A\A]]�����@��UH��AWAVAUATSH��8H�}�H�u�dH�%(H�E�H���b�x�)L�&L9��H��H��H�E�I��$�H�E�H�E�H�@L�x�H9E���I��$�@L���ț����tH�u�L���H�������E�wxE��uL��螛��A�Ƅ��3M��$�L9���A����f�M�mI9���A9EPu�L��襪��H���"E��uA�ETI�uH�xXM�EI�}I�U H�p`L�@XH�>I�u(H�xhI�}(H�PhH�ppH�>I�GhL�x�H9E��*���M�$$L;e����1�H�U�dH+%(��H��8[A\A]A^A_]���X�f���I��H����A���E�uTI�}A�EP���I��$�fHn�A��$�M��$�fH:"�AEL�(����H�u�L�������A�����I�$H9E��O���H�H9E�u��>���������6����ۯ��f.��UH��AWAVAUATI��SH��H��X���D�m�H��h���H�����d���D��`���L��P���dH�%(H�E�1�HDžx���������E��~.E1�L�5i��L��L��1��ޏ��A���E9�u�Hc�H�;���d���tH�r��L��1�誏��H�HË�`�������L�5"��L��1�L���~���H��P���H��X���L��Lc����L��L��H�I�1��N���L�Lc�I�DH��h���1ɺH�������ֺ��I��H�4�b�x��L��L���'����
L��誛��H��x�������H�E�dH+%(��H�ĈL��[A\A]A^A_]�fDH�
"�H���L��1��裎��Lc�I��X����H��h���H������1��@H������L��1�I��H��x���H�h���Ӝ��L�5����LI�x����+���蘭���UH��AWAVAUI��ATSD��H��H�EH��P����H��X���D�}H�[���l���D��h���H��H���dH�%(H�E�1�HDžx������A��E��~<D��E1���d���L�=�D��A��L���L��1�莍����A�A9�u㋝d�����l���Mc�~~��L�=��L�5L���d����������h���H�����rH����L��1��(���H�I�9�d�����L��L��L��1������H�I�9�l���u�H��X���1ɺH������舸��I��H��b�x��L��L���٣���
L���\���H��x����Й��H�E�dH+%(�GH�ĘL��[A\A]A^A_]��L�5���L��1�L���W���H��H���H��P���L��Hc��Ϊ��L��L��H�H�1��'���L�Lc�I��'���f�H��X���H������1��@H��胓��L��1�I��H��x���H�����S���L�5 ���LI�x������DE1�L�5�H������h���H��D��sH�?��L��1�A���s���L��H�ھH�L��I�1��Y���H�I�D9�l���u��T���衪���UH��AWE��AVE��AUI��H��ATA��SH��H��H�����D����dH�%(H�E�1�舌��H�����H�����A�D$��Ɖ��������D�����Dž����D��Dž��������H������։�������HDž8���HDž(����bH�����H�G�H��(���H�����H+G@H+G8H������G,�������G0������O���H���H���OH����L��1�������E��~+E1��H�s�L��1�A�������E9�u�Hc�E��~OE1�D��0���E��D����E��H�1�rH���L��1�A��衉��H�H�E9�u�D��0���H����L��1��z���H��8���H�Hc�H�H��0���H�����H�X�H��8H�� ���H9�t_E1�D��8���E��D�������8���E��D��H��H��(���E��L��A��P���������YH�H�0���^H;� ���u�D��8���H�!�b�x�����t=H����������E��A�L$D�������H�����H�� D�����8���������A�L$E��H�����D����D����H�P@H�p L������H�5��bH�0���������H��8���������9FtH���H�����H����������H�U�b�xt[H�E�dH+%(��H��8���H�e�[A\A]A^A_]�fDH��b�xuH��������������!�������H�����H��t�H�����H9�t�H��@���1��H�=�cH���H�H�U��g���H���b�x8u+H��(���t!H��(���H�H��t�@p+����E��/���AVD�����E1�D����������H�+�cL��A������H�8���XZ���\���ff.��UH��AWI��AVI��AUATI��H��SH��H�����dH�%(H�E�1��E���H�����H��tH��H��讟��H���)H�ΝbE1�Dž���H�����L���xLD�������Dž��H�����H�����L������Dž���HDž(���Dž ���HDž���H�����H�G�H���H������H+G@H+G8H�������G0G,������H����H���-�� ���������H���L��1�訅����E��t*E1�fDH��L��1�A����~����E9�u�H��L��1�Hc۾�^���H�_��L��Lc�1��E���I�L�(���Hc�J�;H��(���H�����H�X�H��8H�����H9�tpE1�L��D��$���M��E��L���I��D��$���H��E��H�ڋ� ���L��M��L��PA���W�YH�H�(���^H;����u�L��D��$���M��I��H������x��H�����E��L��L��D�����H�� �,���H�����H�(��������H��(��������;FtH����H�����H���������H������x�������1��(����=H�U�dH+%(��H�e�[A\A]A^A_]�f�H�����E��L��D����H�P@H�p �z���H�����H�(��������H��(��������9F�J���H������x�f���H������H���V���I9��M���H��0���1��H�=߈cH���H�H�������$���H������x8u.H���t$H���H�H��t�@p+������������H��H��cM��1�AUE1�L���t�����H�(���X1�Z�(�������H�Ȟ�L��賂���Hc����f�H������xuH�����������Dž��Dž ������H�C@HC8H;��������H�s�H��H��8H��(���H������H9���H�5n�bE1�1�Dž���L���L�=��Dž$���Dž ���H�����fD�������uH�ߘbH�:��H�5��L����蹁����E���E1�L���L��1�薁��A���E9�u�H��L��1�E1��� ����l���H�̸�L��؉� ���1��N�����@L���L��1��6���A���E9�u�H���L��1�A���� ��������H��(���1ɺH������蟬��H�չ�L��H��1��ր���H��b�x��H�Ĝ�L��1�譀��H�������$���؉� �����$���;Bt*�H��(��������H�H��(���H;������r����� ���L��������L�����H�����I�� ����1��)�����E��t'E1��L���L��1�����A���E9�u�� ������H��(���1�1�L���m��������H���bH�H������:s����zy�����zm����������L��1�� ���H�	����H���L��؉� ���1��f�����4���H���bE1�Dž���H�������蚞��f.�UH��AWAVAUATE1�SH��HdH�%(H�E�1�H����L�6I��H��M���I�H������H����H�H������H���"H�H������H����L�8M���M�M����I�1L�������_���L������I��M�AI��M9�t~L������M��M��I��H������L��fDH�{���H��H������1ɺ���H����L��H��1��~��H�H�I�L9�u�M��H������M��L������M�GI��M9�t^H������L��M��I����L��H������1ɺ�l���H�8��L��H��1��}��M�?H�I�I9�u�H������H������L�xL�HM9�t\H������L�ːI���jL��H������1ɺ���H�ȴ�L��H��1��3}��M�?H�I�I9�u�H������H������L�@L�xM9�tdH������L��M��fDI����L��H������1ɺ脨��H�P��L��H��1��|��M�?H�I�L9�u�H������H������L�@L�xM9�tdH������L��M��fDI���JL��H������1ɺ����H�س�L��H��1��C|��M�?H�I�L9�u�H������M�~I��M9�tLf�I����L��H������1ɺ謧��H�x��L��H��1���{��M�?H�I�M9�u�L�sH��I9�tKL�=E�f�I�~�wfL��H������1ɺ�P���L���L��H��1��{��M�6H�I�I9�u�H�E�dH+%(��H��HL��[A\A]A^A_]�fDM�6I9�u���fDM�?M9������Y����M�?I9����������M�?L9��
����Q����M�?I9��|�������M�?I9����9����H�L9��d����������f�UH��AWAVAUATSH��XH������dH�%(H�E�H���bH�PHH�C�H��HE�E1�H������H���NH�I���H������H����H�H������H���H�H������H���{H�H������H����L�(M���KI�UH����H�2H�������0���H������H��I��L�zL�Z��M9���H������L������M��A��H������L�ېI��wIH������1ɺL���0���E���L��H�
w�HD�����I��1�H�e�E1��Ry��H�I�M�?I9�u�M��L������M����M�}I��M9�twH������H������f.�I��wGH������1ɺL��蠤���۾L��H�
�HD�����1�I��H�ֲ1���x��H�I�M�?M9�u�M����H������L�xL�hM9�ttH������H�������I��wGH������1ɺL�������۾L��H�
`�HD�����1�I��H�N�1��<x��H�I�M�?M9�u�M����H������L�hL�xM9�tuH������H�������I�}�wGH������1ɺL��萣���۾L��H�
ؓHD�����1�I��H�Ʊ1��w��H�I�M�mM9�u�M����H������L�xL�hM9�tsH������H������fDI��wGH������1ɺL�������۾L��H�
P�HD�����1�I��H�>�1��,w��H�I�M�?M9�u�M����H������L�hL�xM9�tuH������H�������I�}�wGH������1ɺL��耢���۾L��H�
ȒHD�����1�I��H���1��v��H�I�M�mM9�u�M����H������L�hL�xM9�tnH������H������fDI�}�wGH������1ɺL������۾L��H�
@�HD�����1�I��H�.�1��v��H�I�M�mM9�u�H�E�dH+%(uH��XL��[A\A]A^A_]��E���DUH��AWI��AVAUATI��SH��HH�O@dH�%(H�E�1�H�ǎb�xt
H���H�H�،b�@���,����H�E�����H�� L�-m���v���E�H���	L�}�I����L���H���I��H����H���L��M�w�1���u��H�U�L��L��Hc�萓��L��L��H�H�H�E�1���t��L��L��E����L��L��H��1���t��HcU�H]�H�
�bH��E�H�HËE�H]�;A�W����H�E�dH+%(�CH�E�H��H[A\A]A^A_]�f.�����H�� ��u��H����L�-C�H�E�E1�L�m�I��L�}��L���(���I��H��t�H�U�M�}�L��A��L��艒��H�d��L��Hc�1���s��L��L��H�H�H�E�����H�U��L��H��1��s��H]�H�H�H��bH]�D;p�x�������H�E�dH+%(uGH��HH�� L��L��[A\A]A^A_]�q��H���bH��1��01��ׅ��H�E����蕒��D��UH��AWI��AVAUATI��SH��8H�U�dH�%(H�E�H��b�xtH����YH�}�H�L�p�H9��EH��bA�H�HHI�H��H�E�H�$�HE�H�M�H�E�L��M��I���6@A�W(��L��L��L��A���Aw��H�II)FI�GXL�x�H9E���A�xu�L��I��$���{����ukH�}�D���Ã��t%I�vH�M�1�D��I�>H����{��H�II)FH�%�bL��L��L��H�xH�a���I�G H���T����Љ��Q���H��L��������u�I�GXL�x�H9E��Y����I�+E�H�U�dH+%(uH��8[A\A]A^A_]�@1���������Uf�H��H��AWI���AVI��H�=
�AUATSH��dH�%(H�E�1�H������)�����HDž����H������HDž�������H��b�@��W������2H��bA��`H�@H��8���H��H��@����ƒ���ˆ�H�����I��PH�FH�NH��p���H��x���L�h�H9�tmH��@���H������L�%�H��LE�H��L��I��fDH��E1�1�L��L��SH������M��L��H�N��1��7p��H�CXH�X�H9�x���u�H��p���I��P1�Dž`���H��X���H�H��h���H9���H��h���H�FH�NH��x���L�h�H9���D��W���H������H��p���D��M��A���]fDH��p���E1�1�L��L��E1�A�T$H�������r���H�h��L��H��1��io���I�D$XL�`�H9�x����QA�|$xu�L����x�����!E��u�H�ha�L��1��o����o������`���L����H�=���ј����H�����H���bH��H���I��PH�GH�OH��h���H��x���L�h�H9��H��@���H�~cH������L�%cbH��HE�H��p���H��L��I��f�L��L��H��SL��L����H��H���L�1��Fn��H�CXL�H�H9�x�����H��p���L��L���?����DL��L���=������������H��h���H�H��h���H9�X�������H����L��1���m�������DI��HH�=ʆbH������E1�H��x���H�OH�B H��h���H��`�������@Dž����H�L�h�H9��H��H�ͤ�HD�H��p����$f.�I��H1�I�EXL�h�H9���A�}xu�L���v��������������u��uH��p���L������L��x���D��L��L��H������A�U��������u�H������L���ރ���y���f�H��`����uH�U�dH+%(�uH�Ĩ[A\A]A^A_]�DL��L��蝛�����V���I�EXI��HL�h�H9��'���DL���
A����x��H��h����@ A9��t���L����H�=ǣ���H��bI��HH�HH�t�����8���H����L���L@�H��bA��L�H��H���1��k������Lv�L�
"�L��1�A��H�L���k������f�H��h���D��`���Džh���H������H�H��p���H9�X�����L��8���I��H��p���H�CH��H��x���H9�L�`���h���tcD��W����-�A��L��L��L���E1�A�T$�I�D$XL�`�H9�x���t,A�|$xu�L���t����t�L��L��������u������`���9�C�H��p�����h���H���`���H��p���H9�X����B���L��8���A��H��@���H��H���L��H��H�#�H��L�	HEƾH��1��Pj��L����H�=�����������L����H�=ˡ���I��HH�L�`�H9�tp@A�|$xuVL���s����������H������L��L��1�A�T$A�Ņ�t|DL���.���Xv��A9�u�I��H1�I�D$XL�`�H9�u�L���
�/v��L����H�=.��F���H��h����@ �����L��L���y������a���I��H�L����H�=������B����u���D��UH��AWAVAUATA��SHc�H���H�� ����6��<���L��`���D��;�����\���dH�%(H�E�H�΁bH�@HH������m��H�/ncH����	�]�@-[...f�P1H��mcH�� ����y��H���bH���H��t�[���DžX���E���`��<�����tNHDž(���9�X���|;H�=�mc��H�E�dH+%(�&	H��(���H���[A\A]A^A_]�H�� ����Ď��D�`H��bH�xH�iL���m��H��P���H����L�� ���L���7���M�v0�������M���fH��M��HDž(�����L9����IE�H��h����FDL���a����\���/��H��P���f�H�E�)E�H��p���I���H��H���L���H��h���H��x���H��b�X���3�x�!H��H���H��HH��b�x ��L��p���L��L���w��H��P���1�H��`���H�C|��f��fA����t
��;�����Hc�H�(�����<�����t��X�����X���9���A����I�����L����k��I��H����A��u�H�&bM�g�x �����I���I�_(耓��H����H�
�~b�ytI�WpH�H��x.f��H*��Y�H��x>f���H*��^��Z��J���f�H�ڃ�f�H��H	��H*��X��Y�H��y�H�ƒ�f��H��H	��H*��X�����\���L���8����������H�� ���A����`H�����u,��~(�L@�H��`���1�L�
{�A��H�����d�������f�H��`���L��H������Z�\����d����<������k�����X�����X���9��V���@H��P����q���\����H��{b�8�8���I���H��`���H�8H��臃��H�~b�H��H��H�1��7d�����f�H��P���L�m�L��H�E�L��)E�H�E�H��h���H�E���t���}������H��P���H��`���H��y1����c��H�H�(����S���fDH�@HH�����H��|b�xtI����'���A���H��h���H�~�L��H��P����@1��<m��H��H���H��p���H��x���H�H��PH�H)�H��p���H�QH��H��x���L�r�H9���H�����H���L�����L��p���I��H��HE���@���H��0����@H��0���H�Ǜ1��l��H�H�p���H)�x���H��{bL��L��L��H�xHt\A�V(��L��L��L���Xg��H��p���H��x���H�I�VXH�H)�H��p���L�r�H��x���I9�t'�؃�
�@����n���1��f�I�F H��t��Љ�뛐L�����H�������H�H)�1�H��P���H��p���H��`���H��H��x�����a��I�����H�L�p�H9���H�����H�@�L��p���L��@���L��P���H��HE�H��0���L��M��I���K�A�U(L���ԅ��H��0����H��`���I��H���1��pa��H��@����I�EXL�h�H9��t>H��h���L��L��L��L��p���H��x���H�IzbH�xHu�I�E H��t����DI��H��`����
�{���A��tfA��tH��yb�x��Hc�H�(����Z���DI����G���1�L�m�H��H���E1���@���L��L��P�����A�FI��I;��suI���f�L�m�H��L��H��h���H�E�)E�H��HH�M�E�w��4q���}�u�H��`���L��1�H��u�!`���@���A�FI��I9��w�Hc�@�������f�����H��`���H�� ����p����X������f.�H��`���L��L�������*����H��H���L��{H�����`�L@�1��Ji��H��p���H��x���H��:���I��4���H��H����݌��H��`���L��H���[��H�H�(������HDž(������HDž(����������H�4xb�,�H�=�H��ˈ���g����A~�����UH��AWI��AVE1�AUATE1�SH���H��(H�}�dH�%(H�EȋG8�E��Pf�D��f�f��L���H*��оL���Yú�H*�H�_���^��a^��H�I�I��I��StmD���`��H��H��I���l~����t�H�E�F�D�8H�@wb�x)tE��t�E��t�U����m���L��H��L��1��I����]��H�I�I��Su�H�E�dH+%(uH��(L��[A\A]A^A_]��}��f.�D��UH��H��dH�%(H�E�H�Pvb�G �H��tb����у�����	ȉG$�G.)�f�W(H�E�dH+%(u���|��ff.���U�H��AWAVI��H�5F�AUE1�ATI��SH��dH�%(H�E�1��Bb����uNH�=�!c��I��L�=�!cH��tL���
}��A�Ņ�tNI�?I�� ��H��u�E1�L��A�������i��fDH�E�dH+%(ujH��D��[A\A]A^A_]��L���z��I��I��H��t��,H���w��H��t��H�xHc�H��萀��fIn�fH:"�H�5!c��{��f���UH��ATH��D�g0dH�%(H�E�1��w0���_��H�E�dH+%(u	D��L�e����G{�����UH��H��dH�%(H�E�1�@��t����5t)���u f/iy�2s1�f.�@�ƃ�3H�E�dH+%(u��ԁ����z��ff.�@��UH��H��dH�%(H�E�1�H�E�dH+%(uD�G"� �A��D����Z���zz��f.���UH��H��dH�%(H�E�1�H�E�dH+%(u�G D�G"��G.A�ɍ<D���xZ���#z��UH��AWAVAUATSH��H��(dH�%(H�E�1��G$�E��W`�4H�߉E�������t
��������D�k&E1�D�s$��vA�E��J�1�H�H�H��I�Ŀ�{��E��t3E1�fDD��D��H���jp��E9����aA���C���E9�u�1���z���E��S(A� �{ �s")‹M���C.��X�������uH�sPH��t	1�1�贀��H�E�dH+%(u>H��(1�[A\A]A^A_]�@�`E�|$���E9��P����u����E��p�����x��f���UH��H��dH�%(H�E�1�H�E�dH+%(u�H�����;p���x��fD��UH��H��dH�%(H�E�1�H�E�dH+%(u�H��H���jr���Ex��D��UH��H���H��`���H��h���L��p���L��x�����t )E�)M�)U�)]�)e�)m�)u�)}�dH�%(H��H���1�H�EH��0���Dž0���H��8���H��P���H��@���Dž4���0�&X��H��H���dH+%(u���w��ff.�f���UH��AVAUATSH��dH�%(H�E�1������tH�GI��I��������tR��t H�E�dH+%(�H��[A\A]A^]�H��H��L��Є�tOH�I;\$��I�D$xH��u��5H�X�@H��L��Є�tH�[I;\$��I�D$xH��u��H�_M�u�M��~B@H��H��L��Є�tH�I;\$tZI�D$xH��u�I��s�I�\$�8���1�I��t�H�[�f�H��L��Є�t�H�[I;\$t�I�D$xH��u�I����f�1��1��p���� v����UH��AUATI��SH��H��H�dH�%(H�E�1���t)��t|��thH�E�dH+%(usH��[A\A]]�fDH�{M�l$�M��~"f.��[o��H��I��s�H�{�@t��an��I��H������W��H���fD��^��H����au�����UH��AUI��ATSH��L�gdH�%(H�E�1�M��tT1��L����n��I��H��t$��1�L���l����L��L��A�UhA�E(��9�ủ�H�U�dH+%(u$H��[A\A]]�H��W��I��I�E1�M��u�����t��f���U��H��H��HwdH�%(H�E�1�H;7��H�U�dH+%(u���t����Uf�H��H��dH�%(H�E�1�H�E�dH+%(uH�Gp1�1����?t��ff.�@��UH��AUI��1�ATI��1�H��dH�%(H�E�1��RT���8L���z��H�E�dH+%(uA�T$$H��L��L��A\A]��]�i^����s��@��UH��AVAUI��ATI��H��L�5ikbdH�%(H�E�1�L����y��L��L����W��H�E�dH+%(uH��L��A\A]A^]��Q���Xs�����UH��AVI��AUATI��SH��H���H��H���L��P���L��X�����t&)�`���)�p���)U�)]�)e�)m�)u�)}�dH�%(H��(���H�CXH����H���L�-�jbL���!y��L��H���&W��H�{@�}`��L��L�cH�aq��L��H�C@����H�EL��L��H������H��0���H�����Dž���Dž���0H�� �����A�ą�:L����P��A���H��(���dH+%(uBH���[A\A]A^]�f.�H�{H�gn��L���P��1��H��ibH�CX������q����UH��ATSH��H��L�%�ibdH�%(H�E�1�L���x���[l��H�{H����H�{@����H�E�dH+%(uH��L��[A\]�O���uq��D��UH��AUATI��H��L�-ibdH�%(H�E�1�L���w��L������L���O��H�E�dH+%(uH��1�A\A]]��q��ff.�@��UH��ATI��1�H��dH�%(H�E�1��k��I�t$@H�vmL��H�kbH��HD�1��uU��H�E�dH+%(u
L��L�e���z���p��@��UH��AWAVAUI��ATI��S��H��H��8���L��@���L��H�����t))�P���)�`���)�p���)]�)e�)m�)u�)}�dH�%(H�����1�H�EL��L�����H�����H������H�� ���L��L����L�=��H�����L�5��Dž���Dž���0�\~��L������y�g�L���0��H��������L��L���k�����t�H�����������E]������H�����dH+%(uQH��[A\A]A^A_]�H�EL��L��Dž���H�����H�� ���H�����Dž���0�@s��1���o�����UH��ATI��SH��H��dH�%(H�E�1��f.�L���X~��H���s�����t�H�U�dH+%(u	H��[A\]��n��ff.���UH��ATI��SH��H��dH�%(H�E�1��f.�L����}��H���_�����t�A���
tH�5ygb�ȍP�E�<YA��H�E�dH+%(uH��D��[A\]��n��fD��UH��H��dH�%(H�E�1����H�GH�E�dH+%(u
H�wH�Gp1����m��fD��UH��AWAVL�5�rAUA��ATSH��H��L�%OebdH�%(H�E�1��L����s��H���@�A����T��L����K��E��x[D���sZ����������tq���uh��������v"f�{*t��������v��������v�� uƃ��f������H�U�dH+%(�bH��[A\A]A^A_]�f��� t[��������wō���������Ic�L�>���1���f��H��SXH�s@H����P��H�{H��N�����f�H�K�s(���H�1��H9�����L�J�<H9�s
H��H��L)�H�1H�;H��H�S��Sp���f�H�CH��������s(H9��vH)�H��H��H�H��H��H)�fHn�fH:"���Sp�U���D���H���H9��;���H�K�S(H��H�H�H9�� ���H����H��H�K�Sp����H���S�����s(���H�߃�H��Hc���H9�H��fHn�HN�H)�H��fH:"���Sp����S*f��� ����K,����9��������f�C,�~���fDf�{*���C,f���`�����f�C,�T���@H�H���D���H�SH��H�H9��0���H��H�����H��H�S��Sp�����H��1�����qj�����UH��AVAUATI��SH��L�wH�_dH�%(H�E�1�L9�t~H��tyE1�fDI�D$xH��tH��L��Є�u*D��1�L���`��D��H��L��A�T$hA�D$(A��D9�tH�I9�u�D��H�U�dH+%(ufH��[A\A]A^]�f�I��H��L��Є�tH�I;\$t"I�D$xH��u�I�\$I9��Q���1��fDI�D$1��7����ki��ff.���UH��H��dH�%(H�E�1���t ��t;��t'H�E�dH+%(u@��fDH��Hw��fDH�GH�G��fD���H�WH�H�D�H�G���h����UH��AWAVAUATSH��H��L�odH�%(H�E�H�GA��M����E1�;��sdL�=�ab�PH�CxH��t
I�uH��Є�u&D��1�H���]_��D��L��H��Sh�C(A��D9�tA��I��D9��vA���D9�w�H�E�dH+%(u&H��D��[A\A]A^A_]��L�oL�o�\����g��ff.���UH��AVA��AUI���ATA��SE���H��dH�%(H�E�1��%i��L��D��D���^������D)�{��k��H�E�dH+%(uH��1�[A\A]A^]��h���pg����U�H��ATA��H��dH�%(H�E�1��h��D���v��H�E�dH+%(uL�e�1���h���g��ff.���UH��AWAVI��AUI��ATI���S��H��(dH�%(H�E�1�H9����>h��A�T$(I�D$I��H�I9��4E1�I9�vE��A)�D)�Alj�D��L���]��D����j����D��L���n]��M;l$��H�E�dH+%(��H��(1�[A\A]A^A_]�g���g��I�D$�1�I9���A�T$(I��H�I9���A��E����L��M��\���M�A�<�9j��L���D����\��A�D$(ID$�mI9��`����t���SD��L���\���q�t��L��SD���\���+�~t���&���f�E��A)�E���p���f�E����L��A)�D���]\���m�Ct��L��SD���E\�����J��E������I�D$����E����L��E�A)�D���
\���l��s��D���SL����[����J��D�����y����M�I�D$D)����@�l�����d��f���UH��AWAVAUATSH��(�u�dH�%(H�E�1���~1I��E�ǿA��A�ԉ���e��E��t8I�FB�#H��H9���H�E�dH+%(�H��([A\A]A^A_]�f�I�FD��H9�rЋ]�A)�L��D��S��[���t�r��L����D���Z����I��H�E�dH+%(��H��(1�[A\A]A^A_]�Ge���)�A��L����D���Z���t�r����y"�d@��L��D���Z�����g����rDE��E+~A9�|�D��D��L���bZ���l�Hr���U�L��D���JZ�����H���L���E��E+~E���9�����1c�����U1�H�=��H��SH��dH�%(H�E�1��H��H�5�cH��t'H��cH�K�{H�� H�S���I��H�s�H��u�H�E�dH+%(uH�]����b��f���UH��H��dH�%(H�E�H��[b���tH�~(���H�U�dH+%(u���kb��ff.���UH��AVAUI��ATA��H�%SfHn�H��D��fHn�H��H��`H�W8dH�%(H�E�H�1L�rfH:"�H��fH:"�H��[b)E�)M��@I)��R��H��ZbE���E��:����E�tp1��E�foU�H�
��C$fo]�H�]�H�MЉE�)U�)]���tH�U�L��L���Pj���}�tL���H�E�dH+%(��H��`[A\A]A^]���tL�����t���uMH�
�foe��C$�E�fom�H�]�H�M�)e�)m�@���E��v���D��8���fD�C$fou��E�fo}�H�]��E�H� H�E�)u�)}��0����`��f.���UH��H��dH�%(H�E�1�H�E�dH+%(u���N���{`��ff.���UH��H���H��`���H��h���L��p���L��x�����t )E�)M�)U�)]�)e�)m�)u�)}�dH�%(H��H���1�H�EH��0���Dž0���H��8���H��P���H��@���Dž4���0�V@��H��H���dH+%(u���_��ff.�f���UA��H��H��H�RYbdH�%(H�E�H�G8�RH�@E��t���A�5t(���uH)�A�2H�;p0tE1���A��A��3H�E�dH+%(u	�D���-f���(_�����UH��H��dH�%(H�E�1�H�E�dH+%(u���e����^��ff.���UH��H��dH�%(H�E�1�H�E�dH+%(u
�@���W���^�����UH��AWI��AVAUATSH��HH�W8H�5XbdH�%(H�E�1�L�j�CI)��Q���{
A��I�E҃�D�h$H�[Wb��D��xuN�4L���)e��A�G(1�D��L���H����1n��H�E�dH+%(�cH��HD��[A\A]A^A_]�@M���H�5
w�U�M��I�@�LE�I�G8L�P�CL�E�M��I�z-L�U�I)�I��@$�E�H��Vb�@
����QS����L�U�L�E�H���U��D���L��L�ljU�L�E��XQ�����*���L�E�I�>I�p8�0X��L�E��U�H����H�5QVb�>�`A����@x�u��E�I�~�E�tH�#Vb�x�����E��6L��L�E��U���c���U�I�L���U��M��@@��U��u��tu���<��L�E�I�@xH�x(�H�P���H��Ub�8����H�Bx��H�x(�H�P�t�H�׉M�L�E�H�U��S��H�U�L�E����M���H�rH�M�H������I�H������I����P^���M�������E�E1�9E�L��A���u�)ȉ�I��@@�tu���YB�����@H�BxH�x(�H�P�t��Y����A����@t�u��E����H�2�\���I�p8H�=u{�F���e����[��fDUH��AVAUATI��SH��H��dH�%(H�E�H�G8L�pH�UbHLJ��@M��I)�L���BO��L��L��L�����D��I�UH�H9��[L�`�f���I��$��I��$��A��$�����H��Sb�@@�����N�f�I��H��H)�I�T�8D���H��8�_�f(�H9�u��-��f/���H���H����H��SbD�N�HcH@J��L)�L�LL��I���!�\�H�Jf/�wH�JH�H����H�…�~�L���A���Lpf.�z�u�H��8L9�u���DI��$�H��tf��. �Y����S���I��$�I�D$pL�`�I9E�����L���]8��L���%C��H���H�E�dH+%(u=H��[A\A]A^]�L��1�Df��I��$�L��A�$�H�9�G��f����]Y��ff.�f�UH��AVAUATSH��H�$H��PH�FyL�o8I�]dH�%(H�E�1�I����=������H�RbM�u�@@����H��bH��PI�F L����L�K-L�r�@�L����@1���a��X1�ZL��1��QE���8L���_��A�T$$L��L���� C��1�H�U�dH+%(uVH�e�[A\A]A^]�f�H�)RbA��H�
�xH�zq�H�81���8��H��o�E���f.��������X��@UH��AWAVAUATSH��H��QbL�'dH�%(H�E�1�H����RH�H�G8H�@H)�L�(L9�tFH��PbI��L����8tG�H�{(�tH�{0I��H��t
L���LM��H��uWH�L9�u�H�=�p��9��1��@H�{0I��H��t
L���M��H��uH�I9�u�H�=�p�9��1��|fDL���x:��A�W&D��fHn�f������AH��t5��t1H�5Pb�>te�H�[H�{(�t�H��I�G��tH��u�I�_�AƇ�AƇ�H�U�dH+%(u.H��[A\A]A^A_]�@H��tH��H�[�D9�u�I�G��xV���UH��AWAVAUATSH��H�PbL�'dH�%(H�E�1�H����RH�XH�G8H�@H)�L�(L9�tFH�5ObI��L����8tFfDH�{(�tH�{0I��H��t
L���K��H��uWH�[L9�u�H�=<o�28��1��H�{0I��H��t
L���sK��H��uH�[I9�u�H�=o�7��1��{DL����8��A�W&D��fHn�f������AH��t5��t1H�5uNb�>te�H�[H�{(�t�H��I�G��tH��u�I�_�AƇ�AƇ�H�U�dH+%(u.H��[A\A]A^A_]�@H��tH��H�[�D9�u�I�G���T�����UH��AWAVAUATSH��H�$H��H�$H���&H�_H��`��I��L�����H��I��L��dH�%(H�E�1�H�Nb�@H)�1��H�H�
����H���ƅ���fHn�H��@��fH:"�H�|�L�����)�����~nNbƅ��fH:"�)���H�H�9H��8��H��0��H���TI�D$L�x A����=H9����C+��H�=Cm�P��H��@��H��BF�J(H�����������f�����H��Lb�8��
I���H��p��H�����H�XH��H��H�MbH��(���@H)�H��`��H��x��1�H��t�XH�������H��p��I��H��HDž���H��X���T^��L��L��������\
L��L���F���L�� ��M����H��KbA�t$dM�|$�8�|L�h����5���������h��f��fHn�����)������t6H��t1H��Kb�8��f�M�I�(�t���H��t��u�H�����L�����L�� ��ƅ��H�����1�L���E�����T��f���L���VB��A�Dž�����t�����|ݍ@��uw�H�.WHc�H�>��@H��0��L��L���Q���������A���L������ ��L��L��A������/G��H�s-L��1�H�=�j�
H���}DA��tGA���J���H��(��H���iH�(��MH����
H��p�[I������L����<��H�������>��H��0��H9�8����
H�E�dH+%(�kH�e�D��[A\A]A^A_]�E1�H��Ib�@@���Z
w+���A��b�P
H��Ib�@@��fD���A��b��H��Ib�@@�f.�A������A����L�� ��M��� ���H�AIbA�t$dM�|$�8tA�t$hL�h���W3����h�������fHn�f������)����H��tE��tAH��Hb�8uOH�A�)�f���M��H��@��H��@��H��@��u�H�����L�����ƅ��L�� ���p���M�I�(�t���H��t˅�u���E1䀽8����9���/��u>D��T��H��9��H�
iH�5iH�=i�3^����
�
�����9�������L����������H�=o�Q1������E1�H��(��H����_��H�x(���L�@@M����L�����H�
nh�@1�L����W��L���0�����E1�H��Gb�p�o���E1�H��Gb�p�\���@E1�DH�5�nL���<���<���E1�H��H��H��x���VR���!���E1�H�?Gb�pH��x���'F������E1�H�����H����L���;�����E1�D��T��H��9��H�
�gH�5�gH�=�g��\����
�������9�������L���9�������������E1�H��H��L���
O���u���E1��-H��H��x��H���5�3c�����L�����H�=rH��VL������|3c�����@,P1��U��Y^����E1�H�:Fb�H�
.Fb�@
��h����<GˆA
���E1�H�
FbH��x���p��D������E1�H��`��H��tH�x��������A���H��(���]3��L��X����H��p��L����W��L��L���2��m���E1�H�+Fb�x
���f�P�8���E1�H�����H�FbL��L�����L+�����@L�zL�ƺI)�L��h��M�����H�'EbH������8HcAx��I9�~D�ytH��h��L���L��H�� ��H��H��@(����H��Db�����H�� ���AtH��L)��E1�1�1��iW��H��X��L���:��u���E1�H��DbH��x���p�{C���V���fD=���������=�4���M���������fDL��L����M��t@M;$$t:A��t��A��������A�G��u���H�RHc�H�>���A��t�v���A���|0A�G��uw'H��SHc�H�>��A�������A��tE1�����E1��.����A�����������A���P���L���D��I��H������H�����p,��I������A�t$h�z���fD����L������������L9�H�� ��L��LN�H��h��H�H��BbL��H�ދR,����������H�� ��HcQxH��H��L)�fHn�H�=�ifH:"�)������+���y���f��x�%�@�J���DH�A�)�f���M��H��@��H��@��H��@��u���@H�=�m�+�������H�5�nL���6�����M������L����A��I��H�������H�����S2��I�����H��BbH�����L���@H�ZH)��W0��H��@,�����0���fDA��b����H��Ab�@@����H�IBbA��H�
gbH��a�H�81��(������@
�%���L��(��M��I�G�LE�L���~0������A�A�tH��AbI�w8�PH�����H�@H)�H�8H�H�B�H9�t4H�
Ab1Ҁ9tRH���H9�tOH�������H�H�HpH�A�H9�u�H�=�a1��2������H�HpH��H�A�H9�t�H;��u�L��H��h��L�xp��*��H��h���������f��fHn�����)����H��t3��t/H�5U@b�>�d�f�H��tH��M��49�u�H�����L�����ƅ�����H�H�����H9�u�-�H�H9�t$HcBx��x�H�������HcAx������H�IH9�u�L���L��H�������H��`��L��L�������x�H�=�k��(���g�H��@��H�8��A���k���A��b�����H�j?b�@@���D��T��H��9��H�
`H�5`H�=`�.U����
���9�������H�=�_�W(�����H�=�j�F(�����H�=rk�5(����M�I�(�t���H���������u����E1��\���H��`��L��L������A�������A������n��uE��D��UH��H��dH�%(H�E�1�H�E�dH+%(u��`1���;E��ff.���UH��AUI��ATI��SH��H��dH�%(H�E�1���4��1�1�1��0�����9��H�E�dH+%(uH��H���L��L��[A\A]]�4���D��ff.�f�UH��AWAVAUATSH��H�$H��H���L�8H��X�dH�%(H�E�1�H���&U��H�C�H���	H�
>bI��I���RH)�H��I���:8��I���H�{��p�;��H�����AoL��)�`�H�C�H��p��"��H��X�L��H��`��	0��H��<bI�_M�w�@@����H���aH��PH�C L����M�N-��@L��@���L��\1��L��XL��ZL���J��H�E�dH+%(��H�eظ[A\A]A^A_]�f�H�=!i�%����f�L����!��H�s�H�=-i1�H��-�+���H��<bA��H�
LcH�:\�H�81��#��H��Z�'�����B��DUH��AVI��AUATI��SH��H���dH�%(H�E�1��H�C(H9C(�lS��H�[0�{@t�E1��)I�D$(M�d$(L9�I�FxMD�H��tNL��L���Є�tBI9�t=A�|$@t�L��M�d$ M��tI�T$(I9T$(tI;D$0t�DL� I�FxH��u�@H�E�dH+%(uH��L��[A\A]A^]��B��@UH��AWAVAUATI��SL��H��HH�}��HH�u�H�M�D�M�dH�%(H�E�1����H����I��H�E��Hc���/G��I�EI��H���]H�E�H�M�D�p4I�M �M�I�EA�M8E���H�;bH���D�x)H�E�D�X0E�DE���Mc�L��H��I��f�H9�t6H;��u-Hc��I�t$XH�L��M�A�y:I�qHrH��H��u�A��E9�u�I�E(H�M�fHn�H�E�H�E�H����H���H�A(AE(HD�H�PfHn�L�hH�E�fH:"�AEL�xL�*L�pM9�tA�E�I��L��D�HE��H�}�E��L��H��L��L�E��f�����x~M�6L�E�M9�u�H�E�L�pM9�tV�H�_���H���H�P(fInŋ]��@@fH:"�H�P0@ �~E�I�U0�X8fH:"��@<I�E0H�A�E@1�A�E<H�U�dH+%(��H��H[A\A]A^A_]�f�Ic�D�]�H��1�H���=fDLc��M�D$X�pH��H�L�O��I�E�KDM�CL@H��t,Hc�H9�t$H9��u��P��u�H�H��t	����DD�]�A��E9�����F���L���,��������<����?��ff.���UH��AWAVAUA��ATSH��H��HH�u�L�~��dH�%(H�E�1�H�G8L�}�L��H���H�E�H-�H�E��s/��f�H����M������7��H�}�H�G(H9G(�I�@H�5�[H���1�����+�L ��L�e�D�m�1�M���7D1�H�5�XH��1��% ��D��f�H���7��HcE�M�?M���nH�M�L9�`����]H�M�I9���LI�VXIc��H�4�H��7b�z)tA��P��t�H�M��xH��H�E��}�HAL�`H�H�@H��t5M����f��I*��Y�zH����f���H*��^��E��u�H���3.���E�H��D��D���6��H�7b�x
u2�x���E�H�5�WH�߸������f.�L��H�5�WH��1����������H�ƃ�f��H��H	��H*��X��[���f�L��L��f�H���H	��H*��X������H�E�H�U�D�@8H�JH�rH�B(E����H��VH9�tH�M��y@H�
�HD�H��H�
9HD�H�}�H��A��L�
9H�5�VD�W0�O4RP1��w D��H�����H�� H�E�dH+%(�C�S$H�5�8H�e�H��[A\A]A^A_]�&���H����H�UVH9��u���H�M��y@H�
&HD��]�����5��1�� H��H�5YL�%L8�~�����L��H�ߍ��P�'&��H�E�M��L��L��H�5�UH��D�@81�A���A��H�E�dH+%(uu�S$L���1���fD� H�5�XH��1��
�����DL�
�UH9�tH�E��x@H�[LD�H�E�H�5zUH�ߋH4�P0L�@ 1����������:���U��H�w(H��AWAVAUATSH��H��P���dH�%(H�U�1�H�W(H��h���H�U�H9��:�@����(H�U�H9�h����H�u�H�~(H�F(H��`���H�}�H9���H�U�H�J(H�B(H��X���H��x���H9��yH��x���H�~(H�F(H��p���H�}�H9��)H�U�L�z(H�B(H�E�L9���M�w(I�G(H�E�L9���L�}�L�u�H�M�H�Y(H�A(H�E�H9�trL�c(L�s(M9�tMM�l$(M�|$(M9�t*M9�tL�����M�mM9�u�A�D$<A�D$@M�$$M9�u��C<�C@H�H9]�u�H�E��@<�@@H�E�H�H�E�H9E��d���L�}�A�G<A�G@M�?L9}��)���H�E��@<�@@H�E�H�H�E�H9�p������H��x����@<�@@H��x���H�H��x���H9�X��������H�E��@<�@@H�E�H�H�E�H9�`����L���H�E��@<�@@H�E�H�H�E�H9�h�������������H��P����@<�@@H�E�dH+%(uH�Ĉ[A\A]A^A_]��(8�����UH��AVAUATSH��dH�%(H�E�1������tI��I������������t H�E�dH+%(��H��[A\A]A^]�H�GxL���H��t
L��Є��dI�]�M��~&f�L��L����I��H��s�M�f�fDI��t�I����+�H�P(H�H(H9�t,I9�u'I��I�FxH��t�L��L���Є�t�L9�t�I�D$ H��u�M�d$�M�d$0A�|$@u�I�D$(I9D$(u���G��f�L�����I�D$(I9D$(��G��M�d$0A�|$@t�I�FxH���+���L��L���Є�����I����2H�P(H�H(H9�t8I9�u3I��I�FxH����L��L���Є����L9�����I�D$ H��u�M�d$��M�d$0A�|$@u�I�D$(I9D$(u��G��f�L�g���L��L���L�I������6��ff.�@U��H��AWAVAUATSH��@��;���H�w(H�� ���dH�%(H�U�1�H�W(H��@���H��h���H9�ts�@u��tiDž<���H��h���H9�@���t;��;���ukH��h����@<��d�����H��h����<���H�H��h���H9�@���u�H�� �����<����@@�p<H�E�dH+%(�7H�ĸ[A\A]A^A_]�H��h���Džd���H�~(H�F(H��0���H��X���H9���H��X���Dž`���H�~(H�F(H��(���H��P���H9���H��P����E�H�~(H�F(H��H���H��x���H9��]H��x����E�L�N(H�F(H��p���L9��#M�y(M�Y(A�M9��L�M�L��M��L�`(L�P(A�M��M9���I�\$(I�L$(H9���A�H9���L�{(H�S(L9���A�I9�t`L�e�M��H�]�H��D�m�E��L���H�M�D�M�L�U�H�E����Ew<M�?H�E�L�U�L9�D�M�H�M�u�M��E��L�e�H�]�D�m�D�s<�C@H�E�H9��m���E�l$<A�D$@M�$$E�M9��0���D�H<M���@@H�E�I9������L�M�E�A<A�A@DE�M�	L9�p��������H��x����}��@@�x<H�}�H��x���H9�H����i���H��P����u��@@�p<H��`���H��P���H9�(�������H��X�����`����@@���p<H��d���H��X���H9�0��������H��h�����d����@@���x<����f.�D�s<����E�l$<���D�H<���E�A<�����F<�E���H��x���� ����F<�E���H��P����7����F<��`�����H��X����S����F<��d���������2��UH��AWAVAUATSH��hH�}�L���@�u�dH�%(H�E�1�I�G H�E�H��t@A�W<)P<H�@ H��u�H�}�A�O<����M�)�A�@�E����I�G(H�E�I�G(H�E���H��H�E�H9�tP�E�H9�t8L��p����}���H�E��@<�E���H�E�u�H�H�E�H;E�u�L��p����E�A�G@A�G<H�E�H��t+�u�p<H�@ H��tDA�W<P<H�@ H��u�A�G<�E�H�M��E�E����H�E�dH+%(�LH��h[A\A]A^A_]��H9E�t�@��t!L�e�H��L9�tH�߾���H�L9�u�A�G<A�G@�E��N���H�M��E�H�q(H�A(H��x���H�u�H9���H�M�L�y(H�A(H�E�L9���A�L;}�tFI�_(M�w(I9�twA�L9�tH�߾���Dk<H�I9�u�E�o<A�G@M�?E�L9}�u�H�E�D�`<�@@H�De�H�E�H9�x����q���H�E��u��@@�p<�[����E�o<�f�D�a<H��뽋A<�E����6����u/��D��UH��AUATSH��H��L�gH�WxdH�%(H�E�1�M����L;g��E1��DH�SxI��H��tL��H��҄�u91�D��H����%��D��L��H��ShIc�HCH;uL����C(A��D9�tL��H�����I9�u�H�E�dH+%(uQH��D��[A\A]]Ë�����j���L���H��tL��H��҄�u
H�SxL�c�E���L��H����I�����b.��f���UI���H��AWAVI��H�5���AUL�����fHn�ATL��L������SfIn�H����H���dH�%(H�E�1�)�����H�H���H�=}Gƅ>���fH:"�H�����L��H���H������Dž����)�p����*��H�e'bfo�����xtL��M9���OH��H���E1�1�L�������H��H���H�JL��p����[�H������L��(���1�L9�tfDP<H�L9�u�L�����E1����1�L������H��tD�sH��H���H��L�������L��L� S�H�����L�J8P�1���5��^1�_H��LL��L���w������1�1�L���S���8L���3��H�G&b�x
���x��H�PercentH������������M��L�=)L��H�5GL�
�FL�������TPH�GPH�GP1����XZL����4���L�����H���(�D��L���E��A�ǃ�E������h���\=��ǃ�qt=u�L�����L������I�]L��M9�t;I�MfHn�I�}I��fl�H�KH�AE軘��L��I���@��H�L��M9�u�H�U�dH+%(�DH�e�D��[A\A]A^A_]�A���������x)I����u8�fDI9��uM�6��M��tL9�u���������^���DL9�t�I9��u�A��PM�6��M��u���DH�SamplesH�������@���f.��odƅ���Dž����Perif���������fD��e�o���1�L���e����`������tk�U�����<�����~��>������<������x������o����%���DH�5qPL���������@�L��������fDH�{����DH�1Pf�D��L�����A�ǃ�e����Etr�<���~5��>u����=����~3=u�H��L���)�������������������@��htσ�q�z������D�L���3����^���fD1�L�������I���A������[����	)��f���UH��H��dH�%(H�E�1�f9w(��H�U�dH+%(u����(�����UH��H��dH�%(H�E�1�H�E�dH+%(u�1���(��ff.�f���UH��H��dH�%(H�E�H�G@H�U�dH+%(u���M(��ff.�f���UH��H��dH�%(H�E�H�GHH�U�dH+%(u���
(��ff.�f���UH��H��dH�%(H�E�H�GPH�U�dH+%(u����'��ff.�f���UH��H��dH�%(H�E�H�GXH�U�dH+%(u���'��ff.�f���UH��H��dH�%(H�E�H�G`H�U�dH+%(u���M'��ff.�f���UH��H��dH�%(H�E�H���H�H�U�dH+%(u���'�����UH��H��dH�%(H�E�1�H�E�dH+%(u�1����&��ff.�f���UH��H��dH�%(H�E�H�X b�@��t
H9����H�U�dH+%(u���x&�����UH��SL��A� H��dH�%(H�E�1��~yt�~xE�A��A��-H��H�{�1�RL�1>H��@���CXZH�E�dH+%(uH�]�����%��@UH��ATL�g`L��H��dH�%(H�E�1����L������H��t[I��I�D$�I�L$�I�|$�H9�t/H��H9�tj@�pyH9�tH��H�H9�t>�@yH9�u�D���L�����I��H��u�H�E�dH+%(u2L�e���fDI�T$ H���@y��f.�I�t$ H��@����%��ff.�f���UH��AVAUI��ATSH��H���H��@�����t)�`���dH�%(H��(���1�L�cH�EDž���0H�������`���H��0���I�<$A�t$	H�� ���D�p�����Dž�������H�sH�;L�������D������H�I�<$H�5�=A��1��%��H��(���dH+%(uH���D��[A\A]A^]��$��ff.���UL��;H�
}���H��H��L�
�bdH�%(H�E�1�j���H�U�dH+%(u����#����UL�D;H�
��H��H��L�
_bdH�%(H�E�1�j���H�U�dH+%(u���s#����UL��:H�
]���H��H��L�
bdH�%(H�E�1�j�I���H�U�dH+%(u���##����UL��:H�
���H��H��L�
�bdH�%(H�E�1�j���H�U�dH+%(u����"����UL�T:H�
=���H��H��L�
obdH�%(H�E�1�j���H�U�dH+%(u���"����UH��ATI��H��H���dH�%(H�E�1���)����fA�D$$H�E�dH+%(u
L��L�e������*"��f.���UI��H��AWI��AVAUATA��S� H��8H�U�L�M�dH�%(H�E�1��~yt�~xۃ��-A����E�t$I�B(H��tH�kb�RH)�H�8�����E�E�w$E��L��L�U�A�D$D��A)��?���4��tL�U�H�E��5I��M����@L���R(��1�D��L���5��D��H�5M9L������1�H�5l���L������}�L������+�l��H�E�dH+%(uH�u�H��8D��L��[A\A]A^A_]����� ��ff.���U1�H��AWAVAUI��SH��H��L���H���dH�%(H�E�1�M������+��A��M��tlM�vHc�1�L��H)�H�R;H�I��xM�����
��I���L��M���A�H�0;1�Ic�H)�H��
��A�A���u1L�����H�E�dH+%(u:H��D��[A]A^A_]�f.�Ic�L��H��:H)�H�<1��E
��A������ff.���UH��ATI��SH��H��dH�%(H�E�H��bH�x ��H�=b�PH�C0H)�H�8tsI��$�H��toI��$�H�{ �F��I��$���qt��u*H����tH�U�dH+%(upH��[A\]�I��$�L��pH�����u!1���DI��$�H��p�����L���h.��1��@H���H�p �0�����,���1�����ff.�@��UH��AWAVAUATSH��H��hH�~dH�%(H�E�1�H���<�7���H���_��D�hdMc�L���@��I��H���eH�{�H���$���H�.9H��L��L��1����A��H�;H��u=H�sL����)��L������1�H�U�dH+%(�
H�e�[A\A]A^A_]�DH��bL���I�L9�uH��@BI��@BL�}�� L��p���E)�L���A��H�M�� H��p���H��H��x����"��H��Ic�Ic�H��x���L�M��1��L�c8QH������`&��XZ�,���f�H�~0H��t7H��-���D�hdMc����f�H�K0E1�H�����H��-H�8����A�dA�d������������f���UH��ATI��H��H���H�vdH�%(H�E�1�H�����
��L���!,��H�E�dH+%(uL�e�1����u��D��UH��H��H�~(dH�%(H�E�1�����H�E�dH+%(u�1���4��@��UH��H���N<H�VdH�%(H�E�H������H������H�E�dH+%(u�1������ff.�@��UH��AVAUATI��SH��H��L���dH�%(H�E�1�L�����A��PL��H��I��H��61�����M��tH�L��H��6L��H)�H�<1�����H�E�dH+%(uH��1�[A\A]A^]��6��fD��UH��AWAVAUATI��SH���H��8dH�%(H�E�1����H��D��PA������A��L���4I����!��H��b�xt
H9���>L������L��L������L��L�������>
��D������L��L��2L�
.H���A�� H��5ME�H��1�AV����H�5�3L��Lc�X1�ZL���(���H���L�L2L��D��	M��tcE��L������AƄ$���L��L���	��D������L��L�
�H��H��1�A�� H�'5LE�1��@��L��H�I�A�T$$L��D)��W��E��tI��$�H�E�dH+%(��H�e�[A\A]A^A_]����������H���+��I��H�H�������H����%H9���������PH�I�H���p���H9�u��f���@�2L��L������L������� ��L������L�������������ff.�f�UH��AWAVAUATSH��H��dH�%(H�E�1���uDH��0H����&ƃ$H�E�dH+%(��H��[A\A]A^A_]�f.�H�� L�� H����&L��E1��}���H��H��t����H��A��L���a���I��H��t�L�5b�L������I��H���i���I�D$�I�T$�M�|$�D�hyH9�tI�T$ I�D$�H���@yL�����A�F��t��u�L���u������ff.���UH��AUATI��SH��dH�%(H�E�H�@b�xuj���H�^��tTI�t$I�<$H��.1�L��.����I�$H�;H�5�0A��1�����H�E�dH+%(u>H�e�D��[A\A]]Ð�O|�H��L�
�bL��.H�
��j��A��XZ�����ff.��UH��AWAVI��AUE��ATSH�ӺH��H�EL�M H��P�����l���H���������H��L��X���D�}L��H���H��`���dH�%(H�E�1�HDžp���HDžx����"��L��X���I��H��b�xunE����H��`���D��L��H��D��l���L����H��p�������H��x�������H�E�dH+%(��H�Ę�[A\A]A^A_]�fDL������1��@H��L��L��@���L��X�������L��1��L��X���H��x���H�50L�%w-���L��@�����LI�x���E���/���H��H���H��P���L�������@L��L��X������M��1�L��H��p���H�C3��;��L�%-L��X�����LI�p����������f�UH��AWAVAUATSH��H�$H��HH�=)�dH�%(H�E�1�����H���H��H������I��H���H����1�� Dž��H�����H�H����� H�����H�DL���#��H�����xu�H��L�`I�ٹATL�����1�L����L���-��ZH�5�rYL�����I��H��t�H����H�����[��H��vH������	����ueL���M�������X���H�=;1����L�����������uxA�����H�E�dH+%(�5H�e�D��[A\A]A^A_]�DL�����Lc���J����H���{���L�����J����H���������V�������H����1�A�����������9�r^����H����D�h�I��N��-��H��H������L9�u�H����N��-���H��H������I9�u�����H�H�����K��H��H��t.�=��bL�%}buI�$E1����b�h���I�<$�'����H�=�91��'����J����=��H����J�<��m��H�=F91����L������p���f���UH��H��dH�%(H�E�1�������t$H�=�91����1�H�U�dH+%(u������������ff.��UA��H��H��AUI�պATH����I��A��H�� dH�%(H�E�1����I��M��taH�bL��H�����H��,�L�@HH��+M��LD�1��S����xPH�����H�U�dH+%(u_H�� A\A]]�@H�����M��t$I��L��1��H�/�����y�1��H��H�[*1�������y�1�����DUH��AWAVAUM��ATI��H��S��H��xH�EL��`����U�H�E�H�Ef�M�H�E�H�E H�E�dH�%(H�E�H�:b�xMD�L�E��y�H��H��x���1�H���@H��I����	���E�H������D�}���l����E��@�E�H��x���H�p�H��H��H��p���H�u��	��L�k�A�H��x���H��1�H��8H�E�L9�u?�6@H��D��L��H�H�E��Є���A��+�
M�mE1�L;m���E��u�}��E�A�}yA� tA�}xE�A��A��-H�M�H�H��u��E�H���u�D��H�u��u�L��L���L�E�PD#M��>���fE�H�� L��D�}�H�E�D���Є��`���D��+�l���H�U�dH+%(��H�e�[A\A]A^A_]�f�A��-uDH��p����U���D��L��`���L��L�H@��H���u��u�H�p �u����fE�D�}�H�� H�E�D��L��Є��u���H��x����_����b����I�G@IG8L9��E��(����?��ff.�@UH��AWI��AVAUA��ATA��SH��H���H�EH�� ���L��@���H���L��P���H��X���f��n���dH�%(H�E�1����H��0���H�xb�x��I���L�8H��bA�Չ����H�� ����@H�� ���A����H����X���M��D��L��0�����P���H�ߋ������@������H�� H��P����ztH�� ���H���H�U�dH+%(�cH�e�[A\A]A^A_]�fDH�� ���L�x@�D���H����D�����H��H���I��H���~H���v��ƅ���H��uI�F@IF8L9������D��n���D��h�����@������3H��8���H��0���L�e��@L��H�x��	��1�L��H��x���H��%�c������H��'E1�E1�HDž(���H��x���1�H��L��P���L��H��H��@���D��h����D��n�����H��(���A�����L�����A��f��n�����h�����h���H��X���H��A��Є�u[H��H���tQH��H���H��H��8����;��HDžx���H��H���H��P���H�H������H��P���H��H��fDD��+�������f�H���h�D�����H�����H����H��I������ƅH���H��uI�F@IF8L9���H���D��n���L��P���H�����H�p�H��I��H�����H��(����u��M�n�H�����L��H��(H��8���L9���Džh���A��>�H��I�$H��X���D��H��Є���A��+��M�mE1�L;�8����cE������H�����Džh����A�}yA� tA�}xE�A��A��-I�$D��`���H���m���H��D��L��H��L��0���D#�H���AT��@���VH��(������f�n���H�� D��n���D��`����&���E��H��X���D��H��Є�uH����������f.�D��`���D��+�����(�����h����p�,���f.�HDž(���H��8���L�p�H��(H��`���I9��\A�E1�H��(���E1�D��M��A���!�M�6L9�`���tDH��x���I��E1����L��L���%���E��ME�H��u�M��H�� ���f.�E1�H��8���L�q�H�Q�H��`���I9���I��D���fDM�61�L9�`���tZH��x���H��L��I��A��������ME�H��u�L��H�4 M�������H��x���H��(�������DL��I����D��`���A�Džh���fDH�����H�P�L�p�I9��-���L��8���E��I���Ef�H��I�$H��X���D��H��Є��!���A��+���M�?E1�L;�8�������E������H�����Džh����A�yA� tA�xE�A��A��-I�$D��`���H���n���H��D��L��H��L��0���D#�H���AT��@���VH��(�����f�n���H�� D��n���D��`����'������h����p�k���E1�A�1��
���I��������ff.�f�UH��AWA��AVD��AUATI��D��SH��H����p���D�o$dH�%(H�E�1�����fA��$��w�����H���a�x��A��$ ��w����PA��$$uL��H��x����&�H��x���A��$&�A��$%��H���@H�B�E1�fD9{(t}f�H���L��H������A����w���L�����������H��L�p�������H����PHDž�����B���A�XZ�A��$ ��w���u6f��tQ�E1�H�E�dH+%(��H�e�D��[A\A]A^A_]��I��$�fIn�fH:"���f��u�DžT���ƅv��� �DH��u�DžT���ƅv��� ��v���1�D��H��H��������������w������������H���H��HH�L�p�H9��fH������E1�ƅ`���H��x���H������D��H��h��������w���t
������4H���-����`�������T������:I�F H������L��H����H��X���L����H��X���L��L����`����5�Hc�`���H�5T1�H�x���H����H��x���ƅ`���D��D+�����H���A�I�FXL�p�H9�H�_H��x���f�A�~xHDž����)�����M��$�H������H��h���HDž����H������u�L����X����e���X�����uk�C,D�z9������H���D���h���@H�������DžT���ƅv���+����f�I��$�fIn�fH:"������L��L���u����X�����u�H�������H�5�H��1�A���S��\���fDH��`���L��A�V(H��`���L��L����H��x���H�5�1�H�����s���f����H�5��H��A��D��A�����v���-�����D��p���1�A������D��v���H��1�A��H�5&�������5H������q���fDDžT���ƅv���-����}��ff.�f�UH��AWAVAUATSH��xL���dH�%(H�E�1�I��H�P H��h������"H������H��Dž|���H��p���f�H��p���f�A��BHDž����HDž����H������L��Dž����)�����tH�{�a�z�qH��HH�L�p�H9���E1��f.��C,E�gD9�rH������L����|���L��L������A�VH������H������H�H�H)�H������H����������������u 1�H�9��H�H�����H)�����tiH���E��H��HI�FXL�p�H9�tKA�~xu�L���$����L���L��L��������9���H���H��HI�FXL�p�H9�u�fDD��|���1�H��D��A���@��8H������S$H��p���H�߃��
�H��h���D��|���D9x ~NL����=���DH��p���H�K1����Hc�H�����H)������o���H����R���H�E�dH+%(uH��x[A\A]A^A_]����DUf�H��AWAVAUL�-�ATL��SH��xH������L���H������dH�%(H�E�1�A��$`H��p���HDž������l���1�H������HDž����)��������Hc�H�����H)������RI��$Pƅx���H������H�CL�{H��`���1�L�p�I9���L������M��I��DH�������@,9�[E1�1�L��L��L��A�WH������H������H�H�H)�H������H������t71�L����H�H�����H)�����tƅx���I�GX��L�x�H9�����u���x�����H��`���I��$PH��x���H�H������H9��X@H������A�H�AH�YL�p�H9�����f�H������E1�1�L��L��A�VH������H��H�������q��H��I�����H������I��I9�tH�PL������H������Ic�H�H)�����H������tfE1�I�FXL�p�H9�tVA�~xu�L��������E���^���H������H������1�H�2`���H�H�����H)������,���@H������H�H������H9�x���t2H������H������1�H�Y�r�H�H�����H)����������H������1�1�H�����8H�������S$H��p���H�߃��V�H�E�dH+%(uxH��x[A\A]A^A_]�DL��L���U��������������l���H������H��L��H�������L@�1���Hc�H�����H)����������C����*���f.�UH��ATI��SH�� �E�dH�%(H�E�1�H����DI��$�I�\$(�	��H���H���a�ztI�T$pH�H����f��H*��Y�7H��xif���H*��^��Z�A��$u
/E���L���#�H��t~L����I��H���j���1�H�U�dH+%(��H�� [A\]�@H�ƒ�f��H��H	��H*��X��DH�ڃ�f�H��H	��H*��X��L���f.�L��1��f��I��H������{���Df��7����L���`��������UH��AVAUATSH�� D���dH�%(H�E�1�E����L�gI��H��M����������������A��I�}�E��j���E�H���
���I�ĸI�U1�f��M����H����DI�EH��tvA��$
t9A��$t.A��$A��$
)���H9��n1�H)�fA��$�A��L��1��E�����E�H���m���I��H��u�DH�E�dH+%(�AH�� [A\A]A^]�H���L�`0L�g�������&���fDu&1�M�efA��$�DM�e1�H����A��$
t<A��$t1H��H�ڄ��A��$H��H9��1�H�fA��$�A���]�L������I��H���$���H���L�p(����H����H�<�a�ztI�T$pL�2M��xaf��I*��YH4H��x+f���H*��^��Z�A��$u�/E�r�����@H�ƒ�f��H��H	��H*��X���DL��A��f�H��L	��H*��X��@f��f.�A��$
H��H9����fA��$M�e�7����I�}��H������A��I���m�H��u]�F@f��I*��YW3H����f���H*��^��Z�A��$u
/E���L���X��I��H����I��$�M�t$(�J��H��tmH���a�ztI�T$pL�2M���w���L��A��f�H��L	��H*��X��Y�2H���f���H�ƒ�f��H��H	��H*��X��Q���fDf��J����1��l�����x���A��$
�����A��$���A��$
fA��$�����fA��$M�e���I�E1�f������'����UH��AWD��AVI��D��AUI��ATS��H����P���D�g$��D���dH�%(H�E�1���A�� HDž������V�����������I���H��H�����`L��������@�����tI���fIn�fH:"�A��A��$uL�����A��&ƅW��� ��U�����tA��%<��U�������-��W�����W���H��H�����������A"���1�D��L�������V���tA������4L��������P���H�5t�L��@��A)��[��I���H��PH�BH�JH��h���H�X�H9��H������E1�D��`���H��x���H������E��I��ƅp���H��X�����f���W���H�5�L��1����D��`���A��H�C H������L��H���fH��`���H���H��`���H��L������p����P��Hc�p���H�5o1�H�x���L�����H��x���D+�����ƅp���D�E����`���H�CXH�X�H9�h����+M���H��x���f�{xHDž����)�����H������H��X���HDž����H������u�H�������uuA�E,E�|$D9�V��V���tA����G�4L��������p��������D��`���H�5
L��1�A���������@E���(����L��H��������x��������H��p���H��S(H��p���H��L��������H��x���H�51�L���W�����f���p���E��D��`�����A�E,D9���A���H�5��L��A��D��A�������D���A������U���t����H�E�dH+%(��H�e�D��[A\A]A^A_]���5L���������fDH������f��V���HDž����H��x���H������H������)�����HDž����H������tA������4L���X���I��H�H�X�H9��������p���L������tO��W���H�5�
L��1�L����������H�C A��L��L��H��H��tl��A)�H�CXH�X�I9�������H�5�L���n���@��@���H�5d�L��\@���A)��J��A�E,D9��Q���������S(H��x���A)��^�H�5	L��H��1��J��H�5��a������� �[���H�������A��H��� u��;���f�H��H���E1������5L�������6���fDfA9](�����H��H�����L��L��H����P���f�L������H������H����L����P��HDž����������x�A�XZ���fD�5L������.���fDƅp���E1������l�ff.����UH��AWAVAUATSH��H��hdH�%(H�E�1�����L�cM���/H���f�H��x������E�M����L���������/�wSD�}�A�|$A����A���L��H�����A��C(D�}�D9���E��E���/�L��1����I��H����A��uxL�%�aM�u�A�|$ �a���I���M�}(�q���H����A�|$tI�UpL�:M����f��I*��Y�)H��xjf���H*��^��Z�����@Aƅ
�[���L��H���5�E��C(;E��=���H�E�dH+%(�	�E�H��h[A\A]A^A_]�H�ƒ�f��H��H	��H*��X����f�L��A��f�H��L	��H*��X��J�����u�H��D�s$E����u����H���A������`�r��u��u��b1�H�����4H����G�$dH�5��H��D��E)����H���D�u�L��PI�D$I�L$E1�H�M�L�p�H9���H��L�m�D��I��I���F1�I���L��E��A�V��H�5<�L�����‰��$��)]�1�I�FXL�p�H9E�t:M���A�~xu�L�������uA�G,E�l$D9�~�I�FXE��L�p�H9E�u�L�m�L���E�H�5��H��D�<@D�����D�u��C,E)�A9������H�5��H��A��D������E�����DL��L��������n����A���H��x���f�1�H�������������5H�����~���f�f�����L�}�L��� �� L��f��Z���]�L��H�5�A��H��1����A��E)�����f.�H���a�xtP��L�cM�������H���f�L�`0H�����H��x���L�cM��������E�����>��}�����@UfHn�fI:"�H��AWI��AVM��AUI��ATI��SH��H��dH�%(H�E�1�)�0���H����H�����L���H�\H��1��L���J������H�{��L�s0fo�0���A�H�CDž(���KH�x�aH�8H���H�5�@D��,������D��,���H����M���I�EH��D����|H�� ���L��@���I���ƅD���Dž@��� in �/��-�P�pH�%�aHc�Hc�fA�L��H���H)�I��L����M��D��,�����L�����M��H�AH��H�� ���1���+��D��,�����x"H�Y��fo�0���L�spH�CXD��(���SHI���H�C@H�E�dH+%(��H�ĸD��[A\A]A^A_]�M��tI�H-H��L�:��S���@1�H�
(��L��H��������A���Dž(���E1��W���fDD��,���M��tI�N-M��H�~����f�H�� ���1�L��H����#��D��,��������������f.�UH��AUATSH��H��H���a���dH�%(H�E�H����ztD���H�x0E1��5���t��
t
��
A�1��������H���=�H��H��u�Ic�H���C�,H�U�dH+%(ufH��[A\A]]�DH�xhtD����u���DH�xpu�H���uދ�D��y�.�"z�uɀzuÀ��u�D�hH�2������ff.�f�UH��ATI��S1�H���A�$�dH�%(H�E�1�.x"H���HLJ�H�x0z8u6H�'�a�zu)H�@PI��$��:1�H�������A�$�H����H��H��u�fHn�fl�A�$�L���9���I��$�A��$���L��fA�D$$�C��H�E�dH+%(uH��L��[A\]�t�����ff.�@UH��ATI��H��SH��I��$�dH�%(H�E�1�H��H�@0��tSH��tNH�~ptfI��$�H��p�m�1����I��$�H�@p���I��$��O��L���w���H�E�dH+%(urH��1�[A\]�DH�_ ��H�5����uH���1�H�=t�����I��$�H�Xp�t�I��$�I��$�H�pp����j��������UH��AUATSH��L�nH���dH�%(H�E�1�H��HH�z8t^M��tYH�~hI��txH���H��h�X�1�����M��$�I�}h�+�I�Eh����I��$����L���Y���H�E�dH+%(��H��1�[A\A]]�f�A�}!H�5��ur�B8��uSH�=�1���I��$�L�����1��H�Ch�M�I��$�I��$�H�ph����a����A�UH�=51��V��@L���8���H��I��$�H��H�o����L�ff.����UH��ATH��dH�%(H�E�H���H��HD�B4E��tZ�V8��xS��DI����xbH���H��D���I��$�1��	ǀD�����i�I��$��<��L����H�E�dH+%(u@L�e�1���D��D��	�#�I��$�I��$�H��D�W����`���UH��ATI��SH��H�~(I��$�dH�%(H�E�1�H��H�@0��tSH��tNH�~ptiI��$�H��p��1����I��$�H�@p���I��$�����L������H�E�dH+%(uuH��1�[A\]��H�_ �'�H�5@���uH���1�H�=	�g���I��$�H�Xp��I��$�I��$�H�pp�C���g����I�f�UH��`H��AWAVAUATS��H��x@�u�dH�%(H�E�1��A���E�H�E�H��tU�É�h���H�u�H�F�H�V�H9�t)�}�ua�u�fD�@x�Hy��H�H9�u�u���uuH�}��e��H�E�H��u�H�E�dH+%(�c�E�H��x[A\A]A^A_]���M�D���xyu�@xH�H9�u�M��fD�@xH�H9�uԉM�H�E�H�x �l���H�E�H���q����E�H�}�H�G�H�W�H9�t)�}�uA�u�fD�@x�Hy��H�H9�u�u���uUH�}����H�E�H��u��u�u�������M�D���xyu�@xH�H9�u�M��fD�@xH�H9�uԉM�H�E�H�x ���H�E�H��t�E1�H�}�H�G�H�W�H9�t'�}�uMf��@x�HyA��H�H9�u��uHH�}�����H�E�H��u�D}��?���D�@xH�H9�tA���xyu��@xH�H9�u��@H�E�H�x �#���H�E�H��t�D��l���E1�H�u�H�F�H�V�H9�t'�}�uUf��@x�HyA��H�H9�u��uPH�}��O��H�E�H��u�D��l���E��?�����@xH�H9�tA���xyu��@xH�H9�u��@H�E�H�x �{���H�E�H��t�D��p���1�H�u�H�F�H�V�H9�t�}�uNf��@x�Hy��H�H9�u��uQH�}����H�E�H��u�D��p���A��@�����@xH�H9�t���xyu��@xH�H9�u��DH�E�H�x �۾��H�E�H��t�E1���t���D��H�}�H�G�H�W�H9�t%�}�uS��@x�py��H�H9�u�@��uPH�}����H�E�H��u�A�ދ�t���D��4���D�@xH�H9�t���xyu��@xH�H9�u��DH�E�H�x �3���I��H��t�E1��]�D��I�D$�I�L$�H9�t#�}�uID�@x�py��H�H9�u�@��uHL���h��I��H��u�A�ߋ]�D��B���f��@xH�H9�t���xyu��@xH�H9�u��DI�|$ 螽��H��t�D�u�L��x���E1�I��I�D$�I�T$�H9�t E��uO�@x�HyA��H�H9�u��uPL������I��H��u�L��x���D��I���f��@xH�H9�tA���xyu��@xH�H9�u��@I�|$ ���I��H��t�E1�I�@�I�P�H9�tE��u=��@x�HyA��H�H9�u��u@L���@��I��H��u�E��X����@xH�H9�tA���xyu��@xH�H9�u���@��h���I�x�L��`�������L��`���A���1���UH��AWAVAUATSH��8H�}��U�dH�%(H�E�1�������H�� A��1ۉE�����I��H���Wf�I�G�I�W�H9�t'E1�E��uw�@x�pyA��H�H9�u�@��uwD�L���D��I��H��u�1��}�D�H�E�f��"H�E�dH+%(��H��8[A\A]A^A_]�fD�@xH�H9�tA���xyu��@xH�H9�u��@I� �O���I��H���t���E1�I�A�I�q�H9�t$E��uJfD�@x�xyA��H�H9�u�@��uGL�����I��H��u�E��(�����@xH�H9�tA���xyu��@xH�H9�u��@�u�I�y�L�M��H���L�M�A���L��8M����I��1��V�f��I*��Y7H����f���H*��^��Z�A��$uA/����L������I��H������I��$�M�|$(�)�H��tiH�
��a�ytI�T$pL�:M���v���L��L��f�H���H	��H*��X��Y�H���c���H�ƒ�f��H��H	��H*��X��N���f��M���1�����v��fDUf�H��AWAVAUATI��SH��(@�u�dH�%(H�E�1�@�����E�H���H�X0@H���'I��H��L�k�苼��A��H��uL�������}�A���,����AƇ
�U�L��L�����I���M�w(���H����H�5^�a�~tI�WpL�2M��� f��I*��YgH����f���H*��^��Z�A���/����A�$�/�����A���A��t
�}���I��$������}��AƇH������@L����A��$�H�E�dH+%(�H��(L��[A\A]A^A_]�_����AƇ
������1�fA��
����f�H�ƒ�f��H��H	��H*��X�����f�L��L��f�H���H	��H*��X������AƇ
�fD���#���A��
I�$���f�����(�L���������I��$��AƇfA��
�������ff.�f�UH��`H��AWE1�AVAUATSH��XdH�%(H�E�1��Ķ��H���?I��I�F�I�V�H9��A���xy�A�xx���-H�H9�uۀ�-��I�~ �p���I��H����L�u�E1�D�}�I�D$�I�T$�H9���A���xy���xx�~�-H�H9�uۀ�-�hI�|$ ����I��H���RD�m�1�L�e�I�F�I�V�H9�����xy���xx��-H�H9�u܀�-��I�~ 豵��I��H�����]�E1�L�u�I�E�I�U�H9���A���xy�T�xx���-H�H9�uۀ�-�lI�} �S���I��H���WD�}�1�L�m�I�D$�I�T$�H9�����xy��xx���-H�H9�u܀�-��I�|$ ��I��H�����]�E1�I�F�I�V�H9���A���xy���xx���-H�H9�uۀ�-unI�~ 蝴��I��H��t]1�I�G�I�W�H9�t6���xy���xxt#�-H�H9�u�-uI���h����fDL������I��H��u�A�DL�����I��H���C����]�D�fDL�����I��H������D�}�L�m�A�f�L���h��I��H���A����]�L�u�D�f�L���H��I��H������D�m�L�e�AݐL���(��I��H���C���D�}�L�u�E�L�����I��H������H�E�dH+%(uH��XD��[A\A]A^A_]�� ����fD� ����fD� �c���fD� ���fD� ����fD� �K���fD� ������f�UH��AWAVAUATSH��HdH�%(H�E�1�軲��H�����E�I���H�I�a�@������txI�G�I�W�H9�t&1�f�H��H���H9�u�yx��]�L������I��H��u�H�E�dH+%(��E�H��H[A\A]A^A_]Àzx��fD�E��f.�I�W�I�O�1�1�H9�t*@���.�zyu`H���� H9�tH���H9�u�I�W�I�O�H9�t*@���Y�zyu�H���H9�t� H���H9�u�E��1���fD�zx�f���H����-H9�u��@I� �W���I��H����L�}�E1��]�I�D$�I�T$�H9��A���xy�8�xx��-H�H9�uۀ�-��I�|$ ���I��H����D�u�E1�L�e�I�E�I�U�H9���A���xy���xx���-H�H9�uۀ�-�pI�} 藰��H���^D�}�E1�I��L�m�I�F�I�V�H9��A���xy���xx���-H�H9�uۀ�-��I�~ �8���H����D�e�E1�I��I�G�I�W�H9���A���xy���xx���-H�H9�uۀ�-urI� ���H��tdE1�D��I��I�D$�I�T$�H9�t4���xy�W�xxt!�-H�H9�u�-uI�|$�����L�����I��H��u�A�DL������I��H���?���D�e�E�DL������I��H������D�}�L�m�E�f�L�����I��H���=���D�u�L�e�E��L�����I��H��������]�L�}�D����D� ����fD� ����fD� �l���fDH���H9����f�H���H9�u�����H���H9����-������ �o���fD� ���H���H9�������H���H9�u�E������E���������ff.��UH��AVAUATSH��dH�%(H�E�1����5��'A���L��8E1�M����H������I��$�M�t$(���H���)H��a�ztI�T$pL�2M����f��I*��Y"H���qf���H*��^��Z�A��$u/��A��L�����I��H���r���H�E�dH+%(��H��D��[A\A]A^]�I�t$�H������A�L���`��I��H��t�I��$�M�t$(����H���=H�>�a�ztI�T$pL�2M����f��I*��YFH��xYf���H*��^��Z�A��$u�/���x���A��A��$
�M���L������I��H���a�������@H�ƒ�f��H��H	��H*��X��DL��A��f�H��L	��H*��X��[����H�ƒ�f��H��H	��H*��X��z���f�L��A��f�H��L	��H*��X��?����H�E�dH+%(u:H��H�� [A\A]A^]���f�f�����f������2��f�UH��AVAUATSH��L���dH�%(H�E�1�H���M����H����I��$�H��H9���D�@aE���?�p`A��$$uL���ٯ��A��$"���H��)�A��$������.H)��L�-8�aA�}uiE1�A��$%�����G1�L��H���r���fA��$"A�}t
��A)�D��A��$�&��������D�L��H��� ���A��$%A���>A�}���tA��$"A)�D)�A��$A��$'t	AƄ$'1�fA��$"���1���unH��A�H�E�dH+%(��H��D��[A\A]A^]�fDE1���H)������@E��$&E��t�A��$%�h���@1�H��A���I��$ �[������fDf��t����������F���@����AƄ$'fA��$"�P���f�A��$�;������ff.�f�UH��AWAVAUATSH��hL�zdH�%(H�E�1�M����I�G H���������L�rI��I��H��M��tgH��a�PL��H)�H�8��1�I�N-H���L���"��������oAD$ H�CI�D$0H� ���I�D$��f@H��L�u�A�1�QL��L�`��H��x����@�@���H��x���XM��Z1�1Ҿ����I��H��u(H�C1�H�U�dH+%(u@H�e�[A\A]A^A_]þH��購��H��tI� L����L�s��L���������f.���UH��AVAUATSH��L���dH�%(H�E�1�H���M����H����I��$�H��H9����xa���p`A��$$uL��蹫��A��$"���H��)�A��$������.H)��L�-�aA�}��E1�A��$%�����;1�L��H���N���fA��$"A�}t
��A)�D��A��$�����H�����H�E�dH+%(�SH��1�[A\A]A^]�fD�L��H������A��$%A���A�}���tA��$"A)�D)�A��$A��$'t	AƄ$'1�fA��$"���1���u>H���Z���f�H)������@A��$&�9���A��$%�g���1�H������f�I��$ �C����fDf��t#������H���������D����AƄ$'fA��$"�P���f�A��$�?����}��ff.�f���UH��AWI��AVAUE1�ATI��S��H�������������dH�%(H�E�H���H��tD�hI���L��L�� ���H��(I�G�&��A���I���H��t
��L��L����1�L��H�g�L��L��菦�������u;L�%��@D��L������Ã�e>��B���@���"w-Ic�L�>����e��B���C���"�~fDL���0���H�E�dH+%(��H�e؉�[A\A]A^A_]�@I���H���p�����%�c���L������V���fDA���I�����H��HA�W(A������K�A ��A�G.��)�I�fA�G(��fA+Wf9����������)BH�H)�I����D�L��������fDI�_�'����5��bE�O(��
��I���D�FD���bE�G.H�JHA���WH�=�P1�A�wA�7���H�� �d���@1�L�����Q���f����t+��+�����L��������*����+�p����I���I���H�����H���H�x�I����A��H�xh�!f��H�x0.�z�7fD1��@1�H������A��H���4��H��H��u�fHn�fl�A��L���S��L����蹛�������t8H�����H��`�����	9�D
t��D
�L��1�H�%����I���H��t
��L��L����L��L��������������W��+�>���L���n�������������A�G.A�G.�fA�G(����f.�H�xp����H���������D�������.*�����H�
�a�y�QuA������H�x0�������H�@PI������@H�e�Hc�H�>��I���H��������%�����L������DH�ȋ
�b�������H�h�aA��H�
H�H������b�H�81�����I�������I���M���H����H�x�I����A��H�xh��.�H�x0z��1�� f�1�H���u����A��H������H��H��u�fHn�fl�A��L������L�����I��������t2I��$`�����	9�D
t��D
�L��1�H����~���I���H��t
��L��L����L��L��������A���I�����H��HA�G(A��������Q I�A�W.��)Љ�fA+WfA�G(f9��A�������)BH�H)�I��)����L���o�����I�_聶���5�bE�O(��
��I���D�FD��bE�G.H�JHA���WH�=f�P1�A�wA�7�_��H�� ���1�L�����������H�������H�x0����A�W.A�G.�fA�G(�n���H�xp����H����������D���.�zBu@H�
��a�y�Qu
A���tH�x0������H�@PI��������u����H�x0���������M����5�bL���h���H�e�aA��H�
E�H������b�H�81�����I����(����W���H��a�R�f���H���a�R�+������UH�T���H�
����H�5����H�=O���H��H��dH�%(H�E�H��aH�P H�ٕ��H���H�
+���H��@H���H��`H���H�E�dH+%(u��Z���襹��D��UH��I��H��H��dH�%(H�E�1�H���H�=�H�
��fHn�Ɓ�fH:"�H�����H�ApH���aAXH��H1��P�����t�w �x@�q.t\I��PD�I*H�GH��H�p�H9�tA�AH�VXA����H�r�H9�u�A��fD�I*H�E�dH+%(u3�L��锪��@H�H�p�H9�t��Q*H�FX��H�p�H9�u�f�Q*�蕸��D��UH��AUI����ATH��dH�%(H�E�1��%��I��H��tL��H���!���H�E�dH+%(u
H��L��A\A]]��0�����UH��H��dH�%(H�E�1�H�E�dH+%(u��Х�����ff.�UH��AWAVAUATSH��H�$H��H�$H��H�]H�����M��M�͉����H����L����L��������������dH�%(H�E�1�H���H��H������Ƙ��I�Ǹ����M����H�~���M��M�������M���I���Dž,�������H�����@�����������1�1�1������������f��.�z��
�A��I���H�x0E1��@1�I���-����A��H���|���H��H��u�fIn�fl�A����Y���I���H����
H����轲��1��H�����H�����H�H������H������H�H��aH���H��t跾��H����tH���I�GP����H�e�HDž����H���������H�#�HD�H������É����1ɻ����E1�E1�Dž,������H����L��蜓��I�����,��H��tI���L������L�b��z-�����P���w��H�5I�Hc�H�>���=��~|=��������S���I�������L��臷��H����L�����f�H��H���"��L9�u,��H�U�dH+%(�AH�e�[A\A]A^A_]�=��=���H���H��������DH�)�a�{��H������������H��a��0;�(�������(9��SDž,����������fDDž�����f�H�����H���D�h,E���]I���H���;H�άa�8�!H����H���H�����H������H����I���A��H�H�����H�<�H����I��H��H���pH�xX�eH�H8H���1��L������_���L�������I���x>Ic�H��p���A��H��H����H�
<���H����Ic�H����H�����L��H��HH�z8��M����H�xhH�
��H���HE�A�~!L�Z���D�J8E����E�NL��H�@�1��L�����蝡��L�����I���H��H��x7Ic�H�
���A��H��H����H����L����Ic�H�����L��D�B0E����M����L��L�����舾��L�����L�����uI�D$ L���I���L��H�
��H�B��L�����H�xpH���HE�1��Ϡ��L�������x7Ic�H�
:��A��H��H����H����L��(��Ic�H�����L��I���I���H��t#H����H���H9����xa��H����H�HI���H��H�z0����H��H�����ttL��1�H����L���������L�����H�������x7Ic�A��H��H��(��H�
L���H����Ic�H����H�����L��I���H��H�r4��t��x{��DL��H�G�A��H�W��L�������H�
/�HI�1��l���L�������x6Ic�H�
���A��H��H����H������8��Ic�H�����L��H������1�H���L��������xIc�H�
و��A��H��H����H��,��L�����L�����L�����L���������H���+D9���H�L��H��I�4������,����uH��L��D���B������uŋ�,�����������>���f�H�����L������t���H�ݨa�p�d���@I����ܕ����������������,���2���H�5f�L���k�������������H����H������H�@������H��@��D�����H�
m�H��H�5W�H�=��蓽����
����1���@��H�����HD�H�����H������L��������H�����L��L��������s���H�����e����(�������H�=��1�����E���H�����7���H�����L��HDž��HDž0���*����
���I���H�������H�8�-���L��H��h�������H�����H�����袖������L����H���H����M�������I�^H��@���H��@���������A�FH���H�����f���L��L��L��(���L���O���H�����H����p,���.I���H���&���H�XH������H�S H��������������H�@fHn�fH:"�H����H�O�aH���KH)�H�:���(��H�����L��譌�����H�
a�aH�=���������4��)օ��1A���1������y���I�����8��H��H�J4���Z������R�����DH��D���sI������I���1��	ǀD�����l���I����@���L��������L����A���H���L������"DA����HA����� �[�@L��1�H���ѕ��1�L��觴����t�H�����H�5�	�@���H�����H���JA���I��A��������>��������H���ޯ��H��H�����H�����I��H�����H���aM�l$�x��H�����f�I��$�L�Y�A��$�H������HDžp����`H��P��HDžX�� �LR�H��)�`��������@1����� A��$�������tA��$
<���������-H�����H��P���b���I��$�H��PH�BH�JL�p�H9���	L�����M��I��H��L��L��L��A�T$(H��P��H��X��H�H�H)�I�D$XH��P��H��X��H�H�H9���H���1�I�����H�H�P��H)�X���L��@��D�����H�
6�L��H�5��H�=���\�����
���H��P��L��蔸��f��f/�wf/
��������H�|�L��������E�D�A��I���.�H�x0����H���a�z��H�@PI����1�@H�����H����H����H��������HDž���H�������fo(�H��@��H����)�@��fo�)�P��fo�)�`���x�H�����f�fA��$HDžp��HDžX�� H��P��)�`����H���a�x�xA��$�RA��$
Ƀ��-A��$
����-�����H�����H���1��/���I���H��HH�H�X�H9��A�L�����E���\�L��P��L��H��E1�L���S(H��L��L��蒌��H�H�P��H)�X��I���H��HH�CXH�X�H9����{xu�H�����H��L����'�����u;E��u�H��X��H��P��H�ھ1��2���H�H�P��H)�X���U���DL��H���m�����u�I���H��HH�CXH�X�H9��w���f.�L�����H�����H�����1�H������������-���A��L��1������蠋�������H����I��H���$���H�����般��H�����H�=�1��í���&�fDL����������H�^�L�f�1�H��P������H��P��H��X��I��$�H�H�H)�H�H��P��H��X��L�p�H9�th�H�z�1��ҏ��L��H��L��H�H�P��H)�X��A�V(H��P��H��X��H�H�H)�I�FXH��P��H��X��L�p�I9�$�u�H�����H��萳��1�H�پH�����H�*�胅��A��$���������������H�����f�1�L��A��$�H��)�@��L��0��H��@��H��{��L��L�~��P��)�0���D���AXAY�0���I���ƅ��� H��HH�H�X�H9�����H�����H�����H�p�1���„�����DH�=��1�����M�DI���t)H�����H����P8��t	M�����@,���7H�����E1�1�L��L�����L��H��H��������A�I���Mc�H����H����H���N����H���L��1��H�
�L�����L������j���L�����L�����H�M���x@I��H�����H����A��J����H�����BDŽ<��J����Mc�N��I���H���H����
L��1�H�
ο�L�����L������ϑ��L�����L�����H�����x@I��H�[���A��J����H�����BDŽ<��J����H����Mc�N��I���H����H���tvL��1�H�
F��L�����L������8���L�����L�������x@I��H�˅��A��J����H�����BDŽ<��J����Mc�H����N��I������L��1�H����L�����L�����贐��L����������L�����H�g���H����A��I��J����Ic�L���W�@H�����f�L��1�H��L��)�@��L��0��H��@��H�]x��L��z���P)�0���Ԓ��^_����DI�����@L�����L������������,���(�L��H����1�L������̏��L������*�H�����@H���H�������ƅ��� � ����H��X��H��P�����H�����H�x�A����L�����臬��L�����I��I���I���H����H����H���H9����@`��H�
ֻH�ƻHE�L��1��L�����H������L�������x/Ic�H�
���A��H��H����Ic�H����H�����L��I����K���D��	�
���I���I���H��D�C�����I���L��(��苝��H��I���H�PpH9��6H�PhH9���H��DH9��O�H��HD�Z4E���;���8�����-�D��DE������D�N���f.������H���L��1��|�����H����H���H�����H����H�����H����H�����H�H H������A��H�����H�HH9HPt/H����H��H�����Ic�H�J`H��@H�4�H��H����A�H����Ic�I���H�����H�<��{�H������m���HDž���I���E1�L�������L��L�����H������z��L�����H�����I��I���H��H���I���L�@M�������H�����1�L��L��L�����H��H����輳��H����A�Ic�H�����L���r���H�����L��I��E1�L��L��H��H�����y���H�����H����A�H���Ic�H�����L������H��(��L�������"�H��D�J���1�AQ���H��@��L�i�H���ɥ��_AXH�������HDž���H����E1�����&�<�H�����1ɺL�����詟��L�����L���������H�
%��U���I��������L�g0�R��������f(��^�������H���{f���H*��Y����f/
�o�!�H,�H�����H���a�C�Z��A����1�L��L���s����A��L���r���I��H�����A��M�u�t1�AƅfA��
A��t=fA��t2H��a�xt%�{�:H�����I��(I��H��S �L��蒀��AƅL��I������A��Aƅ
�9���E1�fE��
�6���H����H�ɓa�8��L���L�����M�����I�^HH�����M�f`H��H�����A�H��I��L�%�1�AT�@��@肣��XL��ZM��1�1Ҿ�܄��I��H���?�H��裑��H����H�{ L���ށ��H�����L��0��H�@HH��(�����x��I�E(H���}f��H*��Y����f/�msv�H,�H���������\
�m�H,�H��?H���������H����E1�HDž���I���
�H�ƒ�f��H��H	��H*��X��p�����%�_����\hmH���H,�H1�H��������I�EpH��4���L���H��H�����A�H��I��L���1�AT�@��@�	���AZL��A[M��1�1Ҿ�a���I��H�����H���(���H����H�{ L���c���I���L��0��H�@��������H�H-H���L��1��qv����H�ƒ�f�H��H	��H*��X��n���H�����L��菴����Dž,������� �L��蓢��I���HDž0��H�@H��(���I�HDž0��I�FHH��(���.��!���H������@�8H�����H�����H�=ͳH��1�蓟����I���I����L������H�����HDž0��H�@HH��(�����p�����UI��H��AWI��AVAUATSH��H���H�����D�g�~$�aD������~
��adH�%(H�E�H��{��fH:"�H�Kp��fH:"�A����A����H���a�xt4I�?I9���H��E1�1�H9��H���A�I9�u�A����D��A��L�� ���������L��1��H�L��8���L��X����U�)U�)M��H�=��L�M��E�躒��M�7M9�t5f�L���8u��H���@�����D���H��H9���M�6f��D���M9�u�H��X���E1�H�����H��tD�{1�H�b�H�5�L���z�����HD��L���%���A�ƃ���o���V��
tu���u�H��tH�{��}�uÀ}�t������t�H�ž�L��1��~s���E���M�6M9������B������q��=�d���L�U�M���W���H���fD�����M��I��H�����L������I�A������H��D��L�M��E�L��M��PH����������H�5��L���XZ�
�����	������q������M�vL;�����t���M�v�k���A��H�����A����_A�������fDL���0���H�E�dH+%(��H�e�D��[A\A]A^A_]�H�G�x�:���H�x	�/����I�?H��E��1�D��APH�����I�����_AXA����H�5d�L���i������i������@H�����M��H�����L��蟠��H��L�M�E1��E��D��L�������H�����P���YL��^��H�5��讚����	��������qtZ����M�vL;����u�M�v�z����A��L��D�����A��������A����/������f.�A������M�6L;���������M�6���1�E1��"���M�6L;��������M�6���L��D���������H��������A������0���苒��ff.���UfHn�fH:"�H��AWAVAUATI��Sf~�H��xdH�%(H�E�1�)�`���HDžx����s��H����I��H�Ov��fo�`���I���A���A���܁��1�1�1���}������f�1�L��H��x���)E�)E�)E�)E��„�����
H��L�5u�L�-�L�%��1ɺH��L���o����Bto%��
����?u�L��L�������f.���atk��qu�L���ޓ��H��x������1�H�U�dH+%(��H��x[A\A]A^A_]�fDH��x���1�L��H��tgL������\���f.�I���H���B���H�PH���5����~@H�u�L��fH:"�E��p������f�H���a�@
���H�5	�됸�����M���舐�����UH��ATI��SH���H��dH�%(H�E�1�����f�L����薉���S*�g���t������wH��H�KL�K L��PH�5n�1�A��$�A����q��A�T$$A��$��D)�XY��H�E�dH+%(u1H�e�[A\]�DH�E�dH+%(uH�e�H�s-L��[A\]�Ez��蠏����UH���H��AWAVAUATL�����SL��H����~��adH�%(H�E�1��H�H�B H������H�x0H����fH:"�H�0�aH��(���)�p���H�������Nq��H���
H��aH��E1��H�G �I9�LB�����������G�������脈��H��H��u̹�1�M��H�������L�Ѩ�ŗ���H�
E�L�爅����H�����H��HN�H������H�@ H���1��&s������L�=+�L�5�L�-ɫ1�L���~����/��d��t��u�L�牅�����y�������H�U�dH+%(�QH���[A\A]A^A_]Ð������蔇��H��H������������qt�=t�1�L���
~����/�{�������^���L������E1�L��L��L��L��L�����蕜����
�3���������0L�����u������H�5��aH�ȍQ �E�<xtjH������L��L�����跊��L�����H��t2fn@�H�� ���fl�)���������H��aE1��-���L��H�=��1��Ȕ�����L�Ϻ1��D���H������H��襂��L�����댸�������譌��f.���UH��AWI��AVI�ֺAUI��H�5�ATSH��(dH�%(H�E�1��Br����u~A��IcF��?rI�H��I�H�H�U����H�U�H�H��tKI^1�M�F L��H��H�2���*{����x&A�FH�E�dH+%(uH��(D��[A\A]A^A_]�A��������Ӌ����UH��ATI��SH��H��dH�%(H�E�1���F uKI�D$��u_����������H�E�dH+%(�H��[A\]�fDH� -F +met�GricH�I�D$��t�H���2y��fo��edH��@ --xf�p�@I�D$��r���H���x���gsH� -F +ireH�H�f�H�@
I�D$���H���H����x���gsH� -F +ureH�H�f�P�@
I�D$�����H���x��H�H� -F +phyH��dr�Cs_adf�C�C����b���f���UH��H���H��ATI���SH��H�8�adH�%(H�E�1��3�R}���y��L���p����xYH�5Ч�1��m��H�?�aH�8�ߙ��1�1�1���u�����~��H�E�dH+%(u*H��[A\]�Nq��fD�3L��H�c�1�1���|���蜉��ff.����U� H��AWAVI��H������L��`���AUE1�ATL������SH��H��dH�%(H�E�1�H������H��0���H��8���fHn�L���H�fI:"�H������)�@����g���fo�@���L���$H��H��(���L���H��`H��x���ƅ����)�`����n��H��@���H����H����H�sH������H��輁��Hc�p���H��`���H��H�H�h���H�g�L���H�H��aL�
,�L�-[�H��x���L��x!MD�H��1�S�Ww��ZY����p���x	����p���H��`���H�́aH�H�5�L�
ԥL�!��z!H�4�L��H�
��LD�H��H��h���SH�<�H��x���1���v��AZA[����p���x	����p���H��`���H�W�aH�H�5��L�
_�L���H�4��z!H�
"�L��LD�H��H��h����SH�<�H��x���1��ov��AXAY��x��p���L��H�=�����l��H��aHc�p���H�5L�H��`���L���L�
פ�z!I��L��H�4�MD�H��h���H��S�H�<�H��x���1���u��^_������p����X��p�����?�p�?A�@)�A)؃��_H��8���fn�Hc�D��fo^�H��fp���fo5f�fo^�H�H��f��L�1���@���Dfo�f��f��fo�fr�f��fr�f8%�fs�f8%�f��f��f��Df��DH�� H9�u�D����D9�����H��@���Hc�)؍@��H�H�H��̀���H��H��̀����J��?tT��Hc�)؍@��H�H�H��̀���H��H��̀����J��>t&��Hc�)؍@��H�H�H��̀���H��H��̀���L��8���H���@�)�I�4<L��
j��1�L��H�1ҍ�ω�8��������8�����A���ZA9��9��|H��@���A9���8���E1��r����~"E1�@K�<�I����D9��E���H��}aH�5y�H�H���H�5�M��PH�
_�VLD�8���H��X���HD�L��0���H���1�M���ms��ZY���H��X���臊��H��X����q��1�H�U�dH+%(�9H�e�[A\A]A^A_]��H��8���L��H�������j���f.�L������E1�H�
x���8���L��H�5�H�=�����D��8�����
�H��@����gq����~eƅ8���A���������H�"�H�������H�H��0����D��8���H��ŀ����~~��D��8����V���f�Hc������Hc��X�����������M����1�L��(���H��0���H�
X�D��8����d���D��8��������p����$����؂�����UI���H��AVAUATI��SH��M�(dH�%(H�E�1�� f�E��Us��L��M�t$8���p��M���L��I��H�E�M9�LF��4�g���A�T$$L��L���l��H�E�dH+%(u
H��[A\A]A^]��5���D��UH��AVAUH��H���ATSH��H��@���H���~A|adH�%(H�E�H����fH:"�)�0������H�{��1�I��H���v��L���L���L��@���L���H�x�Ӿ
��}���SH��u�Hc������I��H����L�(L�p�fDL�h�I��M�n��
L���}��H��u�H��P����fo�0���L��P����H�L��]�H�TyaH���H�5��L��h���H�E�1�)M��e����x<H��D1�L���6q��=��a��?t~J��htx��qu�L���l��L���n��H��@����|n��H�E�dH+%(uvH�İ1�[A\A]A^]���t���t��@=uH�E�
�w����=�e���H��L���m���U���DH�E�H�P�H��	HG�H�E��7����"���f���UH��AUI��H�5��ATI��H��dH�%(H�E�1�迀����tH�E�dH+%(u6H��1�A\A]]�H�E�dH+%(uH��L��L��H�=pbA\A]]�|��������UH��H��dH�%(H�E�1�H�E�dH+%(u�1�H�=L����d���b��f���UH��AWAVAUATS��H��hH������H�������������H������������dH�%(H�E�1��l���Hc��H������舄��H������H��������������H������E1�A��L������L������H��x����fDA��I��H��E9���I�?�@L������E�OE�G1�L��H�8��H���m����y�A�F�E��t6H������HcЉ�H��H��L�d�H�I)�DH��H���l�I9�u�H��������k�������H�U�dH+%(��H�e�[A\A]A^A_]�@L������D������1�L��x���L��D��讈��Hc�D��M�$�@L��I�����M9�u�H�������sk����x�9������x���H��L��@���H�������H�;L��H+=/bL������~��H�=bH;L������HcЍpƄ@���,Hc���H)�L��q~��H�;�@L���a~��H������L��H���.v��D�CH��yH���������L��E��HD�H������H��va�z!H���HD�H��������t��L��LE�L�=SyM��E��tNL������L������1�� H�
�T�L��L����������SL������H��L��������LE�D�CH��xI��E��xgH��x���H����1�� H�
#T�L������L������H�������v����CL��x���H������L��������H���L������LI�H��taL���H�H��t|H��H�
���AVH������������Q������H������ARATAWAQI��1�RH�R�ASAU�j��H��`���R���H������豁��H�������%i��1��8���fDH��wI���u���H��������1��	���Hc��r����{��f.�f���UH��H��dH�%(H�E�1��{abH�E�dH+%(u����z��D��UH��ATL�%abSH��H�stadH�%(H�E�1���u�WD��x���8u�;L���/V�����t�H�E�dH+%(�H���[A\]�u��D�x���8u�;L���t�����t�L�%��fD�cx���8u�;�L��识��H���t�H�E�dH+%(uH���[A\]�(u����y����UH��ATH��dH�%(H�E�1�@��u
�H`b��tDL�%uqa�3`bL������a�����H�E�dH+%(u'L��L�e���W��DH�E�dH+%(uL�e����Vy��fD��UH��AVAUATSHc�H��dH�%(H�E�1������=�_b��L��X����1�HDžP���L��H��@���L��@����H�HDžH���L��P���1�1ҿL���S[�����0�������v��E1�8A��A��H�E�dH+%(�H�ĠD��[A\A]A^]�f��=�^bt'L��X���1��HDžP���L��E1��H��e�����^b��{��H�5������s��뺐��^b�{��H�5�������s�����fD��|��A�Ń��H���1��L��1��H�fo�L���L��@���HDžP���)�@����:Z����������H���sw��A�����A���������{w��ff.���U�����H��AUATH��dH�%(H�E�1��w`���bs���^����z��A�ą���1�1�������c��A�ą�����
l���l��A���L�-��H�=��Bz��L���r��L��L�-f�r��L���r��L���wr��L��L�-����cr��L���Vr��L���Ir��H�=�oa�
Q���V���Ss���nl����b��H�E�dH+%(u-H��D��A\A]]�DH�1naH��1��01��Oi�����v�����UH��ATH��dH�%(H�E�1�@��tH�_na�8uZ��W��L�%�maL����c����u%�e��H�E�dH+%(uGH�=�naL�e���@|���+]���y��L���T����@H�5�na1�H�A�H�=~��lq����eu��D��UH��ATA��1�H��dH�%(H�E�1��k��D��H�5x��:^��1���W����UH��ATA��1�H��dH�%(H�E�1��Vk��H�58�D��L������]��H�=��V��� L���R��L������z��1��W�����UH��ATI��SH���H��dH�%(H�E�1��Ee��L�����4�w{��H�E�dH+%(uA�T$$H�3H��L��[A\]��^���Jt��f.���UA���H��AUATH��P���L��P���SH��H�+�H���~DnadH�%(H�E�1��H�H�;���L��D�E�fH:"�H�laH��h���H�5ڋ)E�H�E�1��yX������H����1�L����c����q��K��
tT�������|�D�m�L���>_��H�E�dH+%(uuH�ĘD��[A\A]]�fD=tA=u�D��P���뻐=t)=t�f�1�L���fc����qt߃�
tЃ�t��u�A���������A������|�����r��D��UH��AWI��AVAUE1�ATA�<SH��HH�}�H�u�H�U�H�M�D�E�dH�%(H�E�1��'f.�H��L)�A9�DL�8A�Ut(L�xA�վ
L���n��H��u�L���Y`��A9�DL�L�5jaL���x��E�E	A�L$1�H��kaD�E��M������D�����D����)�H��ia����‰�����)�A���V��D���M��U��Ɖ߉E�E��Rq���E���H�}��E�t�߉��	R��H�}��T����D��A������Q��H�}�D���A�E��D���%]��B�+L�-���E��X���E�fD��D�����Q��D��L���i��9]�u�]��u���K�߉M�A�L$��p��D���{�sQ��H�}�D����h���X��L���O��D�m�D�e�D��A���&^��A�ǃ�
�3����*���!H��WbD�e�1�D�e�H�E��T������E���D��4�P��� �}���(X��L���O����1��D���]��A�ǃ��tg��
tb��t]L����v��A��t�H�M�Hc�D��D�<�M��C�E��4�P��D������]��L���N��D���P]��A�ǃ��u���SHc�H�E�Hc�H�}��H����v��H�E�dH+%(uAH��HD��[A\A]A^A_]�f�H�=	�A�
�l���2�H�`Vb�1�H�E���o��ff.���UH��AWI��AVI��AUE1�ATE1�SH��8H�}�H�u�dH�%(H�E�1��#fDH��L)�A9�DL�8A�Ut,L�xA�վ
L���Vk��H��u�L���]��A�UA9�DL�A�EA�L$A���E�M���H�Nha1��M������D����D��)�H�zfa����‰�����)�A����R��D���M�D��Ɖ߉E�E��n��H�}�t�u��߃���N��H�}��vQ����D�����N��M����D�m�H�}�A���E��D��D��D���Y���{�D���N��D��H�=<���e���{�D���jN��H�E�dH+%(ubH��8D��L��[A\A]A^A_]�e��f��U�A����DH�E�dH+%(u(H�}�H��8E��D��D����A�[A\A]A^A_]�IY���m��@��UH��AVAUI��ATI��H��L�5IeadH�%(H�E�1�L����s��1�L��L���t����T��H�E�dH+%(uH��L��A\A]A^]�K���1m�����UH��AWA��AVI��AUI��ATI��SH��H��dadH�%(H�E�1�H���Qs��L��L��L���t���nT��H���VK��H�E�dH+%(uH��D��[A\A]A^A_]��Y���l��ff.����UH��H��AUI���ATI��H�}�SL��H��dH�%(H�E�1��{����~BH�u�H�=��1�H���Ih��H�}�A���
Z��H�E�dH+%(uMH��D��[A\A]]�H��eaH�
G�H�Mb1��H�;�L��H�;L��L��A��[�����k��ff.����UH��H��AUI���ATI��H�}�SL��H��dH�%(H�E�1��6z����~BH�u�H�=Ê1�H�G��yg��H�}�A���=Y��H�E�dH+%(uMH��D��[A\A]]�H�eaH�
~�H�}a1��H�;��K��H�;L��L��A���Z����k��ff.����UH��H��dH�%(H�E�1�H�E�dH+%(u�H��H���1�H�=��f���j�����UH��H��dH�%(H�E�1�H�E�dH+%(u�H��1�1�H����rf���mj��f.���UH��H��dH�%(H�E�1�H�E�dH+%(u���/j��ff.�@��UH��AWI��AVI��AUATH��L�-�aadH�%(H�E�1�L���Hp���L��L��Hc=�TbL�%?caH)�L��u����TbA���H�*ba��Tb��Hc��A�<
t,L���H��H�E�dH+%(u.H��D��A\A]A^A_]�@L����K����P���iTb��Ri��f���U1�H��ATI��H��dH�%(H�E�H��ba�8���oI��1��HM��H��`aL��0�`���P��H�E�dH+%(uH�=<caL��L�e����G����h��D��UH��H��H��badH�%(H�E�H�AbaH�H�E�dH+%(u
�H�=����J���h��D��U�H��H��H�`adH�%(H�E�H�G �
��Hc�H9�r1�H��H��fHn�fl�GH�E�dH+%(u���h��ff.�@��UH��AUATH��dH�%(H�E�H��_a���~~1��b��L�-�_aL���0n��H�)aa1��A��A��A���K��A�1�A� H�T_aA���D����F���O��H�E�dH+%(u.H��L��A\A]]��E��@H�E�dH+%(u
H��A\A]]��Jg��f.���UH��AWAVAUATSH��dH�%(H�E�H��^a�����H� H�����(L�?��1��(a��H�=�^a�<m��H�5`a1��A��A��A���J��A��1�L�5a^aE�l$�A��D��A��f���D����F��L���I��A��D��A� ��H��E���5�J��A�1�D��A� ���H�H�CH�s �H���E���M��H�=�]a�D��H�E�dH+%(umH�ĸ[A\A]A^A_]�H�WL�� ����L��@���L��L��`����\��H�S L���\��H�M��M��H���dL��1��P������e����UH�TbH��H��dH�%(H�E�H�)^aH�H�E�dH+%(u���`e��UH���H��H��SH��dH�%(H�E�1�Hc�H�1Ҁ8HE�H�@H��H���x�u�z�HE�H�Q f.�H��H���x�u�z�HE�H�Qf�H��H���x�u�z�HE�H�Qf�H��H���x�u�z�HE�H�Qf�H��H���x�u�1ۀx0�Àz0�€x��H�P�HE�f�Y@H�A8�H��H���z�u�x�HD�H�Q(H�E�dH+%(uH�]����(d���UH��AWAVAUATSH��(H�}�dH�%(H�E�1��=EObtH9=4ObL�-%Ob��H�}��[��I��H�����=�Nb��H��\aE1���H�{L��H��0�R������L�+M��I��M��u�H�=�Nb�MQ��fIn���NbfI:"�)�NbfIn���NbfH:"E�)�NbH�E�dH+%(uiH��(L��[A\A]A^A_]��H�=ANbH����c�����L���L��L�-Nb��P����O�,I��L-\a�S���@E1��m�����b��ff.���UH��AWI��AVI��AUL�m�ATSH��xH��h����wH��x���dH�%(H�E�1�HDžp�������fDL��p���H���I��M&Ict$H�H��h���H����H���a�����}IcD$H�
O�f�A�T$)E�H�H�E�H�E�)E�)E�)E���ta1ېI�$��L��<�����H�}�tH��x���L��L��L��Ѕ�u2��A;\$r�H��h���uH��p���A�FH��p���H9��*���1�H�U�dH+%(u@H��x[A\A]A^A_]�f�A�D$f�H�E�)E�)E�H�u�)E�)E����P�����<a��ff.����UH��AWAVAUATSH��H��`���H��h���H��X���L��P���dH�%(H�E�1��G���2I��E1��df.�H��h����dD�������NH��X����PH��P���H��X���L��L����=����1A�EI��L9���M��L�=��H��`���I��MuIcvL��!_����t�I�A�^f�H��x���IcF)E�L�H�E�H�E�)E�)E�)E���x�H��h���L�}�teE1�L�}�B�3H��x���L�����t���H��<��~���H�}�H������D��t���A��D9�}�A�EI��L9��:��������Sf�H��x����L��HcË<�I���&���H�}�����A�^�E��u����D��t������D1�H�U�dH+%(uH�Ĉ[A\A]A^A_]��0_����UH��AWAVAUATSH��dH�%(H�E�1��G��tuI��I��E1�E1�f�L��H��L��H��IHcsH��]����t�CI�A�FI��L9�w�H�E�dH+%(uH��L��[A\A]A^A_]��E1����^��fD��UH��AWAVI��AUATSH��H��X����WdH�%(H�E�1����:H��h���I��E1�HDžP���H��H���DL��P���I�$L���H��H���I���H�L�H�=%�HcCH�H��`����C����E1�f�H�D��H�
�Hc�L��HȀ8HE�H��h���H��H���x�u�zL��HE�H��p���f�H��H���x�u�zL��HE�H��x����H��H���x�u�zL��HE�H�U�@H��H���x�u�zL��HE�H�U�@H��H���x�u�zL��HE�H�U�@H��H���x�u�zL��HE�H�U�@H��H���x�u�zL��HE�H�U�@H��H���x�u�zL��HE�H�U�@H��H���x�u�zL��HE�H�U�@H��H���x�u��R�@�JЃ�0fn�f:"�f�E�H��tH��X���H��`���L��L��Ѕ�u/A��D;k�Y���A�T$H��P���H��P�����H9����1�H�U�dH+%(uH�Ę[A\A]A^A_]���[��f���UH��AVAUATSH��H��dH�%(H�E�1����H��tqI��H��tD�@E1�L�%̋��u�VfDA�FI��L9�vCL��H��H��IVHcrL��9Z����t�I�FH�U�dH+%(uH��[A\A]A^]�D1����7[�����UH��AVAUATSH��H��dH�%(H�E�1����H��tqI��H��tD�@(E1�L�%���u�VfDA�F(I��L9�vCL��H��H��IV HcrL��Y����t�I�F H�U�dH+%(uH��[A\A]A^]�D1����Z�����UH��AUI��ATI��H�=��SH��H��SadH�%(H�E�1��f.�H�{0H��0H��t L���Z����u�H�{L���H����u�H�{H�E�dH+%(uH��H��[A\A]]���Y�����UH��AUI��ATI��H�=(�SH��H��RadH�%(H�E�1��f.�H�{0H��0H��t L���kZ����u�H�{L���kH����u�H�{ H�E�dH+%(uH��H��[A\A]]��aY�����UH��AUI��ATI��SH��dH�%(H�E�H�eRaH�X��H��0H�{�t1�L��L��H���m=����t�H�U�dH+%(uH��[A\A]]���X��f.���UH��AUI��ATI��SH��dH�%(H�E�H��QaH�X ��H��0H�{�tL��L��H����J����t�H�U�dH+%(uH��[A\A]]��lX��ff.����UH��H�=��H��H��dH�%(H�E�1��Y�����H��)aHE�H�U�dH+%(u���X��ff.���UH��H��dH�%(H�E�1�H�E�dH+%(u�H��H�=C)aH��1��!<���W��ff.����UH��H��dH�%(H�E�1�H�E�dH+%(u�H��H��H�=)a��I���nW��ff.���UH��AWA��AVE1�AUI�����RATL�%h�S��H��dH�%(H�E�1��#DD�{�E9�|1C�7H�
��\���H�Hc<�L�L����W����t1y�D�sE9�~�1�H�U�dH+%(u)H��[A\A]A^A_]�fDHc�H�>�\HcD�L����V��f.�D��H��`1��N?��f.�@��UH��H��dH�%(H�E�1�HcG H�U�dH+%(u���;V��ff.���UH�
4bH��SH��H��L�E�H��H�-�_H��dH�%(H�E�1�H�E���D����t(H�}��OE��H��H�C�H�U�dH+%(u
H�]��ø�������U��@��UH��H��H�dH�%(H�E�1��L;��H�U�dH+%(u�H���tU��@��UH��ATI��SH��H��H�dH�%(H�E�1��;��H�H9�~<I�|$���<����H�E�dH+%(u9H��H�=V�_1�[A\]��R��f�H�E�dH+%(uH��1�[A\]���T�����UH��H��dH�%(H�E�1�HcG H9�~LH�GH��H9�tH��u��H��tH�H9�u�H�U�dH+%(u&�H�p�H�=	�1��PR��H�E�dH+%(u�1���XT�����UH��ATI��H��H�dH�%(H�E�1��B��H�E�dH+%(uI�D$L��L�e�H��@���S��ff.�f���UH�
�aH��SH��H��L�M�H��L�E�H���_H��0dH�%(H�E�1�H�E��E����P1��E����E����B��ZY��t.�U�u�}��\��H��H�C�H�U�dH+%(uH�]����������TS��@��UH��H��H�dH�%(H�E�1��K��H�U�dH+%(u�H���S��@��UH��ATI��SH��H��H�dH�%(H�E�1��sK��H�H9�~<I�|$����Z����H�E�dH+%(u9H��H�=��_1�[A\]�P��f�H�E�dH+%(uH��1�[A\]��R�����UH��ATI��H��H�dH�%(H�E�1��'-��H�E�dH+%(uI�D$L��L�e�H��@���-R��ff.�f���UH�d�_�H��ATL�U�H��D������dH�%(H�E�1�D���L��fA��
A����@����x1H�}��05��H�}�I���?��H�E�dH+%(uL��L�e���@��5��I�����Q��f���UH���_�H��ATH�}�H��dH�%(H�E�1��n@����x2H�}��4��H�}�I���?��H�E�dH+%(uL��L�e���D�S5��I�����	Q��f���UH���_�H��ATL�M�H��H���L���L��dH�%(H�E�1���?����x1H�}�� 4��H�}�I���t>��H�E�dH+%(uL��L�e���@��4��I�����yP��f���UH�,�_�H��ATL�U�H�� ���dH�%(H�E�1�H���P1�������L���D���L���1?��H�� ��x1H�}��p3��H�}�I����=��H�E�dH+%(uL��L�e���@�4��I������O��f���UH��AWAVAUI��ATI��SH��dH�%(H�E�H�G�xt.H�E�dH+%(�xH��L��L��[A\A]A^A_]�E��f�H���8��H���]��I�\$I��H��0H����L���N��H��H��t�H�@M��$��S,�{(L�H�C8�tcA�ר��I��tD��L���N@������D��L���+4��H�c8�H���?���H�U�dH+%(��H��[A\A]A^A_]�Hc�I�4>L���y/��H��H�C8�u�t
�Q����{/���f��{�A��H��H������H=�����H��0����@I�4>L���/��A����H�C8A���������S,S(�����L���x1���-����M��ff.���UH��H��dH�%(H�E�1�H�E�dH+%(u������1�H�=��_��6���M��fD��UH�5=JH��H��L���H���dH�%(H�E�1����H���H���HE�H�E�dH+%(u�H�=��_1��6���4M��@��UH�5h�H��H��������dH�%(H�E�1����H�k���HE�D���1�D���H�=c�_�6��H�U�dH+%(u���L�����UH��H��dH�%(H�E�1�H�E�dH+%(u"������H���1��H�=9�_�5���_L��ff.�@��U�I��H��H�9�_H��ATSH��H��`���L��`���L��h���M��H���dH�%(H�E�1�HDžX���Dž���Dž���Dž���Dž���Dž����H�L��H��T���HDžx���PH�E�H�
��aPH�E�PH�E�Dž���Dž ���Dž$���Dž(���Dž,���Dž0���Dž4���Dž8���Dž<���Dž@���DžD���DžH���DžL���DžP���DžT���PH�E�PH��P���PH��L���PH��H���PH��D���PH��@���PH��<���PH��8���PH��4���PH��0���PH��,���PH��(���PH��$���PH�� ���PH�����PH�����PH�����PH�����PH�����PH�����PH�E�PH��x���PH��X���PH��p���P1��/9��H�������H��X���H��tH��p����~H��p�������������H�{L��Džd��������������	Ѝ��������	Ѝ���	Ћ��������	Ћ�������� 	Ћ� �������@	Ћ�$�������	Ћ�(�������	Ћ�0�����	��	Ћ�4�����
��	Ћ�8�������	Ћ�<�������	Ћ�@�����
�� 	Ћ�D�������@	Ћ�H��������	Ћ�L�������	Ћ�P�������	Ћ�,�������	ЋU�����	Ћ�T����E��h,��1�H�U�dH+%(uH�e�[A\]�fD��������H��ff.����UH��ATI��H��H��dH�%(H�E�1��W0��H�E�dH+%(uI�D$L��L�e�H��@���-H��ff.�f���UH��SH��H��H�M�H�U�H�5��H��(dH�%(H�E�1�H�E�H�E��4����t6H�E�H�{H�PH�E�H�p�&��1�H�U�dH+%(uH�]���fD��������G��@��UH�
�aH��AVAUL�M�L�E�ATSH��H��H��H���_H��dH�%(H�E�1��E��36�����;�S@�Eԅ��-�J�L���H��H��H)�H��I����I���L9���A;D$u�L���L������L���I��I��H������P���v�P���H� aH�<���.��I��H����A�VH���L����T��H�{L���<��H��H��tCI�EL��I�U�#D��L��A���B��E��t/H��=aD��H�5b�_H�81���1��I���L�-�?aI�EH�E�dH+%(u'H��L��[A\A]A^]�DE1����3*��I������E��f���UH��SH��H��H�U�H�5X�H��dH�%(H�E�1���1��A��1�E��t3H�E�H�{H�H�u�C H������r-���s H�=��_1��QC��H�U�dH+%(uH�]����WE�����UH��ATI��H��H��dH�%(H�E�1���=��H�E�dH+%(uI�D$L��L�e�H��@���D��ff.�f���UH��AWAVAUATI��1�SH��dH�%(H�E�1���Q��A�L$PI�Ņ�~f1�L�=U�DI�T$`Hc�E1�E1������H�5׫�<�jL��j�D��I��XZM��tnL��L���E����I�uSH��I�t2��A9\$P�H�E�dH+%(ukH�e�L��[A\A]A^A_]��L�����I��A9\$P�j����H��I�t'H�E�dH+%(u"H�e�[A\A]A^A_]�%(��DL���I������C�����UH�
t�aH��ATSL�M�H��L�E�H��H��H���_H�� dH�%(H�E�1�H�E��E�PH�E�P1�H�E�H�E��E��T2��ZY��tnL�E�L�cM��tM�@H�u�H��tH�v�U��CHL������	�L�ˆCH�1����x4H�}<aH�H�U�dH+%(u-H�e�[A\]�f�1���@H�Y:aH�8�P��1�����B����UH�
4�aH��ATE1�SL�E�H��H��H��H���_H��dH�%(H�E�1��E����x1����t#�u�H�{�A���ƅ�x2H�=��_1��D@��I��H�E�dH+%(u#H��L��[A\]�fDH��9aH�8�qO�����*B��f.���UH��H��H��dH�%(H�E�1���Q����x H�Y;aH�H�U�dH+%(u��@H�I9aH�8�	O��1����A����UH�
�aH��ATE1�SL�M�H��L�E�H��H��H���_H��dH�%(H�E�1��E��E��]0����t�u�H�{�L����x)L�%�:aI�$H�E�dH+%(uH��L��[A\]�f�H��8aH�8�aN�����A��f.���UH��H�
1�aH��H�?�_H��H�� dH�%(H�E�1�L�E�L�M�H�E�H�E��/��A��1�E��tH�u�H�}��>��H=�wHcx�C��H�U�dH+%(u��H������C�����u@��D��U��H�=��aH��AWAVAUATSH��dH�%(H�E�1��F��I��H���_H��7aL�-�bL�5��aL��L�=`�aH�bH��aH���aH���aH�ubH��bH��bH��b�S������L���C������H�=D
b�/������H�=Pb�������H�=\b�������H�=H�a��������H�=�a��������L���������wH�=0�aH�a�a������\H�=��aH�&�a������AH�=��aH���a�~�����&L���aH���aL���`�������n(��H�KbL��H�5�H���a�H���a���H�p�aL��H�5�H�^�a�i��L��L��H�5h�_H�$
bL�-3�_�H��L��L��H�5��_H���a�.��H��bL��H�5��_H��b���H��bL��L��H��b���H��bL��H�5��_H��b����H��bL��L��H�sb���H���aL��H�54�_H���a���H�)�aL��H�5�_H��a���L��L��H�5��_H�=�a�h��H�1�aL��H�5�H��a�J��L��aL��H�5��_L��H���a�)��L���!2��I��H��tCH�j�`L�=w�_Hc;�@@��I��H��t"H��L��L������I�mt[L�{H��M��u��*��H��tH�-4aH�58�_H�8�&E��fDH�E�dH+%(u4H��L��[A\A]A^A_]��L��H���|B��L�{�M���g�����<�����UH��H��dH�%(H�E�1�H�E�dH+%(u�1���M<��ff.�f���UH��H��dH�%(H�E�1�H�E�dH+%(u�1���
<��ff.�f���UH��H��dH�%(H�E�1�H�E�dH+%(u�1����;��ff.�f���UH��H��dH�%(H�E�1�H�E�dH+%(u�1���;��ff.�f���UH��H��dH�%(H�E�1�H�E�dH+%(u���O;��ff.�@��UH��H��dH�%(H�E�1�H�E�dH+%(uɸ������
;��f.���UH��H��dH�%(H�E�1�H�E�dH+%(u����:��ff.�@��UH��H��dH�%(H�E�1�H�E�dH+%(u���:��ff.�@��UH��H��dH�%(H�E�1�H�E�dH+%(uɸ������J:��f.���UH��H��dH�%(H�E�1�H�E�dH+%(u�1���
:��ff.�f���UH��H��dH�%(H�E�1�H�E�dH+%(u�1����9��ff.�f���UH��H��dH�%(H�E�1�H�E�dH+%(u���9��ff.�@��UH��H��dH�%(H�E�1�H�E�dH+%(uɸ������J9��f.���UH��H��dH�%(H�E�1�H�E�dH+%(u�1���
9��ff.�f���UH��H��dH�%(H�E�1�H�E�dH+%(u�1����8��ff.�f���UH��H��dH�%(H�E�1�H�E�dH+%(u�1���8��ff.�f���UH��H��dH�%(H�E�1�H�E�dH+%(uɸ������J8��H��H�����H��H�� r~@8�|5H�� H�� L�L�NL�VL�^H�v L�L�OL�WL�_H� sԃ� �DH�H�H�� H�� L�F�L�N�L�V�L�^�H�v�L�G�L�O�L�W�L�_�H��s҃� H)�H)׃�r$L�L�NL�T�L�\�L�L�OL�T�L�\�Ð��rL�L�L�L�L�L��f.���r�D�D��D�D��ff.����r�tL�FL�D�GD��ÐI��@��H���L��ÐI��@��H�H��A��A��upH��H��t9f�H��H�H�GH�GH�GH�G H�G(H�G0H�G8H�@u���у�8t��fD��H�H�u���t
�ʈH�u�L���H��v�H�I�M)�L�L)��r�����H��H���fs__init_once%d [ %s%d ]/proc/mounts%*s %4096s %99s %*s %*d %*d
PERF_%s_ENVIRONMENTfs/fs.c!fs->path%s/sys/%shugetlbfstracefsdebugfsprocfs/sys/fs/bpf/sys/kernel/tracing/sys/kernel/debug/tracing/trace/sys/kernel/debug/proc/systracing/%s/events/%ssdt_cgroup ,devices/system/cpu/onlineError:	File %s/events/%s not found.
Hint:	SDT event cannot be directly recorded on.
	Please first use 'perf probe %s:%s' before recording it.
Error:	File %s/events/%s not found.
Hint:	Perhaps this kernel misses some CONFIG_ setting to enable this feature?.
Error:	Unable to find debugfs/tracefs
Hint:	Was your kernel compiled with debugfs/tracefs support?
Hint:	Is the debugfs/tracefs filesystem mounted?
Hint:	Try 'sudo mount -t debugfs nodev /sys/kernel/debug'Error:	No permissions to read %s/events/%s
Hint:	Try 'sudo mount -o remount,mode=755 %s'
devices/system/cpu/cpu%d/cpufreq/cpuinfo_max_freqINTERNAL ERROR: strerror_r(%d, [buf], %zd)=%dperf_cpu_map__mergecpu_map__trim_newrefcount_sub_and_testrefcount_inc_not_zerorefcount_increfcount_sub_and_testrefcount_inc_not_zerorefcount_incmmap_per_cpummap_per_threadoverwrite_rb_find_rangerefcount_sub_and_testperf_mmap__putrefcount_inc_not_zerorefcount_incionj <= nr_cpus!(new == (~0U))!(!refcount_inc_not_zero(r))!(new > val)cpu_map refcnt unbalanced
k <= tmp_lenthread map refcnt unbalanced
mmap.clibperf: move evt_head: %lx
lib.c/builddir/build/BUILD/kernel-5.14.0-570.el9/linux-5.14.0-570.el9.x86_64/tools/include/linux/refcount.hPerf can support %d CPUs. Consider raising MAX_NR_CPUS
libperf: Unexpected characters at end of cpu list ('%s'), using online CPUs.libperf: Number of online CPUs (%d) differs from the number configured (%d) the CPU map will only cover the first %d CPUs.libperf: idx %d: mmapping fd %d
libperf: idx %d: set output fd %d -> %d
libperf: %s: nr cpu values (may include -1) %d nr threads %d
libperf: Miscounted nr_mmaps %d vs %d
libperf: %s: nr cpu values %d nr threads %d
!(map->base && refcount_read(&map->refcnt) == 0)failed to keep up with mmap data. (warn only once)
libperf: %s: buf=%p, start=%lx
libperf: Finished reading overwrite ring buffer: rewind
libperf: Finished reading overwrite ring buffer: get start
!((size_t)(buf - buf_start) != n)�C������`����������0��������¿�����h�� Fatal:  %s%s
PWDToo long path: %.*sPREFIX%s%s/%sPATH/usr/local/bin:/usr/bin:/binLINESCOLUMNS%-*sOut of memory, realloc failedperf-%s%s/available %s in '%s'
----------------FRSXLESS/usr/bin/pager/usr/bin/lessPAGERcat-c-%c--%s[=<n>][<n>] <n>[=<%s>][<%s>] <%s>[=...][...] ...%*s%s
%*s(not built-in because %s)
  Error: %s

 Usage: %s
    or: %s
 Error: switch `%c' %s Error: option `no-%s' %s Error: option `%s' %srequires a valueis being ignored because %s is not available because %sis being ignoredis not availabletakes no valueisn't availableis not usablecannot be used with %s Warning: switch `%c' %s Warning: option `no-%s' %s Warning: option `%s' %sexpects a numerical valuevasprintf failedno-%s%s %s [<options>] {|help-alllist-optslist-cmds--%s %sunknown option `%s'%sunknown switch `%c' Error: waitpid failed (%s)exec %s: cd to %s failed (%s)BUG: signal out of range: %dCannot determine the current working directory Error: too many args to run %s
%s available from elsewhere on your $PATH
---------------------------------------cannot be used with switch `%c'expects an unsigned numerical valueshould not happen, someone must be hit on the foreheadSTOP_AT_NON_OPTION and KEEP_UNKNOWN don't go together Error: did you mean `--%s` (with two dashes ?)
 Error: Ambiguous option: %s (could be --%s%s or --%s%s)
SUBCMD_HAS_NOT_BEEN_INITIALIZEDELF��@@��ac1�e %uc��ch uprob{�perf ben{��s������������ac1robe %u{��ch uretp{�perf ben{���������Dual BSD/GPLperf bench uprobe %uperf bench uretprobe %u%%%r%U#st4%I?:;I!I7$%>$%>4%I:;&I	I
I'I
I%:;.@z%:;'I?%:;I.%:;'I !4%:;I%<1XYW41{6'�BF
U�U�	l�q	v
���
	�B
UZ
�
����
	�BF�Z������Z����!DBF�Z��+��;��\+L���������#'<BOYclang version 19.1.7 (CentOS 19.1.7-1.el9)util/bpf_skel/bench_uprobe.bpf.c/builddir/build/BUILD/kernel-5.14.0-570.el9/linux-5.14.0-570.el9.x86_64/tools/perfLICENSEchar__ARRAY_SIZE_TYPE__nr_uprobesunsigned intnr_uretprobesbpf_trace_printklong__u32____trace_printkintctxpt_regsfmt____trace_printk_retemptytrace_printkempty_rettrace_printk_ret<��\\D
	
 
	�
	+
	\
�
� 	
 )7<
pt_regsctxintemptyuprobe/builddir/build/BUILD/kernel-5.14.0-570.el9/linux-5.14.0-570.el9.x86_64/tools/perf/util/bpf_skel/bench_uprobe.bpf.cint BPF_UPROBE(empty)trace_printk	bpf_trace_printk(fmt, sizeof(fmt), ++nr_uprobes);	char fmt[] = "perf bench uprobe %u";int BPF_UPROBE(trace_printk)empty_returetprobeint BPF_URETPROBE(empty_ret)trace_printk_ret	bpf_trace_printk(fmt, sizeof(fmt), ++nr_uretprobes);	char fmt[] = "perf bench uretprobe %u";int BPF_URETPROBE(trace_printk_ret)char__ARRAY_SIZE_TYPE__LICENSEunsigned intnr_uprobesnr_uretprobes.bsslicense�� 44��5	�(�%P@�H��P�@5?dm%�H���m���|����|���i�
Sl}[,��g��܉�1�������ps>c�~��$n�ih�7�
i;��<�6�
		%

.d�*
	%

.r�*/builddir/build/BUILD/kernel-5.14.0-570.el9/linux-5.14.0-570.el9.x86_64/tools/perf/usr/include/asm-generic/usr/include/bpfutil/bpf_skel/bench_uprobe.bpf.cint-ll64.hbpf_helper_defs.h �

��o(2�zL

#' $(,048<@DHLPTX\ (08HTl,4DL`p��������	,	0D	H\	`"&*6K`}�
empty.debug_abbrev.text.rel.BTF.extempty_rettrace_printk_ret.debug_rnglists.rel.debug_str_offsets.bssnr_uprobesnr_uretprobes.debug_str.debug_line_str.rel.debug_addr.rel.debug_infotrace_printk.llvm_addrsiglicense.rel.debug_line.rel.debug_frame.reluprobe.reluretprobebench_uprobe.bpf.c.strtab.symtab.rel.BTFLICENSE.rodata.str1.13Mc@@�	@���	@��
j�T2�-�����	@PCZWs`S	@h`�0�j�=@�	@�pG��C	@808
	@h��@p�	@H�����	@���0s��L�o�H;0
�Usage: 	%s
%14s: %s
# Running %s/%s benchmark...
%s-%sdefault# Running '%s/%s' benchmark:
--helpUnknown collection: '%s'
default|simplerepeatschedScheduler and IPC benchmarkssyscallSystem call benchmarksmemMemory access benchmarksnumafutexFutex stressing benchmarksepollEpoll stressing benchmarksinternalsPerf-internals benchmarksbreakpointBreakpoint benchmarksuprobeuprobe benchmarksAll benchmarkstrace_printkempty_rettrace_printk_retenableRun all breakpoint benchmarkskallsyms-parseBenchmark kallsyms parsinginject-build-idBenchmark build-id injectionevlist-open-closepmu-scanRun all futex benchmarkshashwakewake-parallellock-pimemcpyfind_bitbasicgetpgidBenchmark for fork(2) callsexecveBenchmark for execve(2) callsRun all syscall benchmarksmessagingRun all scheduler benchmarksBenchmark for NUMA workloadsRun all NUMA benchmarksCLIENT: ready writeSERVER: readSENDER: writepipe()socketpair()main:malloc()fork()pthread_attr_init:pthread_attr_setstacksizepthread_create failedReading for readyfdsWriting to start them# %d groups == %d %s run

Total time %14s: %lu.%03lu [sec]
%lu.%03lu
Unknown format:%d
Specify number of groupsnr_loopscgroup.threadscgroup.procstasksCannot enter to cgroup: %s
perf_event Hint: try to run as rootmemory allocation failure
 %14s: %lu.%03lu [sec]

 %14lf usecs/op
 %14d ops/sec
Specify number of loopsthreadedSEND,RECVgetppid()# Executed %'d %s calls
 %'14d ops/sec
getpgid()fork failed
waitpid failed
/bin/trueexecve /bin/true failed
execve()perf bench syscall <options># function '%s' (%s)
# Copying %s bytes ...

 %14lf cycles/byte
 %14lf bytes/sec
 %14lfd KB/sec
 %14lf MB/sec
 %14lf GB/sec
%lf
Invalid size:%s
Unknown function: %s
Available functions:	%s ... %s
x86-64-unrolledx86-64-stosqx86-64-movsq1MBfunctionsharedprivatemlockallpthread_attr_setaffinity_nppthread_createpthread_joincallocSpecify amount of threadsSpecify runtime (in seconds)silentnwakesMust be perfectly divisiblenwakersfutex_waitfutex_wait_requeue_piPI cpu_map__newnrequeuebroadcastRequeue all threads at oncemultiUse multiple futexesepoll_createepoll_ctlepoll_waitlinealsingle (EPOLLONESHOT semantics) (nonblocking)CPU affinity Using %s queue model
Nesting level(s): %d
getrlimitsetrlimiteventfdmain thread: toggling donenfdsnoaffinityDisables CPU affinityrandomizeverboseVerbose modemultiqnonblockingnestedoneshotUse EPOLLONESHOT semanticsedgeepoll_ctdata Session creation failed.
Thread map creation failed.
selfRun single threaded benchmarkmtRun multi-threaded benchmarkmin-threadsmax-threadssingle-iterationsmulti-iterations/proc/kallsymsouter-iterationsinner-iterations  Adding DSO: %s
  Iteration #%d
   [%d] injecting: %s
   Child %d exited with %d
  Build-id injection failedinject-b--buildid-all  Memory allocation failed/usr/lib/  Collected %d DSOs
nr-mmapsnr-samplesstrbuf_init: %s
strbuf_add: %s
strbuf_addch: %s
  Number of cpus:	%d
  Number of threads:	%d
  Number of iterations:	%d
Started iteration %d
evlist__mmap: %s
Iteration %d took:	%luus
nr-eventsall-cpususer to profileper-threaduse per-thread mmapsdummy %14lf usecs/op/cpu
Unknown format: %d
ioctl(PERF_EVENT_IOC_DISABLE)ioctl(PERF_EVENT_IOC_ENABLE)passiveactiveSpecify amount of breakpointsSpecify amount of parallelismCannot find PMU %s
 corebench_uprobe_bpfbench_up.bss.rodata.str1.1usleeplibc.so.6usleep(1000)usec %14s: %'lu %ss %s%'ld to baseline %s%'ld to previous

 %'.3f %ss/op %'.3f %ss/op to baseline %'.3f %ss/op to previousperf bench uprobe <options>got NODE list: {%s}
got CPU list: {%s}
thread %d/%d
 #  %5.1f%%  [%.1f mins] %2d/%-2d [%2d/%-2d] l:%3d-%-3d (%3d) [%4.1f%%] {%d-%d} (%6.1fs converged)
 (%6.1fs de-converged)taskmain,g->p.nr_tasks: %d
# binding tasks to CPUs:#  
token: {%s}, end: {%s}
CPUs: %d_%d-%d#%dx%d
%2d/%d# binding tasks to NODEs:NODEs: %d-%d #%d
 %2d,%2d
 ### # # process %2d: PID %d
process %dNUMA-convergence-latency %-30s %15.3f, %-15s %s
secs latency to NUMA-converge %14.3f %s
runtime-max/threadruntime-min/threadruntime-avg/threadsecs average thread-runtimespread-runtime/thread%,data/threadGB,GB data processed, per threaddata-totalGB data processed, totalruntime/byte/threadnsecs,nsecs/byte/thread runtimethread-speedGB/sec,GB/sec/thread speedtotal-speedGB/sec total speedprocess%d:thread%dGB/secthread-system-timesystem CPU time/threadthread-user-timeuser CPU time/thread"RAM-bw-local,-p-t-P1024-C-M-s-zZq--thp--no-data_rand_walkRAM-bw-local-NOTHP,-1RAM-bw-remote,RAM-bw-local-2x,0,2RAM-bw-remote-2x,1x2RAM-bw-cross,0,81,0 1x3-convergence,-zZ0qcm 1x4-convergence, 1x6-convergence,1020 2x3-convergence, 3x3-convergence, 4x4-convergence, 4x4-convergence-NOTHP, 4x6-convergence, 4x8-convergence, 8x4-convergence, 8x4-convergence-NOTHP, 3x1-convergence, 4x1-convergence, 8x1-convergence,16x1-convergence,32x1-convergence,32128 2x1-bw-process,-zZ0q 3x1-bw-process, 4x1-bw-process, 8x1-bw-process, 512 8x1-bw-process-NOTHP,16x1-bw-process, 1x4-bw-thread,-T 1x8-bw-thread,1x16-bw-thread,1x32-bw-thread, 2x3-bw-process, 4x4-bw-process, 4x6-bw-process, 4x8-bw-process, 4x8-bw-process-NOTHP, 3x3-bw-process, 5x5-bw-process,2x16-bw-process,1x32-bw-process,2048numa02-bw,numa02-bw-NOTHP,numa01-bw-thread,192numa01-bw-thread-NOTHP,perf bench numa <options>nr_procnumber of processesnr_threadsnumber of threads per processmb_globalglobal  memory (MBs)mb_procprocess memory (MBs)mb_proc_lockedmb_threadthread  memory (MBs)nr_secsdata_readsdata_writesdata_backwardsdata_zero_memsetinit_zerobzero the initial allocationsinit_randominit_cpu0perturb_secsshow_detailsShow detailsRun all tests in the suiteshow_convergencemeasure_convergencemeasure convergence latencyquietserialize-startupserialize thread startupcpu[,cpu2,...cpuN]memnodesnode[,node2,...nodeN]perf bench [<common options>] <collection> <benchmark> [<options>]        # List of all available benchmark collections:
Unknown format descriptor: '%s'
Invalid repeat option: Must specify a positive value
        # List of available benchmarks for collection '%s':

Unknown benchmark: '%s' for collection '%s'
Specify the output formatting styleSpecify number of times to repeat the runNUMA scheduling and MM benchmarksBaseline libc usleep(1000) callAttach empty BPF prog to uprobe on usleep, system wideAttach trace_printk BPF prog to uprobe on usleep syswideAttach empty BPF prog to uretprobe on usleep, system wideAttach trace_printk BPF prog to uretprobe on usleep syswideBenchmark thread start/finish with breakpointsBenchmark breakpoint enable/disableBenchmark perf event synthesisBenchmark evlist open and closeBenchmark sysfs PMU info scanningBenchmark epoll concurrent epoll_waitsBenchmark epoll concurrent epoll_ctlsBenchmark for futex hash tableBenchmark for futex wake callsBenchmark for parallel futex wake callsBenchmark for futex requeue callsBenchmark for futex lock_pi callsBenchmark for memcpy() functionsBenchmark for memset() functionsBenchmark for find_bit() functionsRun all memory access benchmarksBenchmark for basic getppid(2) callsBenchmark for getpgid(2) callsBenchmark for scheduling and IPCBenchmark for pipe() between two processes# %d sender and receiver %s per group
perf bench sched messaging <options>Use pipe() instead of socketpair()Be multi thread instead of multi processSpecify the number of loops to run (default: 100)Failed to open cgroup file in %s
 Hint: create the cgroup first, like 'mkdir %s/%s'
it should have two cgroup names: %s
# Executed %d pipe operations between two %s

perf bench sched pipe <options>Specify threads/process based task setupPut sender and receivers in given cgroups# Memory allocation failed - maybe size (%s) is too large?
No CONFIG_PERF_EVENTS=y kernel support configured?
Failed to open cycles counter
Default memset() provided by glibcunrolled memset() in arch/x86/lib/memset_64.Smovsq-based memset() in arch/x86/lib/memset_64.Sperf bench mem memset <options>perf bench mem memcpy <options>Default memcpy() provided by glibcunrolled memcpy() in arch/x86/lib/memcpy_64.Smovsq-based memcpy() in arch/x86/lib/memcpy_64.SSpecify the size of the memory buffers. Available units: B, KB, MB, GB and TB (case insensitive)Specify the function to run, "all" runs all available functions, "help" lists themSpecify the number of loops to run. (default: 1)Use a cycles event instead of gettimeofday() to measure performanceNon-expected futex return callRun summary [PID %d]: %d threads, each operating on %d [%s] futexes for %d secs.

[thread %2d] futex: %p [ %ld ops/sec ]
[thread %2d] futexes: %p ... %p [ %ld ops/sec ]
%sAveraged %ld operations/sec (+- %.2f%%), total secs = %d
perf bench futex hash <options>Specify amount of futexes per threadsSilent mode: do not display data/detailsUse shared futexes instead of private onesLock all current and future memoryRun summary [PID %d]: blocking on %d threads (at [%s] futex %p), waking up %d at a time.

Wokeup %d of %d threads in %.4f ms (+-%.2f%%)
[Run %d]: Wokeup %d of %d threads in %.4f ms
perf bench futex wake <options>Specify amount of threads to wake at oncecouldn't wakeup all tasks (%d/%d)Run summary [PID %d]: blocking on %d threads (at [%s] futex %p), %d threads waking up %d at a time.

Avg per-thread latency (waking %d/%d threads) in %.4f ms (+-%.2f%%)
[Run %d]: Avg per-thread latency (waking %d/%d threads) in %.4f ms (+-%.2f%%)
perf bench futex wake-parallel <options>Specify amount of waking threadsRun summary [PID %d]: Requeuing %d threads (from [%s] %p to %s%p), %d at a time.

Requeued %d of %d threads in %.4f ms (+-%.2f%%)
couldn't requeue from %p to %p[Run %d]: Requeued %d of %d threads in %.4f ms
[Run %d]: Awoke and Requeued (%d+%d) of %d threads in %.4f ms
perf bench futex requeue <options>Specify amount of threads to requeue at onceUse PI-aware variants of FUTEX_CMP_REQUEUEthread %d: Could not lock pi-lock for %p (%d)thread %d: Could not unlock pi-lock for %p (%d)Run summary [PID %d]: %d threads doing pi lock/unlock pairing for %d secs.

[thread %3d] futex: %p [ %ld ops/sec ]
perf bench futex lock-pi <options>starting writer-thread: doing %s writes ...
exiting writer-thread (total full-loops: %zd)
Setting RLIMIT_NOFILE rlimit from %lu to: %lu
Run summary [PID %d]: %d threads monitoring%s on %d file-descriptors for %d secs.

starting worker/consumer %sthreads%s

Averaged %ld operations/sec (+- %.2f%%), total secs = %d
[thread %2d] fdmap: %p [ %04ld ops/sec ]
[thread %2d] fdmap: %p ... %p [ %04ld ops/sec ]
perf bench epoll wait <options>Specify amount of file descriptors to monitor for each threadEnable random write behaviour (default is lineal)Use multiple epoll instances (one per thread)Nonblocking epoll_wait(2) behaviourNesting level epoll hierarchy (default is 0, no nesting)Use Edge-triggered interface (default is LT)Run summary [PID %d]: %d threads doing epoll_ctl ops %d file-descriptors for %d secs.

[thread %2d] fdmap: %p [ add: %04ld; mod: %04ld; del: %04lds ops ]
[thread %2d] fdmap: %p ... %p [ add: %04ld ops; mod: %04ld ops; del: %04ld ops ]

Averaged %ld ADD operations (+- %.2f%%)
Averaged %ld MOD operations (+- %.2f%%)
Averaged %ld DEL operations (+- %.2f%%)
perf bench epoll ctl <options>Perform random operations on random fds  Average %ssynthesis took: %.3f usec (+- %.3f usec)
  Average num. events: %.3f (+- %.3f)
  Average time per event %.3f usec
Computing performance of single threaded perf event synthesis by
synthesizing events on the perf process itself:Computing performance of multi threaded perf event synthesis by
synthesizing events on CPU 0:  Number of synthesis threads: %u
    Average synthesis took: %.3f usec (+- %.3f usec)
    Average num. events: %.3f (+- %.3f)
    Average time per event %.3f usec
perf bench internals synthesize <options>Minimum number of threads in multithreaded benchMaximum number of threads in multithreaded benchNumber of iterations used to compute single-threaded averageNumber of iterations used to compute multi-threaded average  Average kallsyms__parse took: %.3f ms (+- %.3f ms)
perf bench internals kallsyms-parse <options>Number of iterations used to compute average%d operations %d bits set of %d bits
  Average for_each_set_bit took: %.3f usec (+- %.3f usec)
  Average test_bit loop took:    %.3f usec (+- %.3f usec)
perf bench mem find_bit <options>Number of outer iterations usedNumber of inner iterations used  Build-id%s injection benchmark
  Build-id injection setup failed  Average build-id%s injection took: %.3f msec (+- %.3f msec)
  Average time per event: %.3f usec (+- %.3f usec)
  Average memory usage: %.0f KB (+- %.0f KB)
  Cannot collect DSOs for injectionperf bench internals inject-build-id <options>Number of iterations used to compute average (default: 100)Number of mmap events for each iteration (default: 100)Number of sample events per mmap event (default: 100)be more verbose (show iteration count, DSO name, etc)Not enough memory to create evlist
Run 'perf list' for a list of valid events
Not enough memory to create thread/cpu maps
  Number of events:	%d (%d fds)
  Average open-close took: %.3f usec (+- %.3f usec)
perf bench internals evlist-open-close <options>event selector. use 'perf list' to list available eventsnumber of dummy events to create (default 1). If used with -e, it clones those events n times (1 = no change)Number of iterations used to compute average (default=100)system-wide collection from all CPUslist of cpus where to open eventsrecord events on existing process idrecord events on existing thread idSkipping perf bench breakpoint thread: No hardware support# Created/joined %d threads with %d breakpoints and %d parallelism
Skipping perf bench breakpoint enable: No hardware support# Enabled/disabled breakpoint %d time with %d passive and %d active threads
perf bench breakpoint enable <options>Specify amount of passive threadsSpecify amount of active threadsperf bench breakpoint thread <options>Computing performance of sysfs PMU event scan for %u times
pmu[%d] name=%s, nr_caps=%d, nr_aliases=%d, nr_formats=%d
Failed to initialize PMU scan result
Unmatched number of event caps in %s: expect %d vs got %d
Unmatched number of event aliases in %s: expect %d vs got %d
Unmatched number of event formats in %s: expect %d vs got %d
  Average%s PMU scanning took: %.3f usec (+- %.3f usec)
perf bench internals pmu-scan <options>Failed to open and load uprobes bench BPF skeleton
Failed to load and verify BPF skeleton
Failed to attach bench uprobe "%s": %s
binding to node %d, mask: %016lx => %d
WARNING: Could not enable THP - do: 'echo madvise > /sys/kernel/mm/transparent_hugepage/enabled'WARNING: Could not disable THP: run a CONFIG_TRANSPARENT_HUGEPAGE kernel?#  thread %2d / %2d global mem: %p, process mem: %p, thread mem: %p
 (injecting perturbalance, moved to CPU#%d)
 #%2d / %2d: %14.2lf nsecs/op [val: %016lx]

Test not applicable, system has only %d CPUs.

Test not applicable, bind_cpu_0 or bind_cpu_1 is offline
# NOTE: ignoring bind CPUs starting at CPU#%d
 ## NOTE: %d tasks bound, %d tasks unbound

Test not applicable, system has only %d nodes.

# NOTE: ignoring bind NODEs starting at NODE#%d
# NOTE: %d tasks mem-bound, %d tasks unbound
 # %d %s will execute (on %d nodes, %d CPUs):
 #      %5dx %5ldMB global  shared mem operations
 #      %5dx %5ldMB process shared mem operations
 #      %5dx %5ldMB thread  local  mem operations
 # Startup synchronization: ... # process %2d global mem: %p, process mem: %p
 threads initialized in %.6f seconds.
secs slowest (max) thread-runtimesecs fastest (min) thread-runtime% difference between max/avg runtime
 # Running %s "perf bench numaecho ' #'; echo ' # Running test on: '$(uname -a); echo ' #'perf bench numa mem [<options>]process serialized/locked memory access (MBs), <= process_memorymax number of loops to run (default: unlimited)max number of seconds to run (default: 5 secs)usecs to sleep per loop iterationaccess the data via reads (can be mixed with -W)access the data via writes (can be mixed with -R)access the data backwards as wellaccess the data via glibc bzero onlyaccess the data with random (32bit LFSR) walkrandomize the contents of the initial allocationsdo the initial allocations on CPU#0perturb thread 0/0 every X secs, to test convergence stabilityMADV_NOHUGEPAGE < 0 < MADV_HUGEPAGEshow convergence details, convergence is reached when each process (all its threads) is running on a single NUMA node.quiet mode (do not show any warnings or messages)bind the first N tasks to these specific cpus (the rest is unbound)bind the first N tasks to these specific memory nodes (the rest is unbound)SSSSSSSSSSSSSSSS��������@B������������.A�@P?0A�A@�@�e��AN@Y@�?�?����event-hyphenevent-two-hyphtest__syscall_openat_tp_fields�v���w���v���w���w���v���v���v���v���v���v��print_hists_outprint_hists_in�{���{���{��x{���{��write_mmapwrite_commleafloopsqrtloopbrstackdatasym������Q�����/F�������/F������

-
M
m
�
	�
1Qq�	�!Aa�	�=]}�	�C

C

��@� ��@� � �!�"�Y	sss��s������� ��������������������������������������������������������������?	@	����������������
 
������������������
���������������������������#È
2�
�b�
�
�
��"""�4�4"VV�������������cc�cc���		�		�3�3� �! ��!�S	"	S?"?bp_2bp_1v�annotate.objdump--- start ---
test child forked, pid %d
---- end(%d) ----
%3d.%1d: %-*s:%3d: %-*s: Ok
 Skip (%s)
 Skip
 FAILED!
%3d: %-*s:
%3d.%1d: %s:
%3d: %s:
---- end ----
tests to skipdont-forkDo not fork for testcaseRun the tests in parallelsequentialworkloadworkworkload to run for testingdso to test%3d: %s
%3d:%1d: %s
No workload found: %s
 Skip (user override)
./tools/perf/tests/shell./tests/shell./source/tests/shell/proc/self/exe%s/tests/shell%s/source/tests/shell -v/proc/%d/fd/%d.shbase_wrong number of entriestests/parse-events.cFAILED %s:%d %s
wrong typewrong configwrong periodwrong timewrong callgraphwrong config1wrong config2wrong config3wrong bp_typewrong bp_lenwrong exclude_userwrong exclude_kernelwrong exclude_hvwrong precise_ipwrong sample_typewrong sample_periodwrong type termwrong type valwrong valumaskrawr0xeadwrong pinnedwrong exclusivecheck_parse_fake %s failed
%s/bus/event_source/devices/... SKIP
Failed allocationcan't access trace events%s/event=1/,%s/event=1/wrong number of groupskravawrong namecpu/config=2/ubreakpoint1breakpoint2intel_ptevent not parsed as raw typePMU type expected onceNo PMU found for typeRaw PMU not matchedcan't open pmu event dir: %s
%s:u,%s/event=%s/ucachepmuwrong name settingwrong complex name parsingintel_pt//unumpmurawpmuinsnCan't open events dirheader_eventheader_pageCan't get sys pathCan't open sys dirwrong events countrunning test %d '%s'
unexpected PMU typePMU missing eventwrong exclude guestwrong exclude hostwrong leaderwrong core.nr_memberswrong group_idxwrong sample_readinstructionswrong exclude idlecache-misseswrong group namegroup1branch-missesmem:0:umem:0:r:hpmem:0:w:upmem:0:x:kl1dmem:0:rw:kpTest event parsingpermissionspmu_eventspmu_events2aliasno aliases in sysfspmu_events_alias2Parsing of aliased eventsterms2software/r1a/software/r0x1a/cpu/L1-dcache-load-miss/cpu/L1-dcache-load-miss/kpcpu/instructions/cpu/instructions/hcpu/instructions/Gcpu/instructions/Hcpu/instructions/uDpcpu/instructions/Icpu/instructions/kIGcpu/cycles/ucpu/cycles/kcpu/instructions/uepcpu/cycles,name=name/cpu/cycles,name=l1d/syscalls:sys_enter_openatsyscalls:*r1a1:1cycles/period=100000,config2/faultsL1-dcache-load-missmem:0mem:0:xmem:0:rmem:0:wsyscalls:sys_enter_openat:ksyscalls:*:ur1a:kp1:1:hpinstructions:hfaults:uL1-dcache-load-miss:kpinstructions:Ginstructions:Hmem:0:rw{instructions:k,cycles:upp}{cycles:u,instructions:kp}:p*:*{cycles,cache-misses:G}:H{cycles,cache-misses:H}:G{cycles:G,cache-misses:H}:u{cycles:G,cache-misses:H}:uGinstructions:uDpmem:0/1mem:0/2:wmem:0/4:rw:uinstructions:Iinstructions:kIGtask-clock:P,cyclesinstructions/name=insn/r1234/name=rawpmu/4:0x6530160/name=numpmu/cycles//ucycles:kinstructions:uepcycles/name=name/cycles/name=l1d/mem:0/name=breakpoint/mem:0:x/name=breakpoint/mem:0:r/name=breakpoint/mem:0:w/name=breakpoint/mem:0/name=breakpoint/umem:0:x/name=breakpoint/kmem:0:r/name=breakpoint/hpmem:0:w/name=breakpoint/upmem:0:rw/name=breakpoint/mem:0:rw/name=breakpoint/kpmem:0/1/name=breakpoint/mem:0/2:w/name=breakpoint/mem:0/4:rw/name=breakpoint/u9p:9p_client_req%s/self/fdfd path: %s
failed to open fd directorytests/dso-data.cmkstemp failedNo test fileFailed to add dsoWrong sizeWrong dataENOMEM
Failed to access to dsofile limit %ld, new %d
failed to set file limitfailed to get dso filefailed to get dsofailed to add dsofailed to create dsos
failed to get fdfailed to read dsodsos[0] is not openfailed to close dsos[0]nr start %ld, nr stop %ld
failed leaking filesfailed to open extra fdfailed to close dso_0failed to close dso_1DSO data testsdso_datadso_data_cachedso_data_reopen%s/event-%d-%llu-%dw+[event-%d-%llu-%d]
group_fd=%d
cpu=%d
pid=%d
flags=%lu
size=%u
config=%llu
sample_period=%llu
read_format=%llu
disabled=%d
inherit=%d
pinned=%d
exclusive=%d
exclude_user=%d
exclude_kernel=%d
exclude_hv=%d
exclude_idle=%d
mmap=%d
comm=%d
freq=%d
inherit_stat=%d
enable_on_exec=%d
task=%d
watermark=%d
precise_ip=%d
mmap_data=%d
sample_id_all=%d
exclude_host=%d
exclude_guest=%d
exclude_callchain_kernel=%d
exclude_callchain_user=%d
mmap2=%d
comm_exec=%d
context_switch=%d
write_backward=%d
namespaces=%d
use_clockid=%d
wakeup_events=%u
bp_type=%u
config1=%llu
config2=%llu
branch_sample_type=%llu
sample_regs_user=%llu
sample_stack_user=%u
Skip test on hybrid systems./tests./perf%s/tests/usr/bin%s/perfPERF_TEST_ATTRtest attr FAILEDSetup struct perf_event_attr:
WARN: *%lx-%lx %lxkallsyms_addresses_from_arm.long_branch.machine__load_kallsyms failed__kernel_syscall_via_break__kernel_syscall_via_epc__kernel_sigtramp__gpWARN: Maps only in kallsyms:
WARN: Maps only in vmlinux:
.plt_branch._from_thumb_veneer__crc___efistub___kvm_nvhe_$__kvm_nvhe_.L__AArch64ADRPThunk___ARMV5PILongThunk___ARMV7PILongThunk___ThumbV7PILongThunk___LA25Thunk___microLA25Thunk_kallsyms_offsetskallsyms_relative_basekallsyms_num_symskallsyms_nameskallsyms_markerskallsyms_token_tablekallsyms_token_index_SDA_BASE__SDA2_BASE_vmlinux_matches_kallsymsthread_map__new
syscalls/etc/passwdevsel__read_on_cpu
Detect openat syscall eventopenat_syscall_eventperf_cpu_map__new
Ignoring CPU %d
%s: evlist__new
%s: evsel__newtp
%s: evlist__create_maps
perf_evlist__open: %s
Can't parse sample, err = %d
flags%s: no events!
syscall_openat_tp_fieldsenabledfailed to create threadstests/mmap-basic.cfailed to create evselfailed to open evsel: %s
failed to mmap evsel: %s
	loop = %u, count = %llu
getsidgetppidsys_enter_%sevsel__new(%s)
unexpected %s event
mmap interface testsbasic_mmapmmap_user_read_instrmmap_user_read_cyclesCouldn't run the workload!
sched_getaffinitysched_setaffinity: %s
Couldn't parse sample
%lu %d %s with different pid/tid!
%s with unexpected comm!
coreutilslibc[vdso]No PERF_RECORD_EXIT event!
%s with unexpected pid/tid
PERF_RECORDFailed to alloc evlistRoundtrip evsel->name%s: "%s" field not found!
evsel__newtp failed with %ld
prev_commprev_prioprev_statenext_commnext_pidnext_priosched_wakeuptarget_cpuperf_evsel__tp_sched_test
fdarray__new() failed!before
%s:  afterbefore growing arrayafter 3rd addafter 4th addfdarray__addfdarray__filterpmunameExact matchtests/pmu.cFAILED %s:%d %s (%d != %d)
longertokenLonger tokenShorter tokenpmuname_10pmuname_2Diff suffix_pmuname_1Sub suffix_Same suffix_No suffix_pmuname_Underscore_pmunaSubstring_pmuname_ab23Diff suffix hex_pmuname_abSub suffix hex_Same suffix hex_No suffix hex_Underscore hex_Substring hex_pmuname10pmuname2Diff suffixpmuname1Sub suffixSame suffixNo suffixUnderscoreSubstringpmunameab23Diff suffix hexpmunameabSub suffix hexSame suffix hexNo suffix hexUnderscore hexSubstring hexpmuname_a3Diff suffix 2 hex_pmuname_aSub suffix 2 hex_Same suffix 2 hex_No suffix 2 hex_Underscore 2 hex_Substring 2 hex_pmuname_5pmu*Glob 1nomatch*Glob 2pmuname_[12345]Seq 1pmuname_[67890]Seq 2pmuname_?? 1pmuname_1?? 2i915cpum_cfcpum_cecpum_d0Strips uncore_cha suffixStrips mrvl_ddr_pmu suffixSysfs not mounted
Error opening "%s"
krava01config:0-1,62-63
krava02config:10-17
krava03config:5
krava11config1:0,2,4,6,8,20-28
krava12config1:63
krava13config1:45-47
krava21krava22config2:8,18,48,58
krava23config2:28-29,38
/tmp/perf-pmu-test-XXXXXXmkdtemp failed
rm -fr %sperf-pmu-testperf-pmu-test/type9999
perf-pmu-test/formatperf-pmu-test/format/%sFailed to write to file "%s"
perf-pmu-test/eventsTest PMU creation failed
Failure to "%s"
Term parsing failed
perf_pmu__config_terms failedUnexpected config value %llx
perf-pmu-test/test-event/Sysfs PMU testspmu_formatParsing with PMU eventpmu_event_namesPMU event namesname_lenPMU name combiningname_cmpPMU name comparisonpmu_matchPMU cmdline matchmrvl_ddr_pmu_87e1b0000000mrvl_ddr_pmu_87e1b1000000mrvl_ddr_pmu_87e1b2000000mrvl_ddr_pmu_87e1b3000000mrvl_ddr_pmu_87e1b4000000mrvl_ddr_pmu_87e1b5000000mrvl_ddr_pmu_87e1b6000000mrvl_ddr_pmu_87e1b7000000mrvl_ddr_pmu_87e1b8000000mrvl_ddr_pmu_87e1b9000000mrvl_ddr_pmu_87e1ba000000mrvl_ddr_pmu_87e1bb000000mrvl_ddr_pmu_87e1bc000000mrvl_ddr_pmu_87e1bd000000mrvl_ddr_pmu_87e1be000000mrvl_ddr_pmu_87e1bf000000uncore_cha_0uncore_cha_1uncore_cha_2uncore_cha_3uncore_cha_4uncore_cha_5uncore_cha_6uncore_cha_7uncore_cha_8uncore_cha_9uncore_cha_10uncore_cha_11uncore_cha_12uncore_cha_13uncore_cha_14uncore_cha_15uncore_cha_16uncore_cha_17uncore_cha_18uncore_cha_19uncore_cha_20uncore_cha_21uncore_cha_22uncore_cha_23uncore_cha_24uncore_cha_25uncore_cha_26uncore_cha_27uncore_cha_28uncore_cha_29uncore_cha_30uncore_cha_31parsing '%s': '%s'
expr__ctx_new failedexpr__find_ids failed
check_parse_fake failed
expr__parse failed for %s
tma_clears_resteerstma_mispredicts_resteersFound metric '%s'
duration_timeDidn't find parsed metric %sResult %f
Broken metric %s
pmu_events__test_soc_systestcputestarchskipping testing core PMU %s
bp_l1_btb_correctdefault_coretesting event table %s: pass
PMU JSON event testspmu_event_tablePMU event table sanityPMU event map aliasessome metrics failedparsing_fakeparsing_thresholdhisi_sccl1_ddrc2uncore_cbox_0hisi_sccl3_l3c7uncore_imc_free_running_0uncore_imc_0uncore_sys_ddr_pmu0v8uncore_sys_ccn_pmu40x01uncore_sys_cmn_pmu0434014360243c0343a01sys_cmn_pmu.hnf_cache_miss(434|436|43c|43a).*eventid=1,type=5uncoreuncore_sys_cmn_pmueventid=0x1,type=0x5sys_ccn_pmu.read_cyclesconfig=0x2cccn read-cycles eventuncore_sys_ccn_pmusys_ddr_pmu.write_cyclesevent=0x2bddr write-cycles eventuncore_sys_ddr_pmuuncore_imc.cache_hitsevent=0x34Total cache hitsuncore_imcevent=0x12Total cache missesuncore_imc_free_runninguncore_hisi_l3c.rd_hit_cpipeevent=7Total read hitshisi_sccl,l3cevent=0x7event-two-hyphevent=0xc0UNC_CBO_TWO_HYPHuncore_cboxevent-hyphenevent=0xe0UNC_CBO_HYPHENevent=0x22,umask=0x81uncore_hisi_ddrc.flux_wcmdevent=2DDRC write commandshisi_sccl,ddrcevent=0x2l3_cache_rdevent=0x40L3 cache access, readeist_transevent=0x3a,period=200000otherevent=0x3a,period=0x30d40dispatch_blocked.anysegment_reg_loads.anybp_l2_btb_correctevent=0x8bL2 BTB Correctionbranchevent=0x8aL1 BTB Correction----- %s --------
bash[kernel]schedulepage_faultsys_perf_event_openreallocmainxmallocxfreerun_commandcmd_recordCan't find the matched entry
cpu-clocktask-clockMatch and link multiple histshists_linkNo memorytests/hists_filter.cNormal histogram
Invalid nr samplesInvalid nr hist entriesInvalid total periodUnmatched nr samplesUnmatched nr hist entriesUnmatched total periodHistogram for thread filter
Histogram for dso filter
Histogram for symbol filter
Histogram for socket filters
Histogram for all filters
Filter hist entrieshists_filtertests/hists_output.ccpu,pid,comm,dso,symdso,pid[fields = %s, sort = %s]
Invalid hist entryoverhead,cpudso,sym,comm,overhead,dsoSort output of hist entrieshists_outputtests/hists_cumulate.ccallchains expectedInvalid hist entry #%zdCumulate child hist entrieshists_cumulate2> /dev/null'/usr/bin/python3'python usage test: "%s"
'import perf' in pythonpython_usefailed opening event %llx
failed to read: %d
bp_signalcount %lld, overflow %d
Breakpoint overflow samplingbp_signal_overflowfailed to create wp
tests/bp_account.cwp %d created
failed to modify wp
wp 0 modified to bp
failed to create max wp
wp max created
Breakpoint accountingbp_accountingfailed opening event %x
WO watchpointtests/wp.cRW watchpointModify watchpointmissing kernel supportwp_roRead Only Watchpointmissing hardware supportwp_woWrite Only Watchpointwp_rwRead / Write Watchpointwp_modifyModify Watchpointevlist__new_dummy
Couldn't open the evlist: %s
received %d EXIT records
task_exitevsel__new
Error during parse sample
sw_clock_freqmmap failedtid = %d, map = %p
failed to notify
tests/mmap-thread-lookup.cfailed to destroy threadsfailed to synthesize mapslooking for map %p
failed, couldn't find map
map %p, addr %lx
failed with synthesizing allLookup mmap threadmmap_thread_lookuptests/thread-maps-share.cwrong refcntmaps don't matchfailed to find other leaderShare thread mapsthread_maps_sharethread_map__new failed!
perf_cpu_map__new failed!
evlist__new failed!
cpu-clock:ucycles:uFailed to parse event %s
sched:sched_switchNo sched_switch
Failed to create event %s
cycles event already at frontdummy:uTracking event not tracking
Not supported
evlist__mmap failed!
spin_sleep failed!
Test COMM 1PR_SET_NAME failed!
Test COMM 2Test COMM 3Test COMM 4malloc failed
evlist__parse_sample failed
event with no time
calloc failed
Missing sched_switch events
cycles event
Duplicate comm event
comm event: %s nr: %d
Unexpected comm event
Missing comm events
Missing cycles events
%u events recorded
Track with sched_switchswitch_trackingthreads failed!
cpus failed!
evlist failed!
keep_trackingqsort failed
map__load failed
perf_cpu_map__new failed
evlist__new failed
Parsing event '%s'
parse_events failed
evlist__mmap failed
pipe failed
thread__find_map failed
File is: %s
On file address is: %#lx
dso__data_read_offset failed
kcore map tested already - skipping
decompression failed
Objdump command is: %s
 2>/dev/nullpopen failed
getline failed
Reducing len to %zu
objdump failed for kcoreread_via_objdump failed
buf1 (dso):
0x%02x buf2 (objdump):
no vmlinux
no kcore
no access
no kernel obj
Object code readingcode_readingSamples differ at 'pid'
Samples differ at 'tid'
Samples differ at 'time'
Samples differ at 'addr'
Samples differ at 'id'
Samples differ at 'cpu'
Samples differ at 'period'
Samples differ at 'raw_size'
Samples differ at 'raw_data'
Samples differ at 'weight'
Samples differ at 'data_src'
Samples differ at 'cgroup'
perf_event__synthesize_sampleevsel__parse_sampleSamples differ at 'ip'
read_format %#lx
parse_no_sample_id_allkmod_path__parsetests/kmod-path.cwrong kmodwrong comp[x_x]/xxxx/xxxx/x-x.kois_kernel_modulefalse[x]/xxxx/xxxx/x.ko.gz/xxxx/xxxx/x.gz[test_module][test.module][vdso32][vdsox32][vsyscall][kernel.kallsyms]failed to set process nametests/thread-map.cfailed to alloc mapwrong nrwrong pidwrong commfailed to synthesize map%d,%dfailed to allocate map stringfailed to allocate thread_mapfailed to remove threadthread_map count != 1thread_map count != 0failed to not remove threadRemove thread mapthread_map_removeSynthesize thread mapthread_map_synthesizeThread mapcan't get templ filetests/topology.ctempl file: %s
can't get sessioncan't get evlistfailed to write headerfailed to get system cpumap
s390aarch64CPU %d, core %d, socket %d
Cpu map - Node ID is setCpu map - Thread IDX is setCore map - Node ID is setCore map - Thread IDX is setDie map - Node ID is setDie map - Core is setDie map - CPU is setDie map - Thread IDX is setSocket map - Node ID is setSocket map - Die ID is setSocket map - Core is setSocket map - CPU is setNode map - Socket is setNode map - Die ID is setNode map - Core is setNode map - CPU is setNode map - Thread IDX is setppc64leSession topologysession_topologytests/mem.cunexpected %sN/AL4 hitN/ARemote L4 hitN/APMEM missN/ARemote PMEM missFwdRemote RAM missTest data source output1-2tests/cpumap.cnot equalpair4,2,14,5,7failed to merge map: bad nr1-2,4-5,70,2-201,2561-256wrong cpuwrong any_cpuwrong start_cpuwrong end_cpuwrong long_size6-86-91-86-8,156-9,151-8,12-20failed to convert map1,52-51,3-6,8-10,24,35-371-10,12-20,22-30,32-40CPU mapcpu_map_synthesizeSynthesize cpu mapcpu_map_printPrint cpu mapcpu_map_mergeMerge cpu mapcpu_map_intersectIntersect cpu mapcpu_map_equalEqual cpu maptests/stat.cwrong threadwrong idwrong runwrong enawrong aggr_modewrong scalewrong intervalSynthesize stat roundsynthesize_stat_roundSynthesize statsynthesize_statSynthesize stat configsynthesize_stat_configfailed to get evlisttests/event_update.cfailed to allocate idsKRAVA1,2,3wrong cpuswrong unitSynthesize attr updateevent_updatefailed to create event list
  SKIP  : not enough rights
failed to attachtests/event-times.cfailed to detachOK      %s: ena %lu, run %lu
Event timesevent_timesids__newtests/expr.cfooids__insertbarbazget_cpuidIntelids_unionexpr__ctx_newFOO1+1parse test failedunexpected valueFOO+BAR(BAR/2)%21 - -4(FOO-1)*2 + (BAR/2)%2 - -41-1 | 11-1 & 1min(1,2) + 1max(1,2) + 11+1 if 3*4 else 01.1 + 2.1.1 + 2.d_ratio(1, 2)d_ratio(2.5, 0)1.1 < 2.22.2 > 1.11.1 < 1.12.2 > 2.22.2 < 1.11.1 > 2.21.1e10 < 1.1e1001.1e2 > 1.1e-2FOO/0division by zeroBAR/missing operandFOO + BAR + BAZ + BOZOfind idsBAZEVENT1,param=3@EVENT2,param=3@dash\-event1 - dash\-event2dash-event1dash-event2EVENT1 if #smt_on else EVENT2EVENT10 & EVENT1 > 0EVENT1 > 0 & 01 & EVENT1 > 0EVENT1 > 0 & 11 | EVENT1 > 0EVENT1 > 0 | 10 | EVENT1 > 0EVENT1 > 0 | 0#num_cpus#num_cpus >= #num_cpus_online#num_cpus >= #num_cores#num_cores >= #num_dies#num_dies >= #num_packages#system_tsc_freq#system_tsc_freq > 0#system_tsc_freq == 0source_count(EVENT1)source countstrcmp_cpuid_str(0x0)\-\,\=strcmp_cpuid_str(%s)has_event(cycles)Simple expression parserexprp:%d
Unexpected record of type %d
Read backward ring bufferbackward_ring_buffertest_targetsdt_perfProbe SDT eventssdt_eventfailed: test %u
is_printable_arraybitmap: %s
tests/bitmap.cPrint bitmapbitmap_printSetting failed: %d (%p)
perf hooksperf_hooks1B10K20M30G0Bn %lu, str '%s', buf '%s'
unit_number__scnprintfunit_number__scnprintfailed: alloc bitmaptests/mem2node.cfailed: mem2node__initfailed: mem2node__nodemem2node5-7,9Expected %d maps, got %dExpected:
Got:
bpf_prog_1bpf_prog_2bpf_prog_3kcore1kcore3failed to create mapstests/maps.cfailed to create mapfailed to insert mapkcore2failed to merge mapmerge check failedmaps__merge_in
parse_nsec_time("%s")
error %d
failed to keep 0
failed to skip %lu
failed to keep %lu
0.0000000011.000000001123456.12345618446744073.709551615
perf_time__parse_str("%s")
Error %d
Failed. Expected %lu to %lu
1234567.123456789,0,1234567.12345678910%/110%/1,10%/210%/1,10%/3,10%/10time utilstime_utilsWriting jit code to: %s
Test jit_write_elfshort writeFailed to open '%s'
Failed to allocate memorytests/api-io.c%s:%d: %d != %d
%s:%d: %lld != %lld
12345678abcdef90a
b
c
d
	
1
2
3
12345678ABCDEF90;a;b0x1x2xx1x12345678;1;2Test api ioapi_ioLjava/lang/Object;<init>()Vvoid java.lang.Object<init>()FAILED: %s: %s != %s
Demangle Javademangle_java(null)camlStdlib__array__map_154Stdlib.array.map_154Stdlib.bytes.++_2205Demangle OCamldemangle_ocamlTest libpfm4 supportpfm_eventsnot compiled inpfm_grouptest groups of --pfm-eventsinst_retired.anycpu_clk_unhalted.threadIPCfailed to compute metrictests/parse-metric.cidq_uops_not_delivered.corecpu_clk_unhalted.ref_xclkFrontend_Bound_SMTIPC failedl2_rqsts.demand_data_rd_hitl2_rqsts.pf_hitl2_rqsts.rfo_hitl2_rqsts.all_demand_data_rdl2_rqsts.pf_missl2_rqsts.rfo_missDCache_L2_Hitsfrontend failedDCache_L2_MissesM1DCache_L2 failedfailed to find recursionM3l1d.replacementL1D_Cache_Fill_BWrecursion fail failedl1d-loads-missesl1i-loads-missescache_miss_cyclesMemory bandwidthcache_miss_cycles failedgroup IPC failed, wrong ratiotest metric groupParse and process metricsparse_metricPE file supportpe_file_parsingevlist is emptytests/expand-cgroup.cevent count doesn't match
event name doesn't match:
cgroup name doesn't match:
failed to expand event grouplibpfm was not enabled
failed to parse '%s' metric
Event expansion for cgroupsexpand_cgroup_eventsevlist__open() failed
Convert perf time to TSCtsc_is_supportedTSC supportperf_time_to_tscPerf time to TSC >/dev/null 2>&1Command: %s
Failed with return value %d
dlfilter-test-api-v%d.sodlfilter to test v%d C API/tmp/dlfilter-test-%u-prog.c/tmp/dlfilter-test-%u-prog%s/dlfilters/%sdlfilters not foundChecking for gcc
gcc --versiongcc not founddlfilters path: %s
Failed to write test C filecat %s ; echogcc -g -o %s %sobjdump -x -dS %sFailed to write sample%s script -i %s -Ddlfilter C APIdlfilterFAILED sigaction(): %s
sigtrapFAILED pthread_create(): %s
misfired signal?tests/sigtrap.cenable failedpthread_join() faileddisable failedspinlockrt_mutex_baseunexpected sigtrapsunexpected si_addrSigtrapPassFailamd_l3amd_dfhv_24x7Event groupsevent_groupsOverlapping symbols:
Zero-length symbol:
machine__new_host() failed!
Testing %s
Failed to create map!dso__load() failed!
DSO has no symbols!
Symbolssymbols123empty stringtests/util.cno match124replace 1efabcabcefbcefbcreplace 2longlonglonglongbclonglongbcreplace longutilgot: %s 0x%lx, expecting %s
failed to get unwind sample
unwind failed
Could not get machine
Failed to create kernel maps
Could not init machine
Could not get thread
test__arch_unwind_sampletest_dwarf_unwind__threadtest_dwarf_unwind__comparebsearchtest_dwarf_unwind__krava_3test_dwarf_unwind__krava_2test_dwarf_unwind__krava_1test__dwarf_unwindTest dwarf unwindnoploopthlooplandlockfailed to get stack map
x86 hybridx86 hybrid event parsingnot hybridAMD IBS via core pmuamd_ibs_via_core_pmux86 Sample parsingx86_sample_parsingx86 bp modifybp_modifyIntel PTintel_pt_pkt_decoderIntel PT packet decoderintel_pt_hybrid_compatSamples differ at 'ins_lat'
arch/x86/tests/hybrid.cwrong hybrid typemissing pmucpu_unexpected pmucpu_core/cycles/{cpu-clock,cpu_core/cycles/}{cpu_core/cycles/,cpu-clock}cpu_core/r1a/cpu_core/LLC-loads/ %02xintel_pt_pkt_desc failed!
eax = 0x%08x
ebx = 0x%08x
ecx = 0x%08x
edx = 0x%08x
Decoding failed!
Decoding:  Packet context changed!
Decoded ok:not CPU %d not found
CPU %d OK
in %s
failed to PTRACE_TRACEME
tracee exited prematurely 1
failed to set dr7: %s
failed to PTRACE_CONT: %s
tracee exited prematurely 2
rip %lx, bp_1 %p
failed to PTRACE_DETACH: %smodify test 1 failed
arch/x86/tests/bp-modify.cfailed to set breakpoint: %s
modify test 2 failed
ibs_opFail
Pass
perf test [<options>] [{list <test-name-fragment>|[<test-name-fragments>|<test-numbers>]}]be more verbose (show symbol address, etc)Run the tests one after another rather than in parallelobjdump binary to use for disassembly and annotationsOut of memory while building script test suite list
Out of memory while duplicating test script string
failed to parse event '%s', err %d
%s/bus/event_source/devices/%s/alias%s/bus/event_source/devices/%s/events/skipping PMU %s events tests: %s
pmu event name crossed PATH_MAX(%d) size
can't open pmu event file for '%s'
 pmu event: %s is a null event
skipping parameterized PMU event: %s which contains ?
Test PMU event failed for '%s'COMPLEX_CYCLES_NAME:orig=cycles,desc=chip-clock-ticksconfig=10,config1,config2=3,config3=4,umask=1,read,r0xeadfailed to parse terms '%s', err %d
Event test failure: test %d '%s'Parse event definition stringsParsing of all PMU events from sysfsParsing of given PMU events from sysfsParsing of aliased events from sysfsParsing of terms (event modifiers)cpu/config=10,config1=1,config2=3,period=1000/ucpu/config=1,name=krava/u,cpu/config=2/ucpu/config=1,call-graph=fp,time,period=100000/,cpu/config=2,call-graph=no,time=0,period=2000/cpu/name='COMPLEX_CYCLES_NAME:orig=cycles,desc=chip-clock-ticks',period=0x1,event=0x2/ukpcpu/L1-dcache-misses,name=cachepmu/cpu/cycles,period=100000,config2/{cpu/instructions/k,cpu/cycles/upp}{cpu/cycles/u,cpu/instructions/kp}:p{cpu/cycles/,cpu/cache-misses/G}:H{cpu/cycles/,cpu/cache-misses/H}:G{cpu/cycles/G,cpu/cache-misses/H}:u{cpu/cycles/G,cpu/cache-misses/H}:uG{cpu/cycles/,cpu/cache-misses/,cpu/branch-misses/}:S{cpu/instructions/,cpu/branch-misses/}:Su{cpu/cycles/,cpu/cache-misses/,cpu/branch-misses/}:D{cpu/cycles/,cpu/cache-misses/,cpu/branch-misses/}:er1,syscalls:sys_enter_openat:k,1:1:hp{faults:k,branches}:u,cycles:kgroup1{syscalls:sys_enter_openat:H,cycles:kppp},group2{cycles,1:3}:G,instructions:u{cycles,instructions}:G,{cycles:G,instructions:G},cycles{cycles,cache-misses,branch-misses}:S{instructions,branch-misses}:Su{cycles,cache-misses,branch-misses}:DL1-dcache-misses/name=cachepmu/cycles/name='COMPLEX_CYCLES_NAME:orig=cycles,desc=chip-clock-ticks'/Duk{cycles,cache-misses,branch-misses}:emem:0/1/name=breakpoint1/,mem:0/4:rw/name=breakpoint2/test attr - failed to open event filetest attr - failed to write event file'/usr/bin/python3' %s/attr.py -d %s/attr/ -p %s %.*sWARN: Maps in vmlinux with a different name in kallsyms:
WARN: %lx-%lx %lx %s in kallsyms asmachine__create_kernel_maps failedCouldn't find a vmlinux that matches the kernel running on this machine, skipping test
WARN: %#lx: diff end addr for %s v: %#lx k: %#lx
WARN: %#lx: diff name v: %s k: %s
ERR : %#lx: %s not on kallsyms
vmlinux symtab matches kallsymsfailed to open counter: %s, tweak /proc/sys/kernel/perf_event_paranoid?
evsel__read_on_cpu: expected to intercept %d calls, got %lu
sched_setaffinity() failed on CPU %d: %s evsel__read_on_cpu: expected to intercept %d calls on cpu %d, got %lu
Detect openat syscall event on all cpusopenat_syscall_event_on_all_cpus%s: Expected flags=%#x, got %#x
syscalls:sys_enter_openat event fieldsfailed to get mmapped address
userspace counter access not %s
userspace counter width not set (%d)
failed to read value for evsel
failed to mmap events: %d (%s)
event with id %lu doesn't map to an evsel
expected %d %s events, got %d
Read samples using the mmap interfaceUser space counter reading of instructionsUser space counter reading of cyclessched__get_first_possible_cpu: %s
%s going backwards in time, prev=%lu, curr=%lu
%s with unexpected cpu, expected %d, got %d
%s with unexpected pid, expected %d, got %d
%s with unexpected tid, expected %d, got %d
Unexpected perf_event->header.type %d!
Excessive number of PERF_RECORD_COMM events!
Missing PERF_RECORD_COMM for %s!
PERF_RECORD_MMAP for %s missing!
PERF_RECORD_* events & perf_sample fieldsFailure to parse cache event '%s' possibly as PMUs don't support itperf_evsel__roundtrip_name_test%s: "%s" signedness(%d) is wrong, should be %d
%s: "%s" size (%d) should be %d!
Parse sched tracepoints fields
fdarray__filter()=%d != %d shouldn't have filtered anything
fdarray__filter()=%d != %d, should have filtered all fds
filtering all but fda->entries[2]:
fdarray__filter()=%d != 1, should have left just one event
filtering all but (fda->entries[0], fda->entries[3]):
fdarray__filter()=%d != 2, should have left just two events
%d: fdarray__add(fda, %d, %d) failed!
%d: fdarray__add(fda, %d, %d)=%d != %d
%d: fda->entries[%d](%d) != %d!
%d: fda->entries[%d].revents(%d) != %d!Add fd to a fdarray, making it autogrowFilter fds with revents mask in a fdarrayuncore_cha suffixes ordered ltuncore_cha suffixes ordered gtmrvl_ddr_pmu suffixes ordered ltmrvl_ddr_pmu suffixes ordered gt%s/bus/event_source/devices/%s/type%s/bus/event_source/devices/%s/eventsSkipping as no event directory "%s"
Invalid sysfs event name: %s/%s
sysfs event '%s' should be all lower/upper case, it will be matched using legacy encoding.config2:0-3,10-13,20-23,30-33,40-43,50-53,60-63
Failed to open test directory "%s"
Failed to mkdir PMU directory
Failed to open for writing file "type"
Failed to write to 'type' file
Failed to mkdir PMU format directory
Failure to set up path for "%s"
Failed to open for writing file "%s"
Failed to mkdir PMU events directory
perf-pmu-test/events/test-eventkrava01=15,krava02=170,krava03=1,krava11=27,krava12=1,krava13=2,krava21=119,krava22=11,krava23=2
Failed to write to 'test-event' file
Failure to set up buffer for "%s"
krava01=15,krava02=170,krava03=1,krava11=27,krava12=1,krava13=2,krava21=119,krava22=11,krava23=2Unexpected config1 value %llx
Unexpected config2 value %llx
Parsing with PMU format directoryExpected broken metric %s skipping
testing event table: found %d, but expected %d
Missing test event in test architecturetesting core PMU %s aliases: failed
testing core PMU %s aliases: no events to match
testing core PMU %s aliases: pass
testing aliases uncore PMU %s: mismatch expected aliases (%d) vs found (%d)
testing aliases uncore PMU %s: mismatched matching_pmu, %s vs %s
testing aliases uncore PMU %s: could not match alias %s
testing aliases uncore PMU %s: mismatch found aliases (%d) vs matched (%d)
testing aliases PMU %s: mismatched name, %s vs %s
testing aliases PMU %s: mismatched desc, %s vs %s
testing aliases PMU %s: mismatched long_desc, %s vs %s
testing aliases PMU %s: mismatched topic, %s vs %s
testing aliases PMU %s: mismatched str, %s vs %s
testing aliases PMU %s: mismatched pmu_name, %s vs %s
testing aliases core PMU %s: matched event %s
testing event e1 %s: mismatched name string, %s vs %s
testing event e1 %s: mismatched compat string, %s vs %s
testing event e1 %s: mismatched event, %s vs %s
testing event e1 %s: mismatched desc, %s vs %s
testing event e1 %s: mismatched topic, %s vs %s
testing event e1 %s: mismatched long_desc, %s vs %s
testing event e1 %s: mismatched pmu string, %s vs %s
testing event e1 %s: mismatched unit, %s vs %s
testing event e1 %s: mismatched perpkg, %d vs %d
testing event e1 %s: mismatched deprecated, %d vs %d
testing sys event table %s: pass
testing sys event table: could not find event %s
testing event table: could not find event %s
Parsing of PMU event table metricsParsing of PMU event table metrics with fake PMUsParsing of metric thresholds with fake PMUs(unc_p_power_state_occupancy.cores_c0 / unc_p_clockticks) * 100.imx8_ddr0@read\-cycles@ * 4 * 4imx8_ddr0@axid\-read\,axi_mask\=0xffff\,axi_id\=0x0000@ * 4(cstate_pkg@c2\-residency@ / msr@tsc@) * 100(imx8_ddr0@read\-cycles@ + imx8_ddr0@write\-cycles@)Counts total cache misses in first lookup result (high priority)uncore_imc_free_running.cache_missunc_cbo_xsnp_response.miss_evictionA cross-core snoop resulted from L3 Eviction which misses in some processor coreAttributable Level 3 cache access, readNumber of Enhanced Intel SpeedStep(R) Technology (EIST) transitionsevent=9,period=200000,umask=0x20Memory cluster signals to block micro-op dispatch for any reasonevent=0x9,period=0x30d40,umask=0x20event=6,period=200000,umask=0x80Number of segment register loadsevent=0x6,period=0x30d40,umask=0x80Not enough memory for machine setup
%2d: entry: %-8s [%-8s] %20s: period = %lu
%2d: entry: %8s:%5d [%-8s] %20s: period = %lu/%lu
Invalid count for matched entries: %zd of %zd
A entry from the other hists should have pair
Invalid count of dummy entries: %zd of %zd
Invalid count of total leader entries: %zd of %zd
Invalid count of total other entries: %zd of %zd
Other hists should not have dummy entries: %zd
Not enough memory for adding a hist entry
Unmatched nr samples for thread filterUnmatched nr hist entries for thread filterUnmatched total period for thread filterUnmatched nr samples for dso filterUnmatched nr hist entries for dso filterUnmatched total period for dso filterUnmatched nr samples for symbol filterUnmatched nr hist entries for symbol filterUnmatched total period for symbol filterUnmatched nr samples for socket filterUnmatched nr hist entries for socket filterUnmatched total period for socket filterUnmatched nr samples for all filterUnmatched nr hist entries for all filterUnmatched total period for all filteruse callchain: %d, cumulate callchain: %d
Incorrect number of hist entryInvalid callchain entry #%zd/%zdIncorrect number of callchain entryecho "import sys ; sys.path.insert(0, '%s'); import perf" | %s %sfailed setting up signal handler
failed setting up signal handler 2
count1 %lld, count2 %lld, count3 %lld, overflow %d, overflows_2 %d
failed: RF EFLAG recursion issue detected
failed: wrong count for bp1: %lld, expected 1
failed: wrong overflow (%d) hit, expected 3
failed: wrong overflow_2 (%d) hit, expected 3
failed: wrong count for bp2 (%lld), expected 3
failed: wrong count for bp3 (%lld), expected 2
Breakpoint overflow signal handler	Wrong number of executions %lld != %d
	Wrong number of overflows %d != %d
way too many debug registers, fix the test
watchpoints count %d, breakpoints count %d, has_ioctl %d, share %d
ioctl(PERF_EVENT_IOC_MODIFY_ATTRIBUTES) failed
Failed after retrying 1000 times
Number of exit events of a simple workload/proc/sys/kernel/perf_event_max_sample_rateCouldn't open evlist: %s
Hint: check %s, using %lu in this test.
failed to mmap event: %d (%s)
All (%d) samples have period value of 1!
Software clock events period valuesfailed with synthesizing processFailed to parse event dummy:u
Failed to move cycles event to frontFront event no longer at frontNon-tracking event is tracking
perf_evlist__disable_event failed!
sched_switch: cpu: %d prev_tid %d next_tid %d
cycles events even though event was disabled
parse_event(evlist, "dummy:u") failed!
parse_event(evlist, "cycles:u") failed!
Unable to open dummy and cycles event
evlist__mmap(evlist, UINT_MAX) failed!
prctl(PR_SET_NAME, (unsigned long)comm, 0, 0, 0) failed!
First time, failed to find tracking event.
evsel__disable(evsel) failed!
Second time, failed to find tracking event.
Use a dummy software event to keep trackingmachine__create_kernel_maps failed
thread_map__new_by_tid failed
perf_event__synthesize_thread_map failed
machine__findnew_thread failed
perf_evlist__open() failed!
%s
temp-perf-code-reading-test-file--Reading object code for memory address: %#lx
Hypervisor address can not be resolved - skipping
Unexpected kernel address - skipping
skipping the module address %#lx after text end
Too many kcore maps - skipping
addr going backwards, read beyond section?
Bytes read differ from those read by objdump
Bytes read match those read by objdump
machine__process_event failed, event type %u
objdump read too few bytes: %zd
%s -z -d --start-address=0x%lx --stop-address=0x%lx %sSamples differ at 'stream_id'
Samples differ at 'read.group.nr'
Samples differ at 'read.one.value'
Samples differ at 'read.time_enabled'
Samples differ at 'read.time_running'
Samples differ at 'read.group.values[i]'
Samples differ at 'read.one.id'
Samples differ at 'read.one.lost'
Samples differ at 'callchain->nr'
Samples differ at 'callchain->ips[i]'
Samples differ at 'branch_stack->nr'
Samples differ at 'branch_stack->hw_idx'
Samples differ at 'branch_stack->entries[i]'
Samples differ at 'user_regs.mask'
Samples differ at 'user_regs.abi'
Samples differ at 'user_regs'
Samples differ at 'user_stack.size'
Samples differ at 'user_stack'
Samples differ at 'transaction'
Samples differ at 'intr_regs.mask'
Samples differ at 'intr_regs.abi'
Samples differ at 'intr_regs'
Samples differ at 'phys_addr'
Samples differ at 'data_page_size'
Samples differ at 'code_page_size'
Samples differ at 'aux_sample.size'
Samples differ at 'aux_sample'
%s failed for sample_type %#lx, error %d
Event size mismatch: actual %zu vs expected %zu
parsing failed for sample_type %#lx
perf_event__process_attr failed
Parse with no sample_id_all bit set%s - alloc name %d, kmod %d, comp %d, name '%s'
%s (cpumode: %d) - is_kernel_module: %s
Session header CPU map not setCpu map - CPU ID doesn't matchCpu map - Core ID doesn't matchCpu map - Socket ID doesn't matchCpu map - Die ID doesn't matchCore map - Core ID doesn't matchCore map - Socket ID doesn't matchCore map - Die ID doesn't matchDie map - Socket ID doesn't matchDie map - Die ID doesn't matchSocket map - Socket ID doesn't matchSocket map - Thread IDX is setNode map - Node ID doesn't matchfailed to intersect map: bad nrfailed to intersect map: bad resultfailed to merge map: bad result1,3,5,7,9,11,13,15,17,19,21-40failed to synthesize stat_configfailed to synthesize attr update unitfailed to synthesize attr update scalefailed to synthesize attr update namefailed to synthesize attr update cpusfailed to parse event cpu-clock:u
attaching to spawned child, enable on exec
attaching to current thread as enabled
failed to call thread_map__new
attaching to current thread as disabled
Failed to open event cpu-clock:u
attaching to CPU 0 as enabled
failed to call perf_cpu_map__new
100 if 1 else 200 if 1 else 300100 if 0 else 200 if 1 else 300100 if 1 else 200 if 0 else 300100 if 0 else 200 if 0 else 300EVENT1\,param\=?@ + EVENT2\,param\=?@EVENT1 if #core_wide else EVENT21.0 if EVENT1 > 100.0 else 1.0syscalls:sys_enter_prctl/overwrite/Failed to parse tracepoint event, try use root
Unexpected counter: sample_count=%d, comm_count=%d
Failed to get correct path of perf
Failed to make a tempdir for build-id cache
Failed to read build id of %s
Failed to add build id cache of %s
Failed to open probe cache of %s
Failed to find %s:%s in the cache
SIGSEGV is observed as expected, try to recover.
	start: %lu end: %lu name: '%s' refcnt: %d
	start: %lu end: %lu name: '%s' refcnt: 1
Failed. ptime %lu expected %lu

perf_time__parse_for_ranges("%s")
first_sample_time %lu last_sample_time %lu
bad size: range_size %d range_num %d expected num %d
bad range %d expected %lu to %lu
1234567.123456789,1234567.1234567891234567.123456789,1234567.1234567901234567.123456789,1234567.123456790 7654321.987654321,7654321.987654444 8000000,8000000.00000000510000000000000000000000000000abcdefgh99i10000000000000000000000000000000000000000000000000000000000123456789ab99cLjava/lang/StringLatin1;equals([B[B)Zboolean java.lang.StringLatin1.equals(byte[], byte[])Ljava/util/zip/ZipUtils;CENSIZ([BI)Jlong java.util.zip.ZipUtils.CENSIZ(byte[], int)Ljava/util/regex/Pattern$BmpCharProperty;match(Ljava/util/regex/Matcher;ILjava/lang/CharSequence;)Zboolean java.util.regex.Pattern$BmpCharProperty.match(java.util.regex.Matcher, int, java.lang.CharSequence)Ljava/lang/AbstractStringBuilder;appendChars(Ljava/lang/String;II)Vvoid java.lang.AbstractStringBuilder.appendChars(java.lang.String, int, int)camlStdlib__anon_fn$5bstdlib$2eml$3a334$2c0$2d$2d54$5d_1453Stdlib.anon_fn[stdlib.ml:334,0--54]_1453camlStdlib__bytes__$2b$2b_2205test of individual --pfm-eventscpu_clk_unhalted.one_thread_activeFrontend_Bound_SMT failed, wrong ratioDCache_L2_Hits failed, wrong ratioDCache_L2_Misses failed, wrong ratioL1D_Cache_Fill_BW, wrong ratiocache_miss_cycles failed, wrong ratiogroup cache_miss_cycles failed, wrong ratiofailed to expand events for cgroups
  evsel[%d]: %s
  expected: %s
event group doesn't match: got %s, expect %s
event group member doesn't match: %d vs %d
failed to expand default eventsfailed to expand metric eventsperf_read_tsc_conversion is not supported in current kernel
prctl(PR_SET_NAME, (unsigned long)comm1, 0, 0, 0) failed!
prctl(PR_SET_NAME, (unsigned long)comm2, 0, 0, 0) failed!
evsel = evlist__event2evsel(evlist, event) failed!
evsel__parse_sample(evsel, event, &sample) failed!
1st event perf time %lu tsc %lu
rdtsc          time %lu tsc %lu
2nd event perf time %lu tsc %lu
This architecture does not supportperf_read_tsc_conversion is not supported
-- Testing version %d API --
/tmp/dlfilter-test-%u-perf-dataFilter used by the 'dlfilter C API' perf testFailed to get expected filter descriptionint bar(){};int foo(){bar();};int main(){foo();return 0;}Creating new host machine structure
Failed to find program symbolsFailed to create test perf.data fileperf_header__write_pipe() failedperf_event__synthesize_attr() failedperf_event__synthesize_sample() failed%s script -i %s --dlfilter %s/%s --dlarg first --dlarg %d --dlarg %lu --dlarg %lu --dlarg %d --dlarg lastperf_event_attr doesn't have sigtrap
missing signals or incorrectly deliveredFAILED sys_perf_event_open(): %s
Expected %d sigtraps, got %d, running on a kernel with sleepable spinlocks.
See https://lore.kernel.org/all/e368f2c848d77fbc8d259f44e2055fe469c219cf.camel@gmx.de/
Using %s for uncore pmu event
0x%x 0x%lx, 0x%x 0x%lx, 0x%x 0x%lx: %s
machine__findnew_thread() failed!
Failed to find map for current kernel module %sfailed: crossed the max stack value %d
failed: got unresolved address 0x%lx
got wrong number of stack entries %lu != %d
failed to allocate sample uregs data
Intel PT hybrid CPU compatibilitySamples differ at 'retire_lat'
{cpu_core/cycles/,cpu_core/branches/}{cpu_core/cycles/k,cpu_core/branches/u}cpu_core/config=10,config1,config2=3,period=1000/u{cpu_core/cycles/,cpu_core/cpu-cycles/}sched_setaffinity() failed for CPU %d
CPU %d CPUID leaf 20 subleaf %d
intel_pt_get_packet returned %d
Expected length: %d   Decoded length %d
Expected type: %d   Decoded type %d
Expected count: %d   Decoded count %d
Expected payload: 0x%llx   Decoded payload 0x%llx
Expected packet context: %d   Decoded packet context %d
Is %shybrid : CPUID leaf 7 subleaf 0 edx %#x (bit-15 indicates hybrid)
CPU %d same caps as previous CPU
CPU %d subleaf %d reg %d FAIL %#x vs %#x
CPU %d subleaf 1 reg 0 FAIL address filter count %#x vs %#x
failed to set breakpoint, 1st time: %s
failed to set breakpoint, 2nd time: %s
failed to PTRACE_PEEKUSER: %s
failed, breakpoint set to bogus address
type: 0x%x, config: 0x%lx, fd: %d  - 
/tmp/perf-test-X
./test-buildid-X
{cycles,instruct
test-prog
 q             Quit 
 B             Branch counter abbr list (Optional)
h/?/F1        Show this window
UP/DOWN/PGUP
PGDN/SPACE    Navigate
q/ESC/CTRL+C  Exit browser or go back to previous screen

For multiple event sessions:

TAB/UNTAB     Switch events

For symbolic views (--sort has sym):

ENTER         Zoom into DSO/Threads & Annotate current symbol
ESC           Zoom out
+             Expand/Collapse one callchain level
a             Annotate current symbol
C             Collapse all callchains
d             Zoom into current DSO
e             Expand/Collapse main entry callchains
E             Expand all callchains
F             Toggle percentage of filtered entries
H             Display column headers
k             Zoom into the kernel map
L             Change percent limit
m             Display context menu
S             Zoom into current Processor Socket
P             Print histograms to perf.hist.N
t             Zoom into current Thread
V             Verbose (DSO names in callchains, etc)
z             Toggle zeroing of samples
f             Enable/Disable events
/             Filter symbol by nameh/?/F1        Show this window
UP/DOWN/PGUP
PGDN/SPACE    Navigate
q/ESC/CTRL+C  Exit browser or go back to previous screen

For multiple event sessions:

TAB/UNTAB     Switch events

For symbolic views (--sort has sym):

ENTER         Zoom into DSO/Threads & Annotate current symbol
ESC           Zoom out
+             Expand/Collapse one callchain level
a             Annotate current symbol
C             Collapse all callchains
d             Zoom into current DSO
e             Expand/Collapse main entry callchains
E             Expand all callchains
F             Toggle percentage of filtered entries
H             Display column headers
k             Zoom into the kernel map
L             Change percent limit
m             Display context menu
S             Zoom into current Processor Socket
i             Show header information
P             Print histograms to perf.hist.N
r             Run available scripts
s             Switch to another data file in PWD
t             Zoom into current Thread
V             Verbose (DSO names in callchains, etc)
/             Filter symbol by name
0-9           Sort by event n in grouph/?/F1        Show this window
UP/DOWN/PGUP
PGDN/SPACE
LEFT/RIGHT    Navigate
q/ESC/CTRL+C  Exit browsercolor.uilibperf-gtk.soError:
Warning:
 %*.1f %*lu %*.2f%% %*sN/ASelfOverheadguest sysguest usrChildrenSamplesPeriodWeight1Weight2Weight3Not enough memory!            |          |
                %s
Bad callchain mode
%-*.*s / 
# %s%-.*s
#
%*sno entry >= %.2f%%
%.10s end
UNKNOWN%20s events: %10d  (%4.1f%%)
%20s events: %10d
colors.Press any key...Warning!topmediumgreennormalselectedblackyellowjump_arrowsbluemagentawhite@pltassertion failed at %s:%d
%s  %s [Percent: %s]String not found!Couldn't annotate %s:
%sPress ESC to exitNo source file location.Source file location: %sENTER: OK, ESC: CancelString: Searchui/browsers/annotate.cInvalid jump offset: %lxlocal hitsglobal hitslocal periodglobal period %10s %10s  %*s}; %10lu %10d %10.2f %#10x %#10x  %s%s %#10x %#10x  %*s%s	%s%sOffset%*s %10s %10s %10s  %sFieldSize%*s%c %s
 lost: %lu/%lu drop: %lu/%lu [z] -c %s  -S %s  --time %s,%s# Samples: %lu of event '%s'%lu%c%s%s: %ld%c%schunks LOST!%s%s%s<...>no entry >= %.2f%%Run scripts for all samples%sthe Kernel%.*lxAnnotate %sui/browsers/hists.cout ofintoCollapseExpandCollecting samples...perf.hist.%dCouldn't write to %s: %s%s written!Verbosity level set to %d
Symbol to showPercent LimitInvalid percent: %.2fDo you really want to exit?Annotate type %sZoom %s %s(%d) threadZoom %s %s threadBrowse map detailsZoom %s Processor Socket %dwith assemblerwith sourceExitAvailable samples? - helpBranch counter abbr list%*lx %*lx %c restart with -v to useSearch by name/addr%s not found!scripts.Running %s
Cannot run %s
 --inline-i perf script -s Show individual samples%s script %s -F +metric %s %s-F +disasm-F +srcline,+srccodeperf script command%s script %s%s%s %s %s%s 2>&1 | lessPress 'q' to exitHeader informationsamples.context--tid --cpu %s: CPU %d tid %d--show-lost-events TUI initialization failed.
^(kB)Fatal Error-------- backtrace --------Error:Warning:HelpEnter: Yes, ESC: No%s [%s/%s]GTK browser requested but could not find %s
Not enough memory to display remaining hits
WARN: jump target inconsistency, press 'o', notes->offsets[%#x] = NULL
Press 'h' for help on key bindingsui/browsers/../../util/annotate.hUP/DOWN/PGUP
PGDN/SPACE    Navigate
</>           Move to prev/next symbol
q/ESC/CTRL+C  Exit

ENTER         Go to target
H             Go to hottest instruction
TAB/shift+TAB Cycle thru hottest instructions
j             Toggle showing jump to target arrows
J             Toggle showing number of jump sources on targets
n             Search next string
o             Toggle disassembler output/simplified view
O             Bump offset level (jump targets -> +call -> all -> cycle thru)
s             Toggle source code view
t             Circulate percent, total period, samples view
c             Show min/max cycle
/             Search string
k             Toggle line numbers
l             Show full source file location
P             Print to [symbol_name].annotation file.
r             Run available scripts
p             Toggle percent type [local/global]
b             Toggle percent base [period/hits]
B             Branch counter abbr list (Optional)
?             Search string backwards
f             Toggle showing offsets to full address
Only available for source code lines.%d: nr_ent=%d, height=%d, idx=%d, top_idx=%d, nr_asm_entries=%dActions are only available for assembly lines.Actions are only available for function call/return & jump/branch instructions.
 The branch counter is not available.
Huh? No selection. Report to linux-kernel@vger.kernel.orgThe called function was not found.Not enough memory for annotating '%s' symbol!
Annotate type: '%s' (%d samples)UP/DOWN/PGUP
PGDN/SPACE    Navigate
</>           Move to prev/next symbol
e             Expand/Collapse current entry
E             Expand/Collapse all children of the current
q/ESC/CTRL+C  Exit

Can't search all data files due to memory shortage.
Too many perf data files in PWD!
Only the first 32 files will be listed.
Data switch failed due to memory shortage!
Won't switch the data files due to
no valid data file get selected!
Run scripts for samples of thread [%s]%sRun scripts for samples of symbol [%s]%sTo zoom out press ESC or ENTER + "Zoom out of %s DSO"To zoom out press ESC or ENTER + "Zoom out of %s(%d) thread"To zoom out press ESC or ENTER + "Zoom out of %s thread"Events are being lost, check IO/CPU overload!

You may want to run 'perf' using a RT scheduler policy:

 perf top -r 80

Or reduce the sampling frequency.%d: nr_ent=(%d,%d), etl: %d, rows=%d, idx=%d, fve: idx=%d, row_off=%d, nrows=%dPress 'f' again to re-enable the eventsPress '?' for help on key bindingsPress 'f' to disable the events or 'h' to see other hotkeysMax event group index to sort is %d (index from 0 to %d)Annotation is only available for symbolic views, include "sym*" in --sort to use it.No samples for the "%s" symbol.

Probably appeared just in a callchainToo many perf.hist.N files, nothing written!Please enter the name of symbol you want to see.
To remove the filter later, press / + ENTER.Please enter the value you want to hide entries under that percent.Zoom %s %s DSO (use the 'k' hotkey to zoom directly into the kernel)%s [%s] callchain (one level, same as '+' hotkey, use 'e'/'c' for the whole main level entry)Show context for individual samples %sSwitch to another data file in PWDESC: exit, ENTER|->: Browse histogramsPress ESC to exit, %s / to searchPrefix with 0x to search by addressShow individual samples with assemblerShow individual samples with sourceShow samples with custom perf script argumentsEnter perf script command line (without perf script prefix)--show-switch-events --show-task-events %s script %s%s --time %s %s%s %s%s --ns %s %s %s %s %s | less +/%sESC: exit, ENTER|->: Select optionmaximum size of symbol name reached!{�G�z�?����hhXXSort by index only available with group events! -F +brstackinsn��default_corebp_l1_btb_correctbranchL1 BTB Correctionevent=0x8a00bp_l2_btb_correctbranchL2 BTB Correctionevent=0x8b00l3_cache_rdcacheL3 cache access, readevent=0x4000Attributable Level 3 cache access, readsegment_reg_loads.anyotherNumber of segment register loadsevent=6,period=200000,umask=0x8000dispatch_blocked.anyotherMemory cluster signals to block micro-op dispatch for any reasonevent=9,period=200000,umask=0x2000eist_transotherNumber of Enhanced Intel SpeedStep(R) Technology (EIST) transitionsevent=0x3a,period=20000000hisi_sccl,ddrcuncore_hisi_ddrc.flux_wcmduncoreDDRC write commandsevent=200DDRC write commandsuncore_cboxunc_cbo_xsnp_response.miss_evictionuncoreA cross-core snoop resulted from L3 Eviction which misses in some processor coreevent=0x22,umask=0x8100A cross-core snoop resulted from L3 Eviction which misses in some processor coreevent-hyphenuncoreUNC_CBO_HYPHENevent=0xe000UNC_CBO_HYPHENevent-two-hyphuncoreUNC_CBO_TWO_HYPHevent=0xc000UNC_CBO_TWO_HYPHhisi_sccl,l3cuncore_hisi_l3c.rd_hit_cpipeuncoreTotal read hitsevent=700Total read hitsuncore_imc_free_runninguncore_imc_free_running.cache_missuncoreTotal cache missesevent=0x1200Total cache missesuncore_imcuncore_imc.cache_hitsuncoreTotal cache hitsevent=0x3400Total cache hitsuncore_sys_ddr_pmusys_ddr_pmu.write_cyclesuncoreddr write-cycles eventevent=0x2bv800uncore_sys_ccn_pmusys_ccn_pmu.read_cyclesuncoreccn read-cycles eventconfig=0x2c0x0100uncore_sys_cmn_pmusys_cmn_pmu.hnf_cache_missuncoreCounts total cache misses in first lookup result (high priority)eventid=1,type=5(434|436|43c|43a).*00l1d.hwpf_misscacheL1D.HWPF_MISSevent=0x51,period=1000003,umask=0x2000l1d.replacementcacheCounts the number of cache lines replaced in L1 data cacheevent=0x51,period=100003,umask=100Counts L1D data line replacements including opportunistic replacements, and replacements that require stall-for-replace or block-for-replacel1d_pend_miss.fb_fullcacheNumber of cycles a demand request has waited due to L1D Fill Buffer (FB) unavailabilityevent=0x48,period=1000003,umask=200Counts number of cycles a demand request has waited due to L1D Fill Buffer (FB) unavailability. Demand requests include cacheable/uncacheable demand load, store, lock or SW prefetch accessesl1d_pend_miss.fb_full_periodscacheNumber of phases a demand request has waited due to L1D Fill Buffer (FB) unavailabilityevent=0x48,cmask=1,edge=1,period=1000003,umask=200Counts number of phases a demand request has waited due to L1D Fill Buffer (FB) unavailability. Demand requests include cacheable/uncacheable demand load, store, lock or SW prefetch accessesl1d_pend_miss.l2_stallcacheThis event is deprecated. 
Refer to new event L1D_PEND_MISS.L2_STALLSevent=0x48,period=1000003,umask=410l1d_pend_miss.l2_stallscacheNumber of cycles a demand request has waited due to L1D due to lack of L2 resourcesevent=0x48,period=1000003,umask=400Counts number of cycles a demand request has waited due to L1D due to lack of L2 resources. Demand requests include cacheable/uncacheable demand load, store, lock or SW prefetch accessesl1d_pend_miss.pendingcacheNumber of L1D misses that are outstandingevent=0x48,period=1000003,umask=100Counts number of L1D misses that are outstanding in each cycle, that is each cycle the number of Fill Buffers (FB) outstanding required by Demand Reads. FB either is held by demand loads, or it is held by non-demand loads and gets hit at least once by demand. The valid outstanding interval is defined until the FB deallocation by one of the following ways: from FB allocation, if FB is allocated by demand from the demand Hit FB, if it is allocated by hardware or software prefetch. Note: In the L1D, a Demand Read contains cacheable or noncacheable demand loads, including ones causing cache-line splits and reads due to page walks resulted from any request typel1d_pend_miss.pending_cyclescacheCycles with L1D load Misses outstandingevent=0x48,cmask=1,period=1000003,umask=100Counts duration of L1D miss outstanding in cyclesl2_lines_in.allcacheL2 cache lines filling L2event=0x25,period=100003,umask=0x1f00Counts the number of L2 cache lines filling the L2. Counting does not cover rejectsl2_lines_out.useless_hwpfcacheCache lines that have been L2 hardware prefetched but not used by demand accessesevent=0x26,period=200003,umask=400Counts the number of cache lines that have been prefetched by the L2 hardware prefetcher but not used by demand access when evicted from the L2 cachel2_request.allcacheAll accesses to L2 cache [This event is alias to L2_RQSTS.REFERENCES]event=0x24,period=200003,umask=0xff00Counts all requests that were hit or true misses in L2 cache. True-miss excludes misses that were merged with ongoing L2 misses. [This event is alias to L2_RQSTS.REFERENCES]l2_request.misscacheRead requests with true-miss in L2 cache. [This event is alias to L2_RQSTS.MISS]event=0x24,period=200003,umask=0x3f00Counts read requests of any type with true-miss in the L2 cache. True-miss excludes L2 misses that were merged with ongoing L2 misses. [This event is alias to L2_RQSTS.MISS]l2_rqsts.all_code_rdcacheL2 code requestsevent=0x24,period=200003,umask=0xe400Counts the total number of L2 code requestsl2_rqsts.all_demand_data_rdcacheDemand Data Read access L2 cacheevent=0x24,period=200003,umask=0xe100Counts Demand Data Read requests accessing the L2 cache. These requests may hit or miss L2 cache. True-miss exclude misses that were merged with ongoing L2 misses. An access is counted oncel2_rqsts.all_demand_misscacheDemand requests that miss L2 cacheevent=0x24,period=200003,umask=0x2700Counts demand requests that miss L2 cachel2_rqsts.all_hwpfcacheL2_RQSTS.ALL_HWPFevent=0x24,period=200003,umask=0xf000l2_rqsts.all_rfocacheRFO requests to L2 cacheevent=0x24,period=200003,umask=0xe200Counts the total number of RFO (read for ownership) requests to L2 cache. 
L2 RFO requests include both L1D demand RFO misses as well as L1D RFO prefetchesl2_rqsts.code_rd_hitcacheL2 cache hits when fetching instructions, code readsevent=0x24,period=200003,umask=0xc400Counts L2 cache hits when fetching instructions, code readsl2_rqsts.code_rd_misscacheL2 cache misses when fetching instructionsevent=0x24,period=200003,umask=0x2400Counts L2 cache misses when fetching instructionsl2_rqsts.demand_data_rd_hitcacheDemand Data Read requests that hit L2 cacheevent=0x24,period=200003,umask=0xc100Counts the number of demand Data Read requests initiated by load instructions that hit L2 cachel2_rqsts.demand_data_rd_misscacheDemand Data Read miss L2 cacheevent=0x24,period=200003,umask=0x2100Counts demand Data Read requests with true-miss in the L2 cache. True-miss excludes misses that were merged with ongoing L2 misses. An access is counted oncel2_rqsts.hwpf_misscacheL2_RQSTS.HWPF_MISSevent=0x24,period=200003,umask=0x3000l2_rqsts.misscacheRead requests with true-miss in L2 cache. [This event is alias to L2_REQUEST.MISS]event=0x24,period=200003,umask=0x3f00Counts read requests of any type with true-miss in the L2 cache. True-miss excludes L2 misses that were merged with ongoing L2 misses. [This event is alias to L2_REQUEST.MISS]l2_rqsts.referencescacheAll accesses to L2 cache [This event is alias to L2_REQUEST.ALL]event=0x24,period=200003,umask=0xff00Counts all requests that were hit or true misses in L2 cache. True-miss excludes misses that were merged with ongoing L2 misses. [This event is alias to L2_REQUEST.ALL]l2_rqsts.rfo_hitcacheRFO requests that hit L2 cacheevent=0x24,period=200003,umask=0xc200Counts the RFO (Read-for-Ownership) requests that hit L2 cachel2_rqsts.rfo_misscacheRFO requests that miss L2 cacheevent=0x24,period=200003,umask=0x2200Counts the RFO (Read-for-Ownership) requests that miss L2 cachel2_rqsts.swpf_hitcacheSW prefetch requests that hit L2 cacheevent=0x24,period=200003,umask=0xc800Counts Software prefetch requests that hit the L2 cache. Accounts for PREFETCHNTA and PREFETCHT0/1/2 instructions when FB is not fulll2_rqsts.swpf_misscacheSW prefetch requests that miss L2 cacheevent=0x24,period=200003,umask=0x2800Counts Software prefetch requests that miss the L2 cache. Accounts for PREFETCHNTA and PREFETCHT0/1/2 instructions when FB is not fulll2_trans.l2_wbcacheL2 writebacks that access L2 cacheevent=0x23,period=200003,umask=0x4000Counts L2 writebacks that access L2 cachelongest_lat_cache.misscacheCounts the number of cacheable memory requests that miss in the LLC. Counts on a per core basisevent=0x2e,period=200003,umask=0x4100Counts the number of cacheable memory requests that miss in the Last Level Cache (LLC). Requests include demand loads, reads for ownership (RFO), instruction fetches and L1 HW prefetches. If the core has access to an L3 cache, the LLC is the L3 cache, otherwise it is the L2 cache. Counts on a per core basislongest_lat_cache.misscacheCore-originated cacheable requests that missed L3  (Except hardware prefetches to the L3)event=0x2e,period=100003,umask=0x4100Counts core-originated cacheable requests that miss the L3 cache (Longest Latency cache). Requests include data and code reads, Reads-for-Ownership (RFOs), speculative accesses and hardware prefetches to the L1 and L2.  It does not include hardware prefetches to the L3, and may not count other types of requests to the L3longest_lat_cache.referencecacheCounts the number of cacheable memory requests that access the LLC. 
Counts on a per core basisevent=0x2e,period=200003,umask=0x4f00Counts the number of cacheable memory requests that access the Last Level Cache (LLC). Requests include demand loads, reads for ownership (RFO), instruction fetches and L1 HW prefetches. If the core has access to an L3 cache, the LLC is the L3 cache, otherwise it is the L2 cache. Counts on a per core basislongest_lat_cache.referencecacheCore-originated cacheable requests that refer to L3 (Except hardware prefetches to the L3)event=0x2e,period=100003,umask=0x4f00Counts core-originated cacheable requests to the L3 cache (Longest Latency cache). Requests include data and code reads, Reads-for-Ownership (RFOs), speculative accesses and hardware prefetches to the L1 and L2.  It does not include hardware prefetches to the L3, and may not count other types of requests to the L3mem_bound_stalls.ifetchcacheCounts the number of cycles the core is stalled due to an instruction cache or TLB miss which hit in the L2, LLC, DRAM or MMIO (Non-DRAM)event=0x34,period=200003,umask=0x3800Counts the number of cycles the core is stalled due to an instruction cache or translation lookaside buffer (TLB) miss which hit in the L2, LLC, DRAM or MMIO (Non-DRAM)mem_bound_stalls.ifetch_dram_hitcacheCounts the number of cycles the core is stalled due to an instruction cache or TLB miss which hit in DRAM or MMIO (Non-DRAM)event=0x34,period=200003,umask=0x2000Counts the number of cycles the core is stalled due to an instruction cache or translation lookaside buffer (TLB) miss which hit in DRAM or MMIO (non-DRAM)mem_bound_stalls.ifetch_l2_hitcacheCounts the number of cycles the core is stalled due to an instruction cache or TLB miss which hit in the L2 cacheevent=0x34,period=200003,umask=800Counts the number of cycles the core is stalled due to an instruction cache or Translation Lookaside Buffer (TLB) miss which hit in the L2 cachemem_bound_stalls.ifetch_llc_hitcacheCounts the number of cycles the core is stalled due to an instruction cache or TLB miss which hit in the LLC or other core with HITE/F/Mevent=0x34,period=200003,umask=0x1000Counts the number of cycles the core is stalled due to an instruction cache or Translation Lookaside Buffer (TLB) miss which hit in the Last Level Cache (LLC) or other core with HITE/F/Mmem_bound_stalls.loadcacheCounts the number of cycles the core is stalled due to a demand load miss which hit in the L2, LLC, DRAM or MMIO (Non-DRAM)event=0x34,period=200003,umask=700mem_bound_stalls.load_dram_hitcacheCounts the number of cycles the core is stalled due to a demand load miss which hit in DRAM or MMIO (Non-DRAM)event=0x34,period=200003,umask=400mem_bound_stalls.load_l2_hitcacheCounts the number of cycles the core is stalled due to a demand load which hit in the L2 cacheevent=0x34,period=200003,umask=100mem_bound_stalls.load_llc_hitcacheCounts the number of cycles the core is stalled due to a demand load which hit in the LLC or other core with HITE/F/Mevent=0x34,period=200003,umask=200Counts the number of cycles the core is stalled due to a demand load which hit in the Last Level Cache (LLC) or other core with HITE/F/Mmem_inst_retired.all_loadscacheRetired load instructions  Supports address when precise (Precise event)event=0xd0,period=1000003,umask=0x8100Counts all retired load instructions. 
mem_inst_retired.all_loads (cache): event=0xd0,period=1000003,umask=0x81. Counts all retired load instructions; this event accounts for the SW prefetch instructions PREFETCHNTA, PREFETCHT0/1/2 and PREFETCHW. Supports address when precise (Precise event).
mem_inst_retired.all_stores (cache): event=0xd0,period=1000003,umask=0x82. Counts all retired store instructions. Supports address when precise (Precise event).
mem_inst_retired.any (cache): event=0xd0,period=1000003,umask=0x83. Counts all retired memory instructions, loads and stores. Supports address when precise (Precise event).
mem_inst_retired.lock_loads (cache): event=0xd0,period=100007,umask=0x21. Counts retired load instructions with locked access. Supports address when precise (Precise event).
mem_inst_retired.split_loads (cache): event=0xd0,period=100003,umask=0x41. Counts retired load instructions that split across a cacheline boundary. Supports address when precise (Precise event).
mem_inst_retired.split_stores (cache): event=0xd0,period=100003,umask=0x42. Counts retired store instructions that split across a cacheline boundary. Supports address when precise (Precise event).
mem_inst_retired.stlb_miss_loads (cache): event=0xd0,period=100003,umask=0x11. Number of retired load instructions that (start a) miss in the 2nd-level TLB (STLB). Supports address when precise (Precise event).
mem_inst_retired.stlb_miss_stores (cache): event=0xd0,period=100003,umask=0x12. Number of retired store instructions that (start a) miss in the 2nd-level TLB (STLB). Supports address when precise (Precise event).
mem_load_completed.l1_miss_any (cache): event=0x43,period=1000003,umask=0xfd. Number of completed demand load requests that missed the L1 data cache, including shadow misses (FB hits, merge to an ongoing L1D miss).
mem_load_l3_hit_retired.xsnp_fwd (cache): event=0xd2,period=20011,umask=4. Counts retired load instructions whose data sources were HitM responses from shared L3. Supports address when precise (Precise event).
mem_load_l3_hit_retired.xsnp_hit (cache): event=0xd2,period=20011,umask=2. Counts retired load instructions whose data sources were L3 and cross-core snoop hits in on-pkg core cache. Supports address when precise (Precise event).
mem_load_l3_hit_retired.xsnp_hitm (cache): event=0xd2,period=20011,umask=4. Counts retired load instructions whose data sources were HitM responses from shared L3. Supports address when precise (Precise event).
mem_load_l3_hit_retired.xsnp_miss (cache): event=0xd2,period=20011,umask=1. Counts retired load instructions whose data sources were L3 hit and cross-core snoop missed in on-pkg core cache. Supports address when precise (Precise event).
mem_load_l3_hit_retired.xsnp_none (cache): event=0xd2,period=100003,umask=8. Counts retired load instructions whose data sources were hits in L3 without snoops required. Supports address when precise (Precise event).
mem_load_l3_hit_retired.xsnp_no_fwd (cache): event=0xd2,period=20011,umask=2. Counts retired load instructions whose data sources were L3 and cross-core snoop hits in on-pkg core cache. Supports address when precise (Precise event).
mem_load_l3_miss_retired.local_dram (cache): event=0xd3,period=100007,umask=1. Retired load instructions whose data sources missed the L3 but were serviced from local DRAM. Supports address when precise (Precise event).
mem_load_misc_retired.uc (cache): event=0xd4,period=100007,umask=4. Retired instructions with at least one load to uncacheable memory-type, or at least one cache-line split locked access (Bus Lock). Supports address when precise (Precise event).
mem_load_retired.fb_hit (cache): event=0xd1,period=100007,umask=0x40. Number of completed demand load requests that missed the L1 but hit the FB (fill buffer), because a preceding miss to the same cacheline initiated the line to be brought into L1 but the data was not yet ready in L1. Supports address when precise (Precise event).
mem_load_retired.l1_hit (cache): event=0xd1,period=1000003,umask=1. Counts retired load instructions with at least one uop that hit in the L1 data cache. Includes all SW prefetches and lock instructions regardless of the data source. Supports address when precise (Precise event).
mem_load_retired.l1_miss (cache): event=0xd1,period=200003,umask=8. Counts retired load instructions with at least one uop that missed in the L1 cache. Supports address when precise (Precise event).
mem_load_retired.l2_hit (cache): event=0xd1,period=200003,umask=2. Counts retired load instructions with L2 cache hits as data sources. Supports address when precise (Precise event).
mem_load_retired.l2_miss (cache): event=0xd1,period=100021,umask=0x10. Counts retired load instructions that missed the L2 cache as data sources. Supports address when precise (Precise event).
mem_load_retired.l3_hit (cache): event=0xd1,period=100021,umask=4. Counts retired load instructions with at least one uop that hit in the L3 cache. Supports address when precise (Precise event).
mem_load_retired.l3_miss (cache): event=0xd1,period=50021,umask=0x20. Counts retired load instructions with at least one uop that missed in the L3 cache. Supports address when precise (Precise event).
mem_load_uops_retired.dram_hit (cache): event=0xd1,period=200003,umask=0x80. Counts the number of load uops retired that hit in DRAM. Supports address when precise (Precise event).
mem_load_uops_retired.l2_hit (cache): event=0xd1,period=200003,umask=2. Counts the number of load uops retired that hit in the L2 cache. Supports address when precise (Precise event).
mem_load_uops_retired.l3_hit (cache): event=0xd1,period=200003,umask=4. Counts the number of load uops retired that hit in the L3 cache. Supports address when precise (Precise event).
mem_scheduler_block.all (cache): event=4,period=20003,umask=7. Counts the number of cycles that uops are blocked for any of the following reasons: load buffer, store buffer or RSV full.
mem_scheduler_block.ld_buf (cache): event=4,period=20003,umask=2. Counts the number of cycles that uops are blocked due to a load buffer full condition.
mem_scheduler_block.rsv (cache): event=4,period=20003,umask=4. Counts the number of cycles that uops are blocked due to an RSV full condition.
mem_scheduler_block.st_buf (cache): event=4,period=20003,umask=1. Counts the number of cycles that uops are blocked due to a store buffer full condition.
mem_store_retired.l2_hit (cache): event=0x44,period=200003,umask=1. MEM_STORE_RETIRED.L2_HIT.
mem_uops_retired.all_loads (cache): event=0xd0,period=200003,umask=0x81. Counts the total number of load uops retired. Supports address when precise (Precise event).
mem_uops_retired.all_stores (cache): event=0xd0,period=200003,umask=0x82. Counts the total number of store uops retired. Supports address when precise (Precise event).
mem_uops_retired.load_latency_gt_128 (cache): event=0xd0,period=1000003,umask=5,ldlat=0x80. Counts the number of tagged loads with an instruction latency that exceeds or equals the threshold of 128 cycles as defined in MEC_CR_PEBS_LD_LAT_THRESHOLD (3F6H). Only counts with PEBS enabled. If a PEBS record is generated, will populate the PEBS Latency and PEBS Data Source fields accordingly. Supports address when precise (Must be precise).
mem_uops_retired.load_latency_gt_16 (cache): event=0xd0,period=1000003,umask=5,ldlat=0x10. As above, with a 16-cycle threshold (Must be precise).
mem_uops_retired.load_latency_gt_256 (cache): event=0xd0,period=1000003,umask=5,ldlat=0x100. As above, with a 256-cycle threshold (Must be precise).
mem_uops_retired.load_latency_gt_32 (cache): event=0xd0,period=1000003,umask=5,ldlat=0x20. As above, with a 32-cycle threshold (Must be precise).
mem_uops_retired.load_latency_gt_4 (cache): event=0xd0,period=1000003,umask=5,ldlat=0x4. As above, with a 4-cycle threshold (Must be precise).
mem_uops_retired.load_latency_gt_512 (cache): event=0xd0,period=1000003,umask=5,ldlat=0x200. As above, with a 512-cycle threshold (Must be precise).
mem_uops_retired.load_latency_gt_64 (cache): event=0xd0,period=1000003,umask=5,ldlat=0x40. As above, with a 64-cycle threshold (Must be precise).
mem_uops_retired.load_latency_gt_8 (cache): event=0xd0,period=1000003,umask=5,ldlat=0x8. As above, with an 8-cycle threshold (Must be precise).
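The ldlat values in the load_latency_gt_* encodings above are simply the cycle thresholds written in hex, as a quick check confirms:

    # The ldlat field in these encodings holds the latency threshold in cycles.
    # Values taken directly from the entries above.
    thresholds = {"gt_4": 0x4, "gt_8": 0x8, "gt_16": 0x10, "gt_32": 0x20,
                  "gt_64": 0x40, "gt_128": 0x80, "gt_256": 0x100, "gt_512": 0x200}
    for name, ldlat in thresholds.items():
        print(f"load_latency_{name}: ldlat=0x{ldlat:x} -> {ldlat} cycles")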
mem_uops_retired.lock_loads (cache): event=0xd0,period=200003,umask=0x21. Counts the number of load uops retired that performed one or more locks. Supports address when precise (Precise event).
mem_uops_retired.split_loads (cache): event=0xd0,period=200003,umask=0x41. Counts the number of retired split load uops. Supports address when precise (Precise event).
mem_uops_retired.store_latency (cache): event=0xd0,period=1000003,umask=6. Counts the number of store uops retired. Counts with or without PEBS enabled. If PEBS is enabled and a PEBS record is generated, will populate the PEBS Latency and PEBS Data Source fields accordingly. Supports address when precise (Must be precise).
mem_uop_retired.any (cache): event=0xe5,period=1000003,umask=3. Number of retired micro-operations (uops) for load or store memory accesses.
ocr.demand_data_rd.l3_hit (cache): event=0xb7,period=100003,umask=1,offcore_rsp=0x3F803C0001. Counts demand data reads that were supplied by the L3 cache.
ocr.demand_data_rd.l3_hit.snoop_hitm (cache): event=0xb7,period=100003,umask=1,offcore_rsp=0x10003C0001. Counts demand data reads that were supplied by the L3 cache where a snoop was sent, the snoop hit, and modified data was forwarded.
ocr.demand_data_rd.l3_hit.snoop_hitm (cache): event=0x2a,period=100003,umask=1,offcore_rsp=0x10003C0001. Counts demand data reads that resulted in a snoop hit in another core's caches; data forwarding is required as the data is modified.
ocr.demand_data_rd.l3_hit.snoop_hit_no_fwd (cache): event=0xb7,period=100003,umask=1,offcore_rsp=0x4003C0001. Counts demand data reads that were supplied by the L3 cache where a snoop was sent, the snoop hit, but no data was forwarded.
ocr.demand_data_rd.l3_hit.snoop_hit_with_fwd (cache): event=0xb7,period=100003,umask=1,offcore_rsp=0x8003C0001. Counts demand data reads that were supplied by the L3 cache where a snoop was sent, the snoop hit, and non-modified data was forwarded.
ocr.demand_data_rd.l3_hit.snoop_hit_with_fwd (cache): event=0x2a,period=100003,umask=1,offcore_rsp=0x8003C0001. Counts demand data reads that resulted in a snoop hit in another core's caches which forwarded the unmodified data to the requesting core.
ocr.demand_rfo.l3_hit (cache): event=0xb7,period=100003,umask=1,offcore_rsp=0x3F803C0002. Counts demand reads for ownership (RFO) and software prefetches for exclusive ownership (PREFETCHW) that were supplied by the L3 cache.
ocr.demand_rfo.l3_hit.snoop_hitm (cache): event=0xb7,period=100003,umask=1,offcore_rsp=0x10003C0002. Counts demand RFOs and software prefetches for exclusive ownership (PREFETCHW) that were supplied by the L3 cache where a snoop was sent, the snoop hit, and modified data was forwarded.
ocr.demand_rfo.l3_hit.snoop_hitm (cache): event=0x2a,period=100003,umask=1,offcore_rsp=0x10003C0002. Counts demand RFO requests and software prefetches for exclusive ownership (PREFETCHW) that resulted in a snoop hit in another core's caches; data forwarding is required as the data is modified.
offcore_requests.all_requests (cache): event=0x21,period=100003,umask=0x80. OFFCORE_REQUESTS.ALL_REQUESTS.
offcore_requests.data_rd (cache): event=0x21,period=100003,umask=8. Counts the demand and prefetch data reads. All Core Data Reads include cacheable 'Demands' and L2 prefetchers (not L3 prefetchers). Counting also covers reads due to page walks resulting from any request type.
offcore_requests.demand_code_rd (cache): event=0x21,period=100003,umask=2. Counts both cacheable and non-cacheable code read requests.
offcore_requests.demand_data_rd (cache): event=0x21,period=100003,umask=1. Counts the Demand Data Read requests sent to uncore. Use it in conjunction with OFFCORE_REQUESTS_OUTSTANDING to determine average latency in the uncore.
offcore_requests.demand_rfo (cache): event=0x21,period=100003,umask=4. Counts the demand RFO (read for ownership) requests including regular RFOs, locks, ItoM.
offcore_requests_outstanding.all_data_rd (cache): event=0x20,period=1000003,umask=8. This event is deprecated. Refer to new event OFFCORE_REQUESTS_OUTSTANDING.DATA_RD. Spec update: ADL038.
offcore_requests_outstanding.cycles_with_data_rd (cache): event=0x20,cmask=1,period=1000003,umask=8. OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DATA_RD. Spec update: ADL038.
offcore_requests_outstanding.cycles_with_demand_code_rd (cache): event=0x20,cmask=1,period=1000003,umask=2. Counts cycles with offcore outstanding Code Read transactions in the SuperQueue (SQ), queue to uncore. The 'offcore outstanding' state of the transaction lasts from the L2 miss until the transaction completion is sent to the requestor (SQ deallocation). See the corresponding umask under OFFCORE_REQUESTS.
offcore_requests_outstanding.cycles_with_demand_data_rd (cache): event=0x20,cmask=1,period=2000003,umask=1. Cycles where at least 1 outstanding demand data read request is pending.
offcore_requests_outstanding.cycles_with_demand_rfo (cache): event=0x20,cmask=1,period=1000003,umask=4. For every cycle where the core is waiting on at least 1 outstanding demand RFO request, increments by 1.
offcore_requests_outstanding.data_rd (cache): event=0x20,period=1000003,umask=8. OFFCORE_REQUESTS_OUTSTANDING.DATA_RD. Spec update: ADL038.
offcore_requests_outstanding.demand_code_rd (cache): event=0x20,period=1000003,umask=2. Counts the number of offcore outstanding Code Read transactions in the super queue every cycle. The 'offcore outstanding' state of the transaction lasts from the L2 miss until the transaction completion is sent to the requestor (SQ deallocation). See the corresponding umask under OFFCORE_REQUESTS.
offcore_requests_outstanding.demand_data_rd (cache): event=0x20,period=1000003,umask=1. For every cycle, increments by the number of outstanding demand data read requests pending. Requests are considered outstanding from the time they miss the core's L2 cache until the transaction completion message is sent to the requestor.
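The OFFCORE_REQUESTS.DEMAND_DATA_RD description above suggests pairing it with the outstanding-occupancy counter to estimate average uncore latency: occupancy divided by requests. A worked sketch; the counter values are made-up placeholders for illustration:

    # Average demand-read latency in the uncore, following the hint in the
    # OFFCORE_REQUESTS.DEMAND_DATA_RD description: occupancy / requests.
    outstanding = 1_200_000_000   # OFFCORE_REQUESTS_OUTSTANDING.DEMAND_DATA_RD
    requests    =    15_000_000   # OFFCORE_REQUESTS.DEMAND_DATA_RD
    avg_latency_cycles = outstanding / requests
    print(f"average uncore latency: {avg_latency_cycles:.1f} core cycles")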
sq_misc.bus_lock (cache): event=0x2c,period=100003,umask=0x10. Counts bus locks; accounts for cache line split locks and UC locks. This is the more expensive bus lock needed to enforce cache coherency for certain memory accesses that must be done atomically. Can be created by issuing an atomic instruction (via the LOCK prefix) which causes a cache line split or accesses uncacheable memory.
sw_prefetch_access.any (cache): event=0x40,period=100003,umask=0xf. Counts the number of PREFETCHNTA, PREFETCHW, PREFETCHT0, PREFETCHT1 or PREFETCHT2 instructions executed.
sw_prefetch_access.nta (cache): event=0x40,period=100003,umask=1. Counts the number of PREFETCHNTA instructions executed.
sw_prefetch_access.prefetchw (cache): event=0x40,period=100003,umask=8. Counts the number of PREFETCHW instructions executed.
sw_prefetch_access.t0 (cache): event=0x40,period=100003,umask=2. Counts the number of PREFETCHT0 instructions executed.
sw_prefetch_access.t1_t2 (cache): event=0x40,period=100003,umask=4. Counts the number of PREFETCHT1 or PREFETCHT2 instructions executed.
topdown_fe_bound.icache (cache): event=0x71,period=1000003,umask=0x20. Counts the number of issue slots every cycle that were not delivered by the frontend due to instruction cache misses.
arith.fpdiv_active (floating point): event=0xb0,cmask=1,period=1000003,umask=1. ARITH.FPDIV_ACTIVE.
assists.fp (floating point): event=0xc1,period=100003,umask=2. Counts all microcode Floating Point assists.
assists.sse_avx_mix (floating point): event=0xc1,period=1000003,umask=0x10. ASSISTS.SSE_AVX_MIX.
fp_arith_dispatched.port_0 (floating point): event=0xb3,period=2000003,umask=1. FP_ARITH_DISPATCHED.PORT_0 (alias to FP_ARITH_DISPATCHED.V0).
fp_arith_dispatched.port_1 (floating point): event=0xb3,period=2000003,umask=2. FP_ARITH_DISPATCHED.PORT_1 (alias to FP_ARITH_DISPATCHED.V1).
fp_arith_dispatched.port_5 (floating point): event=0xb3,period=2000003,umask=4. FP_ARITH_DISPATCHED.PORT_5 (alias to FP_ARITH_DISPATCHED.V2).
fp_arith_dispatched.v0 (floating point): event=0xb3,period=2000003,umask=1. FP_ARITH_DISPATCHED.V0 (alias to FP_ARITH_DISPATCHED.PORT_0).
fp_arith_dispatched.v1 (floating point): event=0xb3,period=2000003,umask=2. FP_ARITH_DISPATCHED.V1 (alias to FP_ARITH_DISPATCHED.PORT_1).
fp_arith_dispatched.v2 (floating point): event=0xb3,period=2000003,umask=4. FP_ARITH_DISPATCHED.V2 (alias to FP_ARITH_DISPATCHED.PORT_5).
fp_arith_inst_retired.128b_packed_double (floating point): event=0xc7,period=100003,umask=4. Number of SSE/AVX computational 128-bit packed double precision floating-point instructions retired; some instructions count twice as noted below. Each count represents 2 computation operations, one per element. Applies to SSE* and AVX* packed double precision instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.
fp_arith_inst_retired.128b_packed_single (floating point): event=0xc7,period=100003,umask=8. As above for 128-bit packed single precision; each count represents 4 computation operations, one per element. Applies to ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT RSQRT RCP DPP FM(N)ADD/SUB; DPP and FM(N)ADD/SUB count twice.
fp_arith_inst_retired.256b_packed_double (floating point): event=0xc7,period=100003,umask=0x10. As above for 256-bit packed double precision; each count represents 4 computation operations, one per element. Applies to ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT FM(N)ADD/SUB; FM(N)ADD/SUB count twice.
fp_arith_inst_retired.256b_packed_single (floating point): event=0xc7,period=100003,umask=0x20. As above for 256-bit packed single precision; each count represents 8 computation operations, one per element. Applies to ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT RSQRT RCP DPP FM(N)ADD/SUB; DPP and FM(N)ADD/SUB count twice.
fp_arith_inst_retired.4_flops (floating point): event=0xc7,period=100003,umask=0x18. Number of SSE/AVX computational 128-bit packed single precision and 256-bit packed double precision floating-point instructions retired; each count represents 2 and/or 4 computation operations, one per element. Applies to ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX RCP14 RSQRT14 SQRT DPP FM(N)ADD/SUB; DPP and FM(N)ADD/SUB count twice. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.
fp_arith_inst_retired.scalar (floating point): event=0xc7,period=1000003,umask=3. Number of SSE/AVX computational scalar single and double precision floating-point instructions retired; each count represents 1 computational operation. Applies to ADD SUB MUL DIV MIN MAX SQRT RSQRT RCP FM(N)ADD/SUB; FM(N)ADD/SUB count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.
fp_arith_inst_retired.scalar_double (floating point): event=0xc7,period=100003,umask=1. Scalar double precision; each count represents 1 computational operation. Applies to ADD SUB MUL DIV MIN MAX SQRT FM(N)ADD/SUB; FM(N)ADD/SUB count twice.
fp_arith_inst_retired.scalar_single (floating point): event=0xc7,period=100003,umask=2. Scalar single precision; each count represents 1 computational operation. Applies to ADD SUB MUL DIV MIN MAX SQRT RSQRT RCP FM(N)ADD/SUB; FM(N)ADD/SUB count twice.
fp_arith_inst_retired.vector (floating point): event=0xc7,period=1000003,umask=0xfc. Number of any vector retired FP arithmetic instructions. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.
machine_clears.fp_assist (floating point): event=0xc3,period=20003,umask=4. Counts the number of floating point operations retired that required microcode assist; this is not a reflection of the number of FP operations, instructions or uops.
uops_retired.fpdiv (floating point): event=0xc2,period=2000003,umask=8. Counts the number of floating point divide uops retired (x87 and SSE, including x87 sqrt) (Precise event).
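The per-count operation weights stated in the fp_arith_inst_retired descriptions above (1 for scalar, 2/4/8 for the packed widths) give a retired-FLOP total directly; FMA-style instructions already count twice in the events themselves. A sketch with placeholder counter values:

    # FLOPs from the fp_arith_inst_retired counts above: each event's "count
    # represents N computation operations" line gives the multiplier.
    counts = {
        "scalar":             10_000_000,  # 1 FLOP per count (single + double)
        "128b_packed_double":  4_000_000,  # 2 FLOPs per count
        "128b_packed_single":  3_000_000,  # 4 FLOPs per count
        "256b_packed_double":  2_000_000,  # 4 FLOPs per count
        "256b_packed_single":  1_000_000,  # 8 FLOPs per count
    }
    weights = {"scalar": 1, "128b_packed_double": 2, "128b_packed_single": 4,
               "256b_packed_double": 4, "256b_packed_single": 8}
    flops = sum(counts[k] * weights[k] for k in counts)
    print(f"total retired FLOPs: {flops:,}")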
baclears.any (frontend): event=0xe6,period=100003,umask=1. Counts the total number of BACLEARS, which occur when the Branch Target Buffer (BTB) prediction, or lack thereof, was corrected by a later branch predictor in the frontend. Includes BACLEARS due to all branch types including conditional and unconditional jumps, returns, and indirect branches.
baclears.any (frontend): event=0x60,period=100003,umask=1. Clears due to Unknown Branches. Number of times the front-end is resteered when it finds a branch instruction in a fetch line. This is called an Unknown Branch, which occurs the first time a branch instruction is fetched or when the branch is no longer tracked by the BPU (Branch Prediction Unit).
decode.lcp (frontend): event=0x87,period=500009,umask=1. Counts cycles that Instruction Length Decoder (ILD) stalls occurred due to dynamically changing prefix length of the decoded instruction (by operand size prefix 0x66, address size prefix 0x67 or REX.W for Intel64). The count is proportional to the number of prefixes in a 16B-line. This may result in a three-cycle penalty for each LCP (Length Changing Prefix) in a 16-byte chunk.
decode.ms_busy (frontend): event=0x87,period=500009,umask=2. Cycles the Microcode Sequencer is busy.
dsb2mite_switches.penalty_cycles (frontend): event=0x61,period=100003,umask=2. The Decode Stream Buffer (DSB) is a uop cache that holds translations of previously fetched instructions that were decoded by the legacy x86 decode pipeline (MITE). This event counts fetch penalty cycles when a transition occurs from DSB to MITE.
frontend_retired.any_dsb_miss (frontend): event=0xc6,period=100007,umask=1,frontend=0x1. Counts retired instructions that experienced a DSB (Decode Stream Buffer, i.e. the decoded instruction cache) miss (Precise event).
frontend_retired.dsb_miss (frontend): event=0xc6,period=100007,umask=1,frontend=0x11. Number of retired instructions that experienced a critical DSB miss; critical means stalls were exposed to the back-end as a result of the DSB miss (Precise event).
frontend_retired.itlb_miss (frontend): event=0xc6,period=100007,umask=1,frontend=0x14. Counts retired instructions that experienced an iTLB (Instruction TLB) true miss (Precise event).
frontend_retired.l1i_miss (frontend): event=0xc6,period=100007,umask=1,frontend=0x12. Counts retired instructions that experienced an Instruction L1 Cache true miss (Precise event).
frontend_retired.l2_miss (frontend): event=0xc6,period=100007,umask=1,frontend=0x13. Counts retired instructions that experienced an Instruction L2 Cache true miss (Precise event).
frontend_retired.latency_ge_1 (frontend): event=0xc6,period=100007,umask=1,frontend=0x600106. Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of at least 1 cycle which was not interrupted by a back-end stall (Precise event).
frontend_retired.latency_ge_128 (frontend): event=0xc6,period=100007,umask=1,frontend=0x608006. As above, with a 128-cycle interval (Precise event).
frontend_retired.latency_ge_16 (frontend): event=0xc6,period=100007,umask=1,frontend=0x601006. As above, with a 16-cycle interval (Precise event).
frontend_retired.latency_ge_2 (frontend): event=0xc6,period=100007,umask=1,frontend=0x600206. As above, with an interval of at least 2 cycles (Precise event).
frontend_retired.latency_ge_256 (frontend): event=0xc6,period=100007,umask=1,frontend=0x610006. As above, with a 256-cycle interval (Precise event).
frontend_retired.latency_ge_2_bubbles_ge_1 (frontend): event=0xc6,period=100007,umask=1,frontend=0x100206. Counts retired instructions that are delivered to the back-end after the front-end had at least 1 bubble-slot for a period of 2 cycles. A bubble-slot is an empty issue-pipeline slot while there was no RAT stall (Precise event).
frontend_retired.latency_ge_32 (frontend): event=0xc6,period=100007,umask=1,frontend=0x602006. As above for the latency_ge family, with a 32-cycle interval (Precise event).
frontend_retired.latency_ge_4 (frontend): event=0xc6,period=100007,umask=1,frontend=0x600406. As above, with a 4-cycle interval (Precise event).
frontend_retired.latency_ge_512 (frontend): event=0xc6,period=100007,umask=1,frontend=0x620006. As above, with a 512-cycle interval (Precise event).
frontend_retired.latency_ge_64 (frontend): event=0xc6,period=100007,umask=1,frontend=0x604006. As above, with a 64-cycle interval (Precise event).
frontend_retired.latency_ge_8 (frontend): event=0xc6,period=100007,umask=1,frontend=0x600806. As above, with an 8-cycle interval (Precise event).
frontend_retired.ms_flows (frontend): event=0xc6,period=100007,umask=1,frontend=0x8. FRONTEND_RETIRED.MS_FLOWS (Precise event).
frontend_retired.stlb_miss (frontend): event=0xc6,period=100007,umask=1,frontend=0x15. Counts retired instructions that experienced an STLB (2nd level TLB) true miss (Precise event).
frontend_retired.unknown_branch (frontend): event=0xc6,period=100007,umask=1,frontend=0x17. FRONTEND_RETIRED.UNKNOWN_BRANCH (Precise event).
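The frontend_retired.latency_ge_* events above are precise, so they can attribute front-end starvation to specific code addresses. A hedged recording sketch, assuming a perf build that exposes this event name; ./my_workload is a hypothetical target binary:

    import subprocess

    # Sample code locations that resume after >=16-cycle front-end stalls,
    # using the named event from the table above.
    subprocess.run([
        "perf", "record",
        "-e", "frontend_retired.latency_ge_16",
        "--", "./my_workload",      # hypothetical target binary
    ])
    # Then inspect the samples with: perf report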
icache.accesses (frontend): event=0x80,period=200003,umask=3. Counts the total number of requests to the instruction cache. The event only counts new cache line accesses, so multiple back-to-back fetches to the exact same cache line or byte chunk count as one. Specifically, the event counts when accesses from sequential code cross the cache line boundary, or when a branch target is moved to a new line or to a non-sequential byte chunk of the same line.
icache.misses (frontend): event=0x80,period=200003,umask=2. Counts the number of missed requests to the instruction cache; the same new-line-access counting rules as icache.accesses apply.
icache_data.stalls (frontend): event=0x80,period=500009,umask=4. Counts cycles where a code line fetch is stalled due to an L1 instruction cache miss. The decode pipeline works at a 32 Byte granularity.
icache_data.stall_periods (frontend): event=0x80,cmask=1,edge=1,period=500009,umask=4. ICACHE_DATA.STALL_PERIODS.
icache_tag.stalls (frontend): event=0x83,period=200003,umask=4. Counts cycles where a code fetch is stalled due to an L1 instruction cache tag miss.
idq.dsb_cycles_any (frontend): event=0x79,cmask=1,period=2000003,umask=8. Counts the number of cycles uops were delivered to the Instruction Decode Queue (IDQ) from the Decode Stream Buffer (DSB) path.
idq.dsb_cycles_ok (frontend): event=0x79,cmask=6,period=2000003,umask=8. Counts the number of cycles where the optimal number of uops was delivered to the IDQ from the DSB path. The count includes uops that may 'bypass' the IDQ.
idq.dsb_uops (frontend): event=0x79,period=2000003,umask=8. Counts the number of uops delivered to the IDQ from the DSB path.
idq.mite_cycles_any (frontend): event=0x79,cmask=1,period=2000003,umask=4. Counts the number of cycles uops were delivered to the IDQ from the MITE (legacy decode pipeline) path. During these cycles uops are not being delivered from the DSB.
idq.mite_cycles_ok (frontend): event=0x79,cmask=6,period=2000003,umask=4. Counts the number of cycles where the optimal number of uops was delivered to the IDQ from the MITE path. During these cycles uops are not being delivered from the DSB.
idq.mite_uops (frontend): event=0x79,period=2000003,umask=4. Counts the number of uops delivered to the IDQ from the MITE path; uops are not being delivered from the DSB during this time.
idq.ms_cycles_any (frontend): event=0x79,cmask=1,period=2000003,umask=0x20. Counts cycles during which uops are being delivered to the IDQ while the Microcode Sequencer (MS) is busy. Uops may be initiated by the DSB or MITE.
idq.ms_switches (frontend): event=0x79,cmask=1,edge=1,period=100003,umask=0x20. Number of switches from the DSB or MITE to the Microcode Sequencer.
idq.ms_uops (frontend): event=0x79,period=1000003,umask=0x20. Counts the total number of uops delivered by the Microcode Sequencer (MS).
idq_bubbles.core (frontend): event=0x9c,period=1000003,umask=1. Counts the number of uops not delivered by the Instruction Decode Queue (IDQ) to the back-end of the pipeline when there were no back-end stalls. This event counts for one SMT thread in a given cycle. (Alias to IDQ_UOPS_NOT_DELIVERED.CORE.)
idq_bubbles.cycles_0_uops_deliv.core (frontend): event=0x9c,cmask=6,period=1000003,umask=1. Counts the number of cycles when no uops were delivered by the IDQ to the back-end when there were no back-end stalls. This event counts for one SMT thread in a given cycle. (Alias to IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE.)
idq_bubbles.cycles_fe_was_ok (frontend): event=0x9c,cmask=1,inv=1,period=1000003,umask=1. Counts the number of cycles when the optimal number of uops was delivered by the IDQ to the back-end when there were no back-end stalls. This event counts for one SMT thread in a given cycle. (Alias to IDQ_UOPS_NOT_DELIVERED.CYCLES_FE_WAS_OK.)
idq_uops_not_delivered.core (frontend): event=0x9c,period=1000003,umask=1. Same encoding and description as IDQ_BUBBLES.CORE above; the two names are aliases.
idq_uops_not_delivered.cycles_0_uops_deliv.core (frontend): event=0x9c,cmask=6,period=1000003,umask=1. Same encoding and description as IDQ_BUBBLES.CYCLES_0_UOPS_DELIV.CORE above; the two names are aliases.
idq_uops_not_delivered.cycles_fe_was_ok (frontend): event=0x9c,cmask=1,inv=1,period=1000003,umask=1. Same encoding and description as IDQ_BUBBLES.CYCLES_FE_WAS_OK above; the two names are aliases.
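IDQ_UOPS_NOT_DELIVERED.CORE above counts undelivered issue slots, which is the numerator of the usual front-end-bound ratio. A sketch assuming a 6-wide allocation pipeline (consistent with the cmask=6 used by the CYCLES_0_UOPS_DELIV encoding above); the cycle counter is not part of this excerpt, and the values are placeholders:

    # Rough front-end-bound share of issue slots, top-down style.
    slots_width        = 6                # assumed allocation width
    cycles             = 2_000_000_000    # unhalted-cycles counter (not in this table)
    uops_not_delivered = 1_500_000_000    # IDQ_UOPS_NOT_DELIVERED.CORE
    fe_bound = uops_not_delivered / (slots_width * cycles)
    print(f"front-end bound: {fe_bound:.1%} of issue slots")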
cycle_activity.stalls_l3_miss (memory): event=0xa3,cmask=6,period=1000003,umask=6. Execution stalls while an L3 cache miss demand load is outstanding.
ld_head.any_at_ret (memory): event=5,period=1000003,umask=0xff. Counts the number of cycles that the head (oldest load) of the load buffer is stalled for any number of reasons, including an L1 miss, WCB full, pagewalk, store address block or store data block, on a load that retires.
ld_head.l1_bound_at_ret (memory): event=5,period=1000003,umask=0xf4. Counts the number of cycles that the head of the load buffer is stalled due to a core bound stall, including a store address match, a DTLB miss or a page walk that detains the load from retiring.
ld_head.l1_miss_at_ret (memory): event=5,period=1000003,umask=0x81. Counts the number of cycles that the head of the load buffer and retirement are both stalled due to a DL1 miss.
ld_head.other_at_ret (memory): event=5,period=1000003,umask=0xc0. Counts the number of cycles that the head of the load buffer and retirement are both stalled due to other block cases such as pipeline conflicts, fences, etc.
ld_head.pgwalk_at_ret (memory): event=5,period=1000003,umask=0xa0. Counts the number of cycles that the head of the load buffer and retirement are both stalled due to a pagewalk.
ld_head.st_addr_at_ret (memory): event=5,period=1000003,umask=0x84. Counts the number of cycles that the head of the load buffer and retirement are both stalled due to a store address match.
machine_clears.memory_ordering (memory): event=0xc3,period=20003,umask=2. Counts the number of machine clears due to memory ordering caused by a snoop from an external agent. Does not count internally generated machine clears such as those due to memory disambiguation.
machine_clears.memory_ordering (memory): event=0xc3,period=100003,umask=2. Counts the number of machine clears detected due to memory ordering. Memory ordering machine clears may apply when a memory read does not conform to the memory ordering rules of the x86 architecture.
memory_activity.cycles_l1d_miss (memory): event=0x47,cmask=2,period=1000003,umask=2. Cycles while an L1 cache miss demand load is outstanding.
memory_activity.stalls_l1d_miss (memory): event=0x47,cmask=3,period=1000003,umask=3. Execution stalls while an L1 cache miss demand load is outstanding.
memory_activity.stalls_l2_miss (memory): event=0x47,cmask=5,period=1000003,umask=5. Execution stalls while an L2 cache miss demand cacheable load request is outstanding (will not count for uncacheable demand requests, e.g. bus lock).
memory_activity.stalls_l3_miss (memory): event=0x47,cmask=9,period=1000003,umask=9. Execution stalls while an L3 cache miss demand cacheable load request is outstanding (will not count for uncacheable demand requests, e.g. bus lock).
mem_trans_retired.load_latency_gt_1024 (memory): event=0xcd,period=53,umask=1,ldlat=0x400. Counts randomly selected loads when the latency from first dispatch to completion is greater than 1024 cycles. Reported latency may be longer than just the memory latency. Supports address when precise (Must be precise).
mem_trans_retired.load_latency_gt_128 (memory): event=0xcd,period=1009,umask=1,ldlat=0x80. As above, with a 128-cycle threshold (Must be precise).
mem_trans_retired.load_latency_gt_16 (memory): event=0xcd,period=20011,umask=1,ldlat=0x10. As above, with a 16-cycle threshold (Must be precise).
mem_trans_retired.load_latency_gt_256 (memory): event=0xcd,period=503,umask=1,ldlat=0x100. As above, with a 256-cycle threshold (Must be precise).
mem_trans_retired.load_latency_gt_32 (memory): event=0xcd,period=100007,umask=1,ldlat=0x20. As above, with a 32-cycle threshold (Must be precise).
mem_trans_retired.load_latency_gt_4 (memory): event=0xcd,period=100003,umask=1,ldlat=0x4. As above, with a 4-cycle threshold (Must be precise).
mem_trans_retired.load_latency_gt_512 (memory): event=0xcd,period=101,umask=1,ldlat=0x200. As above, with a 512-cycle threshold (Must be precise).
mem_trans_retired.load_latency_gt_64 (memory): event=0xcd,period=2003,umask=1,ldlat=0x40. As above, with a 64-cycle threshold (Must be precise).
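These mem_trans_retired.load_latency_gt_* events are the latency-sampling counterpart of the mem_uops_retired family earlier: precise, address-capable, with progressively smaller default periods at higher (rarer) thresholds. A hedged recording sketch; the event name depends on the perf build, and ./my_workload is a hypothetical target:

    import subprocess

    # Sample loads whose dispatch-to-completion latency exceeds 128 cycles.
    # :P requests maximum precision; -d records the data addresses that the
    # "Supports address when precise" note above refers to.
    subprocess.run([
        "perf", "record",
        "-e", "mem_trans_retired.load_latency_gt_128:P",
        "-d",
        "--", "./my_workload",      # hypothetical target binary
    ])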
mem_trans_retired.load_latency_gt_8 (memory): event=0xcd,period=50021,umask=1,ldlat=0x8. Same pattern as the other load_latency events above, with an 8-cycle threshold (Must be precise).
mem_trans_retired.store_sample (memory): event=0xcd,period=1000003,umask=2. Counts retired memory accesses with at least 1 store operation. This PEBS event is the precisely-distributed (PDist) trigger covering all store uops for sampling by the PEBS Store Latency Facility. The facility is described in Intel SDM Volume 3 section 19.9.8. Supports address when precise (Must be precise).
ocr.demand_data_rd.l3_miss (memory): event=0xb7,period=100003,umask=1,offcore_rsp=0x3F84400001. Counts demand data reads that were not supplied by the L3 cache.
ocr.demand_data_rd.l3_miss (memory): event=0x2a,period=100003,umask=1,offcore_rsp=0x3FBFC00001. Counts demand data reads that were not supplied by the L3 cache.
ocr.demand_data_rd.l3_miss_local (memory): event=0xb7,period=100003,umask=1,offcore_rsp=0x3F84400001. Counts demand data reads that were not supplied by the L3 cache. (L3_MISS_LOCAL is an alias to L3_MISS.)
ocr.demand_rfo.l3_miss (memory): event=0xb7,period=100003,umask=1,offcore_rsp=0x3F84400002. Counts demand reads for ownership (RFO) and software prefetches for exclusive ownership (PREFETCHW) that were not supplied by the L3 cache.
ocr.demand_rfo.l3_miss (memory): event=0x2a,period=100003,umask=1,offcore_rsp=0x3FBFC00002. Counts demand RFO requests and software prefetches for exclusive ownership (PREFETCHW) that were not supplied by the L3 cache.
ocr.demand_rfo.l3_miss_local (memory): event=0xb7,period=100003,umask=1,offcore_rsp=0x3F84400002. Counts demand RFOs and software prefetches for exclusive ownership (PREFETCHW) that were not supplied by the L3 cache. (L3_MISS_LOCAL is an alias to L3_MISS.)
offcore_requests.l3_miss_demand_data_rd (memory): event=0x21,period=100003,umask=0x10. Counts demand data read requests that miss the L3 cache.
offcore_requests_outstanding.l3_miss_demand_data_rd (memory): event=0x20,period=2000003,umask=0x10. For every cycle, increments by the number of demand data read requests pending that are known to have missed the L3 cache. Note that this does not capture all elapsed cycles while requests are outstanding, only cycles from when the requests were known by the requesting core to have missed the L3 cache.
assists.hardware (other): event=0xc1,period=100003,umask=4. ASSISTS.HARDWARE.
assists.page_fault (other): event=0xc1,period=1000003,umask=8. ASSISTS.PAGE_FAULT.
core_power.license_1 (other): event=0x28,period=200003,umask=2. CORE_POWER.LICENSE_1.
core_power.license_2 (other): event=0x28,period=200003,umask=4. CORE_POWER.LICENSE_2.
core_power.license_3 (other): event=0x28,period=200003,umask=8. CORE_POWER.LICENSE_3.
lbr_inserts.any (other): event=0xe4,period=1000003,umask=1. This event is deprecated. (Alias to MISC_RETIRED.LBR_INSERTS.) (Precise event).
ocr.corewb_m.any_response (other): event=0xb7,period=100003,umask=1,offcore_rsp=0x10008. Counts modified writebacks from the L1 cache and L2 cache that have any type of response.
ocr.demand_data_rd.any_response (other): event=0xb7,period=100003,umask=1,offcore_rsp=0x10001. Counts demand data reads that have any type of response.
ocr.demand_data_rd.any_response (other): event=0x2a,period=100003,umask=1,offcore_rsp=0x10001. Counts demand data reads that have any type of response.
ocr.demand_data_rd.dram (other): event=0x2a,period=100003,umask=1,offcore_rsp=0x184000001. Counts demand data reads that were supplied by DRAM.
ocr.demand_rfo.any_response (other): event=0xb7,period=100003,umask=1,offcore_rsp=0x10002. Counts demand reads for ownership (RFO) and software prefetches for exclusive ownership (PREFETCHW) that have any type of response.
ocr.demand_rfo.any_response (other): event=0x2a,period=100003,umask=1,offcore_rsp=0x10002. Counts demand RFO requests and software prefetches for exclusive ownership (PREFETCHW) that have any type of response.
ocr.streaming_wr.any_response (other): event=0xb7,period=100003,umask=1,offcore_rsp=0x10800. Counts streaming stores that have any type of response.
ocr.streaming_wr.any_response (other): event=0x2a,period=100003,umask=1,offcore_rsp=0x10800. Counts streaming stores that have any type of response.
rs.empty (other): event=0xa5,period=1000003,umask=7. Counts cycles during which the Reservation Station (RS) is empty for this logical processor. This is usually caused when the front-end pipeline runs into starvation periods (e.g. branch mispredictions or i-cache misses).
rs.empty_count (other): event=0xa5,cmask=1,edge=1,inv=1,period=100003,umask=7. Counts ends of periods where the Reservation Station (RS) was empty. Could be useful to closely sample on front-end latency issues (see the FRONTEND_RETIRED event of designated precise events).
rs.empty_resource (other): event=0xa5,period=1000003,umask=1. Cycles when the Reservation Station (RS) is empty due to a resource in the back-end.
rs_empty.count (other): event=0xa5,cmask=1,edge=1,inv=1,period=100003,umask=7. This event is deprecated. Refer to new event RS.EMPTY_COUNT.
rs_empty.cycles (other): event=0xa5,period=1000003,umask=7. This event is deprecated. Refer to new event RS.EMPTY.
serialization.c01_ms_scb (other): event=0x75,period=200003,umask=4. Counts the number of issue slots in a UMWAIT or TPAUSE instruction where no uop issues due to the instruction putting the CPU into the C0.1 activity state. For Tremont, UMWAIT and TPAUSE will only put the CPU into the C0.1 activity state (not the C0.2 activity state).
xq.full_cycles (other): event=0x2d,cmask=1,period=1000003,umask=1. Number of cycles when the thread is active and the uncore cannot take any further requests (for example prefetches, loads or stores initiated by the core that miss the L2 cache).
[pipeline]
  arith.divider_active (event=0xb0,cmask=1,period=1000003,umask=9): Deprecated; refer to ARITH.DIV_ACTIVE.
  arith.div_active (event=0xb0,cmask=1,period=1000003,umask=9): Cycles when the divide unit is busy executing divide or square root operations; accounts for integer and floating-point operations.
  arith.fp_divider_active (event=0xb0,cmask=1,period=1000003,umask=1): Deprecated; refer to ARITH.FPDIV_ACTIVE.
  arith.idiv_active (event=0xb0,cmask=1,period=1000003,umask=8): Cycles the integer divider is busy.
  arith.int_divider_active (event=0xb0,cmask=1,period=1000003,umask=8): Deprecated; refer to ARITH.IDIV_ACTIVE.
  assists.any (event=0xc1,period=100003,umask=0x1b): Occurrences where a microcode assist is invoked by hardware; examples include AD (page Access Dirty), FP and AVX related assists.
  br_inst_retired.all_branches (event=0xc4,period=200003): Total branch instructions retired, all branch types; counts when the instruction pointer (IP) is resteered by a branch instruction and that branch successfully retires (precise event).
  br_inst_retired.all_branches (event=0xc4,period=400009): All branch instructions retired (precise event).
  br_inst_retired.call (event=0xc4,period=200003,umask=0xf9): Deprecated; refer to BR_INST_RETIRED.NEAR_CALL (precise event).
  br_inst_retired.cond (event=0xc4,period=200003,umask=0x7e): Retired JCC (Jump on Conditional Code) branch instructions, both taken and not taken (precise event).
  br_inst_retired.cond (event=0xc4,period=400009,umask=0x11): Conditional branch instructions retired (precise event).
  br_inst_retired.cond_ntaken (event=0xc4,period=400009,umask=0x10): Not-taken conditional branch instructions retired (precise event).
  br_inst_retired.cond_taken (event=0xc4,period=200003,umask=0xfe): Taken JCC branch instructions retired (precise event).
  br_inst_retired.cond_taken (event=0xc4,period=400009,umask=1): Taken conditional branch instructions retired (precise event).
  br_inst_retired.far_branch (event=0xc4,period=200003,umask=0xbf): Far branch instructions retired, including far jump, far call and return, and interrupt call and return (precise event).
  br_inst_retired.far_branch (event=0xc4,period=100007,umask=0x40): Far branch instructions retired (precise event).
  br_inst_retired.indirect (event=0xc4,period=200003,umask=0xeb): Near indirect JMP and near indirect CALL branch instructions retired (precise event).
  br_inst_retired.indirect (event=0xc4,period=100003,umask=0x80): Near indirect branch instructions retired, excluding returns; a TSX abort is an indirect branch (precise event).
  br_inst_retired.indirect_call (event=0xc4,period=200003,umask=0xfb): Near indirect CALL branch instructions retired (precise event).
  br_inst_retired.ind_call (event=0xc4,period=200003,umask=0xfb): Deprecated; refer to BR_INST_RETIRED.INDIRECT_CALL (precise event).
  br_inst_retired.jcc (event=0xc4,period=200003,umask=0x7e): Deprecated; refer to BR_INST_RETIRED.COND (precise event).
  br_inst_retired.near_call (event=0xc4,period=200003,umask=0xf9): Near CALL branch instructions retired (precise event).
  br_inst_retired.near_call (event=0xc4,period=100007,umask=2): Direct and indirect near call instructions retired (precise event).
  br_inst_retired.near_return (event=0xc4,period=200003,umask=0xf7): Near RET branch instructions retired (precise event).
  br_inst_retired.near_return (event=0xc4,period=100007,umask=8): Return instructions retired (precise event).
  br_inst_retired.near_taken (event=0xc4,period=200003,umask=0xc0): Near taken branch instructions retired (precise event).
  br_inst_retired.near_taken (event=0xc4,period=400009,umask=0x20): Taken branch instructions retired (precise event).
  br_inst_retired.non_return_ind (event=0xc4,period=200003,umask=0xeb): Deprecated; refer to BR_INST_RETIRED.INDIRECT (precise event).
  br_inst_retired.rel_call (event=0xc4,period=200003,umask=0xfd): Near relative CALL branch instructions retired (precise event).
  br_inst_retired.return (event=0xc4,period=200003,umask=0xf7): Deprecated; refer to BR_INST_RETIRED.NEAR_RETURN (precise event).
  br_inst_retired.taken_jcc (event=0xc4,period=200003,umask=0xfe): Deprecated; refer to BR_INST_RETIRED.COND_TAKEN (precise event).
  br_misp_retired.all_branches (event=0xc5,period=200003): Total mispredicted branch instructions retired, all branch types. Target prediction lets the processor begin executing before the non-speculative path is known; the branch prediction unit (BPU) predicts the target address from the branch IP and the execution path that reached it. A misprediction discards all instructions executed on the speculative path and re-fetches from the correct path (precise event).
  br_misp_retired.all_branches (event=0xc5,period=400009): All retired branches mispredicted by the processor; when the misprediction is discovered at execution, all wrong-path (speculative) instructions are discarded and fetch restarts from the correct path (precise event).
  br_misp_retired.cond (event=0xc5,period=200003,umask=0x7e): Mispredicted JCC branch instructions retired (precise event).
  br_misp_retired.cond (event=0xc5,period=400009,umask=0x11): Mispredicted conditional branch instructions retired (precise event).
  br_misp_retired.cond_ntaken (event=0xc5,period=400009,umask=0x10): Conditional branches retired that were mispredicted and not taken (precise event).
  br_misp_retired.cond_taken (event=0xc5,period=200003,umask=0xfe): Mispredicted taken JCC branch instructions retired (precise event).
  br_misp_retired.cond_taken (event=0xc5,period=400009,umask=1): Taken conditional mispredicted branch instructions retired (precise event).
  br_misp_retired.indirect (event=0xc5,period=200003,umask=0xeb): Mispredicted near indirect JMP and near indirect CALL retired (precise event).
  br_misp_retired.indirect (event=0xc5,period=100003,umask=0x80): Mispredicted near indirect branches retired, excluding returns; a TSX abort is an indirect branch (precise event).
  br_misp_retired.indirect_call (event=0xc5,period=200003,umask=0xfb): Mispredicted near indirect CALL retired (precise event).
  br_misp_retired.indirect_call (event=0xc5,period=400009,umask=2): Retired mispredicted indirect (near taken) CALL instructions, both register and memory indirect (precise event).
  br_misp_retired.ind_call (event=0xc5,period=200003,umask=0xfb): Deprecated; refer to BR_MISP_RETIRED.INDIRECT_CALL (precise event).
  br_misp_retired.jcc (event=0xc5,period=200003,umask=0x7e): Deprecated; refer to BR_MISP_RETIRED.COND (precise event).
  br_misp_retired.near_taken (event=0xc5,period=200003,umask=0x80): Mispredicted near taken branch instructions retired (precise event).
  br_misp_retired.near_taken (event=0xc5,period=400009,umask=0x20): Near branches retired that were mispredicted and taken (precise event).
  br_misp_retired.non_return_ind (event=0xc5,period=200003,umask=0xeb): Deprecated; refer to BR_MISP_RETIRED.INDIRECT (precise event).
  br_misp_retired.ret (event=0xc5,period=100007,umask=8): Mispredicted RET instructions retired; a non-precise version (does not use PEBS) of the return-misprediction count (precise event).
  br_misp_retired.return (event=0xc5,period=200003,umask=0xf7): Mispredicted near RET branch instructions retired (precise event).
  br_misp_retired.taken_jcc (event=0xc5,period=200003,umask=0xfe): Deprecated; refer to BR_MISP_RETIRED.COND_TAKEN (precise event).
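Illustrative arithmetic (not from the dump): branch misprediction ratios from the retired-branch counters listed above. The raw counts are assumed to have been collected already, e.g. with perf stat -e br_inst_retired.all_branches,br_misp_retired.all_branches,inst_retired.any:

    # Placeholder counter values; substitute real readings.
    br_retired   = 1_000_000   # BR_INST_RETIRED.ALL_BRANCHES
    br_mispred   =    12_000   # BR_MISP_RETIRED.ALL_BRANCHES
    instructions = 6_000_000   # INST_RETIRED.ANY

    misp_rate = br_mispred / br_retired            # fraction of branches mispredicted
    mpki      = 1000 * br_mispred / instructions   # mispredicts per kilo-instruction
    print(f"misprediction rate {misp_rate:.2%}, MPKI {mpki:.2f}")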
  cpu_clk_unhalted.c01 (event=0xec,period=2000003,umask=0x10): Core clocks while the thread is in the C0.1 optimized state (light-weight, slower wakeup, more power saving), entered via the TPAUSE or UMWAIT instructions.
  cpu_clk_unhalted.c02 (event=0xec,period=2000003,umask=0x20): Core clocks while the thread is in the C0.2 optimized state (faster wakeup, less power saving), entered via TPAUSE or UMWAIT.
  cpu_clk_unhalted.c0_wait (event=0xec,period=2000003,umask=0x70): Core clocks while the thread is in C0.1 or C0.2 (TPAUSE/UMWAIT) or running a PAUSE in the C0 ACPI state.
  cpu_clk_unhalted.core (event=0x3c,period=2000003): Unhalted core clock cycles on fixed counter 1. The core enters the halt state when running HLT; because core frequency can change, this event's ratio to time can change.
  cpu_clk_unhalted.core_p (event=0x3c,period=2000003): The same count on a programmable general-purpose performance counter.
  cpu_clk_unhalted.distributed (event=0xec,period=2000003,umask=2): Cycle counts distributed evenly between active hyperthreads (those in C0); a hyperthread becomes inactive when it executes HLT or MWAIT. If all other hyperthreads are inactive (or disabled or absent), all counts go to this one. Sum the counts from each hyperthread to obtain the full core-active count.
  cpu_clk_unhalted.one_thread_active (event=0x3c,period=25003,umask=2): Core crystal clock cycles when this thread is unhalted and the other thread is halted.
  cpu_clk_unhalted.pause (event=0xec,period=2000003,umask=0x40): CPU_CLK_UNHALTED.PAUSE.
  cpu_clk_unhalted.pause_inst (event=0xec,cmask=1,edge=1,period=2000003,umask=0x40): CPU_CLK_UNHALTED.PAUSE_INST.
  cpu_clk_unhalted.ref_distributed (event=0x3c,period=2000003,umask=8): Core crystal clock cycles distributed evenly between active hyperthreads (those in C0); a hyperthread becomes inactive on HLT or MWAIT, and if one thread is active in a core all counts are attributed to it. Sum across hyperthreads for the full core-active count.
  cpu_clk_unhalted.ref_tsc (event=0,period=2000003,umask=3): Unhalted reference cycles at TSC frequency, on fixed counter 2. Not affected by core frequency changes; increments at the fixed frequency also used for the Time Stamp Counter (TSC), so it can approximate elapsed time while the core was not halted. Note: on all current platforms the event stops counting during throttling (TM) duty-off periods when the processor is halted. The counter updates at a lower clock rate than the core clock, so after an overflow is cleared and the counter reset to less than MAX, the overflow status bit can appear sticky: the reset value is not clocked in immediately, the overflow bit flips high again and generates another PMI (if enabled), and software sees an interrupt with overflow bit 34 set while the counter value is below MAX. Software should ignore this case.
  cpu_clk_unhalted.ref_tsc (event=0,period=2000003,umask=3): Reference cycles when the core is not in a halt state (entered via HLT or MWAIT); counted on a dedicated fixed counter, leaving the eight programmable counters available for other events. Same throttling/overflow caveat as above.
  cpu_clk_unhalted.ref_tsc_p (event=0x3c,period=2000003,umask=1): The same reference-cycle count on a programmable general-purpose performance counter.
  cpu_clk_unhalted.ref_tsc_p (event=0x3c,period=2000003,umask=1): Reference cycles when the core is not halted, on a programmable counter, leaving the four (eight when Hyper-Threading is disabled) programmable counters available for other events. Same throttling/overflow caveat as above.
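A small sketch (not from the dump): because CPU_CLK_UNHALTED.REF_TSC ticks at the same fixed frequency as the TSC, unhalted time can be approximated as the count divided by the TSC frequency. The TSC frequency must be obtained separately (e.g. from the CPU model string or a calibration loop); 3.0 GHz is a made-up example value.

    ref_tsc_count = 7_500_000_000   # CPU_CLK_UNHALTED.REF_TSC reading (example)
    core_cycles   = 6_000_000_000   # CPU_CLK_UNHALTED.THREAD reading (example)
    tsc_hz        = 3_000_000_000   # assumed TSC frequency

    seconds = ref_tsc_count / tsc_hz
    print(f"~{seconds:.2f} s unhalted, "
          f"avg core frequency {core_cycles / seconds / 1e9:.2f} GHz")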
  cpu_clk_unhalted.thread (event=0x3c,period=2000003): Unhalted core clock cycles on fixed counter 1; the core enters the halt state when running HLT, and because frequency can change the ratio to time can change.
  cpu_clk_unhalted.thread (event=0x3c,period=2000003): Core cycles when the thread is not halted; a component in many key event ratios. Frequency may change due to Enhanced Intel SpeedStep Technology or TM2 transitions, so the ratio to time can vary; at constant frequency this approximates elapsed non-halted time. Counted on a dedicated fixed counter, leaving the eight programmable counters available.
  cpu_clk_unhalted.thread_p (event=0x3c,period=2000003): Unhalted core clock cycles on a programmable general-purpose performance counter.
  cpu_clk_unhalted.thread_p (event=0x3c,period=2000003): Architectural event counting thread cycles while the thread is not halted; frequency may change due to power or thermal throttling, so this event may have a changing ratio with regards to wall clock time.
  cycle_activity.cycles_l1d_miss (event=0xa3,cmask=8,period=1000003,umask=8): Cycles while an L1 cache miss demand load is outstanding.
  cycle_activity.cycles_l2_miss (event=0xa3,cmask=1,period=1000003,umask=1): Cycles while an L2 cache miss demand load is outstanding.
  cycle_activity.cycles_mem_any (event=0xa3,cmask=16,period=1000003,umask=0x10): Cycles while the memory subsystem has an outstanding load.
  cycle_activity.stalls_l1d_miss (event=0xa3,cmask=12,period=1000003,umask=0xc): Execution stalls while an L1 cache miss demand load is outstanding.
  cycle_activity.stalls_l2_miss (event=0xa3,cmask=5,period=1000003,umask=5): Execution stalls while an L2 cache miss demand load is outstanding.
  cycle_activity.stalls_total (event=0xa3,cmask=4,period=1000003,umask=4): Total execution stalls.
  exe_activity.1_ports_util (event=0xa6,period=2000003,umask=2): Cycles where a total of 1 uop executed on all ports and the Reservation Station (RS) was not empty.
  exe_activity.2_3_ports_util (event=0xa6,period=2000003,umask=0xc): Cycles where a total of 2 or 3 uops executed on all ports and the RS was not empty.
  exe_activity.2_ports_util (event=0xa6,period=2000003,umask=4): Cycles where a total of 2 uops executed on all ports and the RS was not empty.
  exe_activity.3_ports_util (event=0xa6,period=2000003,umask=8): Cycles where a total of 3 uops executed on all ports and the RS was not empty.
  exe_activity.4_ports_util (event=0xa6,period=2000003,umask=0x10): Cycles where a total of 4 uops executed on all ports and the RS was not empty.
  exe_activity.bound_on_loads (event=0xa6,cmask=5,period=2000003,umask=0x21): Execution stalls while the memory subsystem has an outstanding load.
  exe_activity.bound_on_stores (event=0xa6,cmask=2,period=1000003,umask=0x40): Cycles where the store buffer was full and no loads caused an execution stall.
  exe_activity.exe_bound_0_ports (event=0xa6,period=1000003,umask=0x80): Cycles with a total of 0 uops executed on all ports while the RS was not empty, the Store Buffer (SB) was not full, and there was no outstanding load.
  inst_decoded.decoders (event=0x75,period=2000003,umask=1): Number of instruction decoders utilized in a cycle when the MITE (legacy decode pipeline) fetches instructions.
  inst_retired.any (event=0xc0,period=2000003): Total instructions retired, on fixed counter 0. For multi-uop instructions, counts retirement of the last uop; continues counting during hardware interrupts, traps, and inside interrupt handlers (precise event).
  inst_retired.any (event=0xc0,period=2000003): X86 instructions retired, an architectural PerfMon event counted by a designated fixed counter, freeing up programmable counters; INST_RETIRED.ANY_P is the programmable-counter version (precise event).
  inst_retired.any_p (event=0xc0,period=2000003): Total instructions retired, on a programmable general-purpose performance counter; same counting semantics as inst_retired.any (precise event).
  inst_retired.any_p (event=0xc0,period=2000003): X86 instructions retired, architectural PerfMon event on a programmable counter (precise event).
  inst_retired.macro_fused (event=0xc0,period=2000003,umask=0x10): INST_RETIRED.MACRO_FUSED (precise event).
  inst_retired.nop (event=0xc0,period=2000003,umask=2): Retired NOP or ENDBR32/64 instructions (precise event).
  inst_retired.prec_dist (event=0,period=2000003,umask=1): A version of INST_RETIRED that allows a precise distribution of samples across retired instructions, using the Precise Distribution of Instructions Retired (PDIR++) feature to fix sampling bias; use on fixed counter 0 (precise event).
  inst_retired.rep_iteration (event=0xc0,period=2000003,umask=8): Iterations of retired REP string instructions such as MOVS, CMPS, and SCAS (each with byte, word, and doubleword versions); the REP prefix repeats architectural execution the number of times specified by the RCX register. Note the number of iterations is implementation-dependent (precise event).
  int_misc.clears_count (event=0xad,cmask=1,edge=1,period=500009,umask=1): Speculative clears due to any type of branch misprediction or machine clear.
  int_misc.clear_resteer_cycles (event=0xad,period=500009,umask=0x80): Cycles after recovery from a branch misprediction or machine clear until the first uop issues from the resteered path.
  int_misc.recovery_cycles (event=0xad,period=500009,umask=1): Core cycles the allocator was stalled due to recovery from an earlier branch misprediction or machine clear for this thread.
  int_misc.unknown_branch_cycles (event=0xad,period=1000003,umask=0x40,frontend=0x7): Bubble cycles of BAClear (unknown branch).
  int_misc.uop_dropping (event=0xad,period=1000003,umask=0x10): Estimated Top-down Microarchitecture Analysis (TMA) slots dropped due to non-front-end reasons.
  int_vec_retired.128bit (event=0xe7,period=1000003,umask=0x13): INT_VEC_RETIRED.128BIT.
  int_vec_retired.256bit (event=0xe7,period=1000003,umask=0xac): INT_VEC_RETIRED.256BIT.
  int_vec_retired.add_128 (event=0xe7,period=1000003,umask=3): Retired integer ADD/SUB (regular or horizontal) and SAD 128-bit vector instructions.
  int_vec_retired.add_256 (event=0xe7,period=1000003,umask=0xc): Retired integer ADD/SUB (regular or horizontal) and SAD 256-bit vector instructions.
  int_vec_retired.mul_256 (event=0xe7,period=1000003,umask=0x80): INT_VEC_RETIRED.MUL_256.
  int_vec_retired.shuffles (event=0xe7,period=1000003,umask=0x40): INT_VEC_RETIRED.SHUFFLES.
  int_vec_retired.vnni_128 (event=0xe7,period=1000003,umask=0x10): INT_VEC_RETIRED.VNNI_128.
  int_vec_retired.vnni_256 (event=0xe7,period=1000003,umask=0x20): INT_VEC_RETIRED.VNNI_256.
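Illustrative only: instructions-per-cycle from the counters above, with counts assumed gathered elsewhere (e.g. perf stat -e inst_retired.any,cpu_clk_unhalted.thread):

    inst_retired = 9_000_000   # INST_RETIRED.ANY        (example value)
    core_cycles  = 4_500_000   # CPU_CLK_UNHALTED.THREAD (example value)
    print(f"IPC = {inst_retired / core_cycles:.2f}")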
  ld_blocks.4k_alias (event=3,period=1000003,umask=4): Deprecated; refer to LD_BLOCKS.ADDRESS_ALIAS (precise event).
  ld_blocks.address_alias (event=3,period=1000003,umask=4): Retired loads that initially appeared store-forward blocked but were subsequently shown not to be, based on a 4K alias check (precise event).
  ld_blocks.address_alias (event=3,period=100003,umask=4): Times a load was blocked by false dependencies in the MOB due to partial compare on address.
  ld_blocks.data_unknown (event=3,period=1000003,umask=1): Retired loads blocked because the address exactly matches an older store whose data is not ready (precise event).
  ld_blocks.no_sr (event=3,period=100003,umask=0x88): Times split load operations were temporarily blocked because all resources for handling the split accesses were in use.
  ld_blocks.store_forward (event=3,period=100003,umask=0x82): Times store forwarding was prevented for a load, most commonly because the memory access (partially) overlaps a preceding uncompleted store. Note: see the table of not supported store forwards in the Optimization Guide.
  load_hit_prefetch.swpf (event=0x4c,period=100003,umask=1): Demand (not software-prefetch) load dispatches that hit an L1D fill buffer (FB) allocated for a software prefetch. Can also be incremented by some lock instructions, so it should only be used with profiling so the locks can be excluded by inspecting nearby instructions in the assembly.
  lsd.cycles_active (event=0xa8,cmask=1,period=2000003,umask=1): Cycles when at least one uop is delivered by the LSD (Loop Stream Detector) rather than the decoder.
  lsd.cycles_ok (event=0xa8,cmask=6,period=2000003,umask=1): Cycles when the optimal number of uops is delivered by the LSD.
  lsd.uops (event=0xa8,period=2000003,umask=1): Uops delivered to the back-end by the LSD.
  machine_clears.count (event=0xc3,cmask=1,edge=1,period=100003,umask=1): Machine clears (nukes) of any type.
  machine_clears.disambiguation (event=0xc3,period=20003,umask=8): Machine clears due to memory ordering, where an internal load passes an older store within the same CPU.
  machine_clears.mrn_nuke (event=0xc3,period=1000003,umask=0x80): Machine clears due to memory renaming.
  machine_clears.page_fault (event=0xc3,period=20003,umask=0x20): Machine clears due to a page fault, counting both I-side and D-side (loads/stores) faults; a page fault occurs when the page is not present or an access violation occurs.
  machine_clears.slow (event=0xc3,period=20003,umask=0x6f): Machine clears that flush the pipeline and restart the machine via microcode, due to SMC, MEMORY_ORDERING, FP_ASSISTS, PAGE_FAULT, DISAMBIGUATION, or FPC_VIRTUAL_TRAP.
  machine_clears.smc (event=0xc3,period=20003,umask=1): Machine clears due to the program modifying data (self-modifying code) within 1K of a recently fetched code page.
  machine_clears.smc (event=0xc3,period=100003,umask=4): Self-modifying code (SMC) detected, which causes a machine clear.
  misc2_retired.lfence (event=0xe0,period=400009,umask=0x20): LFENCE instructions retired.
  misc_retired.lbr_inserts (event=0xe4,period=1000003,umask=1): LBR entries recorded; requires LBRs enabled in IA32_LBR_CTL. PDIR on GP0 and NPEBS on all other GPs. Alias to LBR_INSERTS.ANY (precise event).
  misc_retired.lbr_inserts (event=0xcc,period=100003,umask=0x20): Increments when an entry is added to the Last Branch Record (LBR) array (or removed, for RETURNs in call-stack mode); requires LBR enable via the IA32_DEBUGCTL MSR and branch-type selection via MSR_LBR_SELECT.
  resource_stalls.sb (event=0xa2,period=100003,umask=8): Allocation stall cycles caused by the store buffer (SB) being full, blocking uop delivery from the front-end (not including draining from sync).
  resource_stalls.scoreboard (event=0xa2,period=100003,umask=2): Cycles the pipeline is stalled due to serializing operations.
  serialization.non_c01_ms_scb (event=0x75,period=200003,umask=2): Issue slots not consumed by the back-end due to a micro-sequencer (MS) scoreboard, which stalls the front-end from issuing from the UROM until a specified older uop retires; the most commonly executed instruction with an MS scoreboard is PAUSE.
  topdown.backend_bound_slots (event=0xa4,period=10000003,umask=2): TMA slots where no uops were issued from front-end to back-end due to lack of back-end resources.
  topdown.bad_spec_slots (event=0xa4,period=10000003,umask=4): TMA slots wasted due to incorrect speculation, covering all types of control-flow or data-related mis-speculation.
  topdown.br_mispredict_slots (event=0xa4,period=10000003,umask=8): TMA slots wasted due to branch mispredictions (any type): speculative operations issued but not retired, plus out-of-order engine recovery past the mispredict.
  topdown.memory_bound_slots (event=0xa4,period=10000003,umask=0x10): TOPDOWN.MEMORY_BOUND_SLOTS.
  topdown.slots (event=0,period=10000003,umask=4): TMA slots available for an unhalted logical processor; an architectural event counted on a designated fixed counter (fixed counter 3). Increments by the machine width of the narrowest pipeline, distributed among unhalted logical processors (hyperthreads) sharing the physical core; software can use it as the denominator for top-level TMA metrics.
  topdown.slots_p (event=0xa4,period=10000003,umask=1): The same slot count on a general-purpose counter.
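A level-1 Top-down (TMA) sketch using the slot counters above; the values are placeholders assumed collected over the same interval. Each top-level bucket is its slot count divided by TOPDOWN.SLOTS; front-end bound falls out as the remainder here, and the retiring slots are taken from UOPS_RETIRED.SLOTS (listed further below) as an assumption about how they were gathered:

    slots          = 40_000_000   # TOPDOWN.SLOTS
    backend_bound  = 12_000_000   # TOPDOWN.BACKEND_BOUND_SLOTS
    bad_spec       =  3_000_000   # TOPDOWN.BAD_SPEC_SLOTS
    retiring_slots = 18_000_000   # e.g. UOPS_RETIRED.SLOTS

    frontend_bound = slots - backend_bound - bad_spec - retiring_slots
    for name, v in [("retiring", retiring_slots), ("bad speculation", bad_spec),
                    ("front-end bound", frontend_bound), ("back-end bound", backend_bound)]:
        print(f"{name:>16}: {v / slots:.1%}")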
  topdown_bad_speculation.all (event=0x73,period=1000003): Total issue slots not consumed by the back-end because allocation is stalled due to a mispredicted jump or a machine clear. Only slots wasted due to fast nukes (such as memory-ordering nukes) are counted; other nukes are not accounted for. Counts all issue slots blocked during the recovery window, including relevant microcode flows and while uops are not yet available in the instruction queue (IQ), even if an FE_bound event occurs during this period. Also includes issue slots that were consumed by the back-end but thrown away because they were younger than the mispredict or machine clear.
  topdown_bad_speculation.fastnuke (event=0x73,period=1000003,umask=2): Issue slots per cycle not consumed by the back-end due to fast nukes such as memory ordering and memory disambiguation machine clears.
  topdown_bad_speculation.machine_clears (event=0x73,period=1000003,umask=3): Total issue slots not consumed because allocation is stalled due to a machine clear (nuke) of any kind, including memory ordering and memory disambiguation.
  topdown_bad_speculation.mispredict (event=0x73,period=1000003,umask=4): Issue slots per cycle not consumed by the back-end due to branch mispredicts.
  topdown_bad_speculation.nuke (event=0x73,period=1000003,umask=1): Issue slots per cycle not consumed by the back-end due to a machine clear (nuke).
  topdown_be_bound.all (event=0x74,period=1000003): Total issue slots per cycle not consumed by the back-end due to back-end stalls.
  topdown_be_bound.alloc_restrictions (event=0x74,period=1000003,umask=1): Issue slots not consumed due to certain allocation restrictions.
  topdown_be_bound.mem_scheduler (event=0x74,period=1000003,umask=2): Issue slots not consumed due to memory reservation stalls in which a scheduler cannot accept uops.
  topdown_be_bound.non_mem_scheduler (event=0x74,period=1000003,umask=8): Issue slots not consumed due to IEC or FPC RAT stalls; these can be FIQ or IEC reservation stalls in which the integer, floating point, or SIMD scheduler cannot accept uops.
  topdown_be_bound.register (event=0x74,period=1000003,umask=0x20): Issue slots not consumed because the physical register file cannot accept an entry (marble stalls).
  topdown_be_bound.reorder_buffer (event=0x74,period=1000003,umask=0x40): Issue slots not consumed because the reorder buffer is full (ROB stalls).
  topdown_be_bound.serialization (event=0x74,period=1000003,umask=0x10): Issue slots not consumed due to scoreboards from the instruction queue (IQ), jump execution unit (JEU), or microcode sequencer (MS).
  topdown_fe_bound.all (event=0x71,period=1000003): Total issue slots per cycle not consumed by the back-end due to front-end stalls.
  topdown_fe_bound.branch_detect (event=0x71,period=1000003,umask=2): Issue slots not delivered by the front-end due to BACLEARS, which occur when the Branch Target Buffer (BTB) prediction, or lack thereof, is corrected by a later branch predictor in the front-end. Includes BACLEARS due to all branch types: conditional and unconditional jumps, returns, and indirect branches.
  topdown_fe_bound.branch_resteer (event=0x71,period=1000003,umask=0x40): Issue slots not delivered due to BTCLEARS, which occur when the BTB predicts a taken branch.
  topdown_fe_bound.cisc (event=0x71,period=1000003,umask=1): Issue slots not delivered due to the microcode sequencer (MS).
  topdown_fe_bound.decode (event=0x71,period=1000003,umask=8): Issue slots not delivered due to decode stalls.
  topdown_fe_bound.frontend_bandwidth (event=0x71,period=1000003,umask=0x8d): Issue slots not delivered due to front-end bandwidth restrictions: decode, predecode, cisc, and other limitations.
  topdown_fe_bound.frontend_latency (event=0x71,period=1000003,umask=0x72): Issue slots not delivered due to latency-related stalls, including BACLEARs, BTCLEARs, ITLB misses, and ICache misses.
  topdown_fe_bound.itlb (event=0x71,period=1000003,umask=0x10): Issue slots not delivered due to Instruction Table Lookaside Buffer (ITLB) misses.
  topdown_fe_bound.other (event=0x71,period=1000003,umask=0x80): Issue slots not delivered due to other common front-end stalls not otherwise categorized.
  topdown_fe_bound.predecode (event=0x71,period=1000003,umask=4): Issue slots not delivered due to wrong predecodes.
  topdown_retiring.all (event=0xc2,period=1000003): Total consumed retirement slots (precise event).
  uops_decoded.dec0_uops (event=0x76,period=1000003,umask=1): UOPS_DECODED.DEC0_UOPS.
  uops_dispatched.port_0 (event=0xb2,period=2000003,umask=1): Uops dispatched to execution port 0.
  uops_dispatched.port_1 (event=0xb2,period=2000003,umask=2): Uops dispatched to execution port 1.
  uops_dispatched.port_2_3_10 (event=0xb2,period=2000003,umask=4): Uops dispatched to execution ports 2, 3 and 10.
  uops_dispatched.port_4_9 (event=0xb2,period=2000003,umask=0x10): Uops dispatched to execution ports 4 and 9.
  uops_dispatched.port_5_11 (event=0xb2,period=2000003,umask=0x20): Uops dispatched to execution ports 5 and 11.
  uops_dispatched.port_6 (event=0xb2,period=2000003,umask=0x40): Uops dispatched to execution port 6.
  uops_dispatched.port_7_8 (event=0xb2,period=2000003,umask=0x80): Uops dispatched to execution ports 7 and 8.
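Sketch with placeholder values: the relative load on each execution-port group from the UOPS_DISPATCHED.PORT_* counters above, to spot a saturated port group:

    ports = {
        "port_0": 5_000_000, "port_1": 4_800_000, "port_2_3_10": 7_200_000,
        "port_4_9": 2_100_000, "port_5_11": 4_000_000, "port_6": 5_100_000,
        "port_7_8": 1_900_000,
    }
    total = sum(ports.values())
    for name, n in sorted(ports.items(), key=lambda kv: -kv[1]):
        print(f"{name:<12} {n / total:.1%} of dispatched uops")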
  uops_executed.core_cycles_ge_1 (event=0xb1,cmask=1,period=2000003,umask=2): Cycles when at least 1 micro-op is executed from any thread on the physical core.
  uops_executed.core_cycles_ge_2 (event=0xb1,cmask=2,period=2000003,umask=2): Cycles when at least 2 micro-ops are executed from any thread on the physical core.
  uops_executed.core_cycles_ge_3 (event=0xb1,cmask=3,period=2000003,umask=2): Cycles when at least 3 micro-ops are executed from any thread on the physical core.
  uops_executed.core_cycles_ge_4 (event=0xb1,cmask=4,period=2000003,umask=2): Cycles when at least 4 micro-ops are executed from any thread on the physical core.
  uops_executed.cycles_ge_1 (event=0xb1,cmask=1,period=2000003,umask=1): Cycles where at least 1 uop was executed per thread.
  uops_executed.cycles_ge_2 (event=0xb1,cmask=2,period=2000003,umask=1): Cycles where at least 2 uops were executed per thread.
  uops_executed.cycles_ge_3 (event=0xb1,cmask=3,period=2000003,umask=1): Cycles where at least 3 uops were executed per thread.
  uops_executed.cycles_ge_4 (event=0xb1,cmask=4,period=2000003,umask=1): Cycles where at least 4 uops were executed per thread.
  uops_executed.stalls (event=0xb1,cmask=1,inv=1,period=2000003,umask=1): Cycles during which no uops were dispatched from the Reservation Station (RS) for this thread.
  uops_executed.stall_cycles (event=0xb1,cmask=1,inv=1,period=2000003,umask=1): Deprecated; refer to UOPS_EXECUTED.STALLS.
  uops_executed.thread (event=0xb1,period=2000003,umask=1): Uops to be executed per thread each cycle.
  uops_executed.x87 (event=0xb1,period=2000003,umask=0x10): x87 uops executed.
  uops_issued.any (event=0xe,period=200003): Uops issued by the front end every cycle; when 4 uops are requested and only 2 delivered, the event counts 2. Correlates to the number of ROB entries: a uop taking 2 ROB slots counts as 2.
  uops_issued.any (event=0xae,period=2000003,umask=1): Uops that the Resource Allocation Table (RAT) issues to the Reservation Station (RS).
  uops_issued.cycles (event=0xae,cmask=1,period=2000003,umask=1): UOPS_ISSUED.CYCLES.
  uops_retired.all (event=0xc2,period=2000003): Total uops retired (precise event).
  uops_retired.cycles (event=0xc2,cmask=1,period=1000003,umask=2): Cycles where at least one uop has retired.
  uops_retired.heavy (event=0xc2,period=2000003,umask=1): Retired uops except the last uop of each instruction; an instruction that decodes into fewer than two uops does not contribute to the count.
  uops_retired.idiv (event=0xc2,period=2000003,umask=0x10): Integer divide uops retired (precise event).
  uops_retired.ms (event=0xc2,period=2000003,umask=1): Uops from complex flows issued by the Microcode Sequencer (MS), including flows due to complex instructions, faults, assists, and inserted flows (precise event).
  uops_retired.ms (event=0xc2,period=2000003,umask=4,frontend=0x8): UOPS_RETIRED.MS.
  uops_retired.slots (event=0xc2,period=2000003,umask=2): Retirement slots used each cycle.
  uops_retired.stalls (event=0xc2,cmask=1,inv=1,period=1000003,umask=2): Cycles without actually retired uops.
  uops_retired.stall_cycles (event=0xc2,cmask=1,inv=1,period=1000003,umask=2): Deprecated; refer to UOPS_RETIRED.STALLS.
  uops_retired.x87 (event=0xc2,period=2000003,umask=2): x87 uops retired, including those in MS flows (precise event).

[uncore interconnect] (unit: uncore_arb)
  unc_arb_coh_trk_requests.all (event=0x84,umask=1): Requests allocated in the Coherency Tracker.
  unc_arb_dat_occupancy.all (event=0x85,umask=1): Each cycle, counts the number of coherent requests at the memory controller issued by any core.
  unc_arb_dat_occupancy.rd (event=0x85,umask=2): Each cycle, counts coherent reads pending data return from the memory controller, issued by any core.
  unc_arb_dat_requests.rd (event=0x81,umask=2): Deprecated; refer to UNC_ARB_REQ_TRK_REQUEST.DRD.
  unc_arb_ifa_occupancy.all (event=0x85,umask=1): Deprecated; refer to UNC_ARB_DAT_OCCUPANCY.ALL.
  unc_arb_req_trk_occupancy.drd (event=0x80,umask=2): Each cycle, counts valid coherent data-read entries; an entry is valid from allocation until deallocation. Doesn't include prefetches. Alias to UNC_ARB_TRK_OCCUPANCY.RD.
  unc_arb_req_trk_request.drd (event=0x81,umask=2): All coherent data-read entries; doesn't include prefetches. Alias to UNC_ARB_TRK_REQUESTS.RD.
  unc_arb_trk_occupancy.all (event=0x80,umask=1): Each cycle, counts all outgoing valid entries in ReqTrk (valid from allocation until deallocation); accounts for coherent and non-coherent traffic.
  unc_arb_trk_occupancy.rd (event=0x80,umask=2): Each cycle, counts valid coherent data-read entries; doesn't include prefetches. Alias to UNC_ARB_REQ_TRK_OCCUPANCY.DRD.
  unc_arb_trk_requests.all (event=0x81,umask=1): Coherent and non-coherent requests initiated by IA cores, processor graphics units, or LLC.
  unc_arb_trk_requests.rd (event=0x81,umask=2): All coherent data-read entries; doesn't include prefetches. Alias to UNC_ARB_REQ_TRK_REQUEST.DRD.
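Sketch with placeholder values: by Little's law, the average request latency in the arbiter is occupancy divided by request count, in uncore clocks; dividing by the uncore frequency (which must be obtained separately, 2.0 GHz being an assumed value here) gives time:

    trk_occupancy = 90_000_000     # UNC_ARB_TRK_OCCUPANCY.RD summed over the run
    trk_requests  =  1_200_000     # UNC_ARB_TRK_REQUESTS.RD
    uncore_hz     = 2_000_000_000  # assumed uncore clock

    lat_clocks = trk_occupancy / trk_requests
    print(f"avg read latency ~{lat_clocks:.0f} uncore clocks "
          f"(~{1e9 * lat_clocks / uncore_hz:.0f} ns)")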
[uncore memory]
  unc_mc0_rdcas_count_freerun (unit: uncore_imc_free_running_0) (event=0xff,umask=0x20): Every 64B read request entering Memory Controller 0 to DRAM (sum of all channels).
  unc_mc0_wrcas_count_freerun (unit: uncore_imc_free_running_0) (event=0xff,umask=0x30): Every 64B write request entering Memory Controller 0 to DRAM (sum of all channels). Each write request counts as a new request, but same-cache-line write requests (both full and partial) are combined into a single 64-byte data transfer to DRAM.
  unc_mc1_rdcas_count_freerun (unit: uncore_imc_free_running_1) (event=0xff,umask=0x20): Every 64B read request entering Memory Controller 1 to DRAM (sum of all channels).
  unc_mc1_wrcas_count_freerun (unit: uncore_imc_free_running_1) (event=0xff,umask=0x30): Every 64B write request entering Memory Controller 1 to DRAM (sum of all channels), with the same write-combining behavior as Memory Controller 0.
  unc_m_act_count_rd (event=0x24): ACT command for a read request sent to DRAM.
  unc_m_act_count_total (event=0x26): ACT command sent to DRAM.
  unc_m_act_count_wr (event=0x25): ACT command for a write request sent to DRAM.
  unc_m_cas_count_rd (event=0x22): Read CAS command sent to DRAM.
  unc_m_cas_count_wr (event=0x23): Write CAS command sent to DRAM.
  unc_m_clockticks (event=1): Number of clocks.
  unc_m_dram_page_empty_rd (event=0x1d): Incoming read request, page status is Page Empty.
  unc_m_dram_page_empty_wr (event=0x20): Incoming write request, page status is Page Empty.
  unc_m_dram_page_hit_rd (event=0x1c): Incoming read request, page status is Page Hit.
  unc_m_dram_page_hit_wr (event=0x1f): Incoming write request, page status is Page Hit.
  unc_m_dram_page_miss_rd (event=0x1e): Incoming read request, page status is Page Miss.
  unc_m_dram_page_miss_wr (event=0x21): Incoming write request, page status is Page Miss.
  unc_m_dram_thermal_hot (event=0x19): Any rank at Hot state.
  unc_m_dram_thermal_warm (event=0x1a): Any rank at Warm state.
  unc_m_prefetch_rd (event=0xa): Incoming read prefetch request from IA.
  unc_m_pre_count_idle (event=0x28): PRE command sent to DRAM due to page table idle timer expiration.
  unc_m_pre_count_page_miss (event=0x27): PRE command sent to DRAM for a read/write request.
  unc_m_vc0_requests_rd (event=2): Incoming VC0 read request.
  unc_m_vc0_requests_wr (event=3): Incoming VC0 write request.
  unc_m_vc1_requests_rd (event=4): Incoming VC1 read request.
  unc_m_vc1_requests_wr (event=5): Incoming VC1 write request.

[uncore other]
  unc_clock.socket (unit: uncore_clock) (event=0xff): This 48-bit fixed counter counts the UCLK cycles.
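Sketch with placeholder values: each CAS / free-running count above represents a 64-byte DRAM transfer, so bandwidth is count * 64 divided by elapsed time:

    rd_cas  = 500_000_000   # e.g. UNC_M_CAS_COUNT_RD over the interval
    wr_cas  = 120_000_000   # e.g. UNC_M_CAS_COUNT_WR
    seconds = 10.0

    to_gib = lambda n: n * 64 / seconds / 2**30
    print(f"read {to_gib(rd_cas):.2f} GiB/s, write {to_gib(wr_cas):.2f} GiB/s")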
with a page walk for a demand loaddtlb_load_misses.walk_completedvirtual memoryCounts the number of page walks completed due to load DTLB misses to any page sizeevent=8,period=200003,umask=0xe00Counts the number of page walks completed due to loads (including SW prefetches) whose address translations missed in all Translation Lookaside Buffer (TLB) levels and were mapped to any page size. Includes page walks that page faultdtlb_load_misses.walk_completedvirtual memoryLoad miss in all TLB levels causes a page walk that completes. (All page sizes)event=0x12,period=100003,umask=0xe00Counts completed page walks  (all page sizes) caused by demand data loads. This implies it missed in the DTLB and further levels of TLB. The page walk can end with or without a faultdtlb_load_misses.walk_completed_1gvirtual memoryPage walks completed due to a demand data load to a 1G pageevent=0x12,period=100003,umask=800Counts completed page walks  (1G sizes) caused by demand data loads. This implies address translations missed in the DTLB and further levels of TLB. The page walk can end with or without a faultdtlb_load_misses.walk_completed_2m_4mvirtual memoryPage walks completed due to a demand data load to a 2M/4M pageevent=0x12,period=100003,umask=400Counts completed page walks  (2M/4M sizes) caused by demand data loads. This implies address translations missed in the DTLB and further levels of TLB. The page walk can end with or without a faultdtlb_load_misses.walk_completed_4kvirtual memoryPage walks completed due to a demand data load to a 4K pageevent=0x12,period=100003,umask=200Counts completed page walks  (4K sizes) caused by demand data loads. This implies address translations missed in the DTLB and further levels of TLB. The page walk can end with or without a faultdtlb_load_misses.walk_pendingvirtual memoryNumber of page walks outstanding for a demand load in the PMH each cycleevent=0x12,period=100003,umask=0x1000Counts the number of page walks outstanding for a demand load in the PMH (Page Miss Handler) each cycledtlb_store_misses.stlb_hitvirtual memoryStores that miss the DTLB and hit the STLBevent=0x13,period=100003,umask=0x2000Counts stores that miss the DTLB (Data TLB) and hit the STLB (2nd Level TLB)dtlb_store_misses.walk_activevirtual memoryCycles when at least one PMH is busy with a page walk for a storeevent=0x13,cmask=1,period=100003,umask=0x1000Counts cycles when at least one PMH (Page Miss Handler) is busy with a page walk for a storedtlb_store_misses.walk_completedvirtual memoryCounts the number of page walks completed due to store DTLB misses to any page sizeevent=0x49,period=2000003,umask=0xe00Counts the number of page walks completed due to stores whose address translations missed in all Translation Lookaside Buffer (TLB) levels and were mapped to any page size.  Includes page walks that page faultdtlb_store_misses.walk_completedvirtual memoryStore misses in all TLB levels causes a page walk that completes. (All page sizes)event=0x13,period=100003,umask=0xe00Counts completed page walks  (all page sizes) caused by demand data stores. This implies it missed in the DTLB and further levels of TLB. The page walk can end with or without a faultdtlb_store_misses.walk_completed_1gvirtual memoryPage walks completed due to a demand data store to a 1G pageevent=0x13,period=100003,umask=800Counts completed page walks  (1G sizes) caused by demand data stores. This implies address translations missed in the DTLB and further levels of TLB. 
The page walk can end with or without a faultdtlb_store_misses.walk_completed_2m_4mvirtual memoryPage walks completed due to a demand data store to a 2M/4M pageevent=0x13,period=100003,umask=400Counts completed page walks  (2M/4M sizes) caused by demand data stores. This implies address translations missed in the DTLB and further levels of TLB. The page walk can end with or without a faultdtlb_store_misses.walk_completed_4kvirtual memoryPage walks completed due to a demand data store to a 4K pageevent=0x13,period=100003,umask=200Counts completed page walks  (4K sizes) caused by demand data stores. This implies address translations missed in the DTLB and further levels of TLB. The page walk can end with or without a faultdtlb_store_misses.walk_pendingvirtual memoryNumber of page walks outstanding for a store in the PMH each cycleevent=0x13,period=100003,umask=0x1000Counts the number of page walks outstanding for a store in the PMH (Page Miss Handler) each cycleitlb_misses.miss_caused_walkvirtual memoryCounts the number of page walks initiated by a instruction fetch that missed the first and second level TLBsevent=0x85,period=1000003,umask=100itlb_misses.pde_cache_missvirtual memoryCounts the number of page walks due to an instruction fetch that miss the PDE (Page Directory Entry) cacheevent=0x85,period=2000003,umask=0x8000itlb_misses.stlb_hitvirtual memoryInstruction fetch requests that miss the ITLB and hit the STLBevent=0x11,period=100003,umask=0x2000Counts instruction fetch requests that miss the ITLB (Instruction TLB) and hit the STLB (Second-level TLB)itlb_misses.walk_activevirtual memoryCycles when at least one PMH is busy with a page walk for code (instruction fetch) requestevent=0x11,cmask=1,period=100003,umask=0x1000Counts cycles when at least one PMH (Page Miss Handler) is busy with a page walk for a code (instruction fetch) requestitlb_misses.walk_completedvirtual memoryCounts the number of page walks completed due to instruction fetch misses to any page sizeevent=0x85,period=200003,umask=0xe00Counts the number of page walks completed due to instruction fetches whose address translations missed in all Translation Lookaside Buffer (TLB) levels and were mapped to any page size.  Includes page walks that page faultitlb_misses.walk_completedvirtual memoryCode miss in all TLB levels causes a page walk that completes. (All page sizes)event=0x11,period=100003,umask=0xe00Counts completed page walks (all page sizes) caused by a code fetch. This implies it missed in the ITLB (Instruction TLB) and further levels of TLB. The page walk can end with or without a faultitlb_misses.walk_completed_2m_4mvirtual memoryCode miss in all TLB levels causes a page walk that completes. (2M/4M)event=0x11,period=100003,umask=400Counts completed page walks (2M/4M page sizes) caused by a code fetch. This implies it missed in the ITLB (Instruction TLB) and further levels of TLB. The page walk can end with or without a faultitlb_misses.walk_completed_4kvirtual memoryCode miss in all TLB levels causes a page walk that completes. (4K)event=0x11,period=100003,umask=200Counts completed page walks (4K page sizes) caused by a code fetch. This implies it missed in the ITLB (Instruction TLB) and further levels of TLB. 
itlb_misses.walk_pending  (event=0x11,period=100003,umask=0x10) -- Page walks outstanding for an outstanding code (instruction fetch) request in the PMH each cycle.
ld_head.dtlb_miss_at_ret  (event=0x5,period=1000003,umask=0x90) -- Cycles in which the head (oldest load) of the load buffer and retirement are both stalled due to a DTLB miss.

[branch]
bp_dyn_ind_pred  (event=0x8e) -- Dynamic indirect predictions: indirect branch prediction for a potential multi-target branch (speculative).
bp_de_redirect  (event=0x91) -- Decoder overrides existing branch prediction (speculative).
bp_l1_tlb_fetch_hit  (event=0x94) -- Instruction fetches that hit in the L1 ITLB.

[cache]
ic_fw32  (event=0x80) -- 32B fetch windows transferred from the IC pipe to the DE instruction decoder (includes non-cacheable and cacheable fill responses).
ic_fw32_miss  (event=0x81) -- 32B fetch windows that tried to read the L1 IC and missed in the full tag.
ic_cache_fill_l2  (event=0x82) -- 64-byte instruction cache lines fulfilled from the L2 cache.
ic_cache_fill_sys  (event=0x83) -- 64-byte instruction cache lines fulfilled from system memory or another cache.
bp_l1_tlb_miss_l2_hit  (event=0x84) -- Instruction fetches that miss in the L1 ITLB but hit in the L2 ITLB.
bp_l1_tlb_miss_l2_miss  (event=0x85) -- Instruction fetches that miss in both the L1 and L2 TLBs.
bp_snp_re_sync  (event=0x86) -- Pipeline restarts caused by invalidating probes that hit the instruction stream currently being executed; this happens when the active instruction stream is modified by another processor in an MP system, typically a highly unlikely event.
ic_fetch_stall.ic_stall_any  (event=0x87,umask=0x4) -- IC pipe stalled this clock cycle for any reason (nothing valid in pipe ICM1).
ic_fetch_stall.ic_stall_dq_empty  (event=0x87,umask=0x2) -- IC pipe stalled this clock cycle (including IC-to-OC fetches) due to DQ empty.
ic_fetch_stall.ic_stall_back_pressure  (event=0x87,umask=0x1) -- IC pipe stalled this clock cycle (including IC-to-OC fetches) due to back-pressure.
ic_cache_inval.l2_invalidating_probe  (event=0x8c,umask=0x2) -- IC lines invalidated by an L2 invalidating probe (external or LS); a non-SMC case is cross-modifying code, from the other thread of the core or another core.
ic_cache_inval.fill_invalidated  (event=0x8c,umask=0x1) -- IC lines invalidated by an overwriting fill response.
bp_tlb_rel  (event=0x99) -- ITLB reload requests.
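To read a few of these counters without syscall plumbing, the perf CLI accepts the event/umask pairs directly. A sketch wrapping perf stat in CSV mode (assumes the perf tool is installed and a core PMU named "cpu"; perf_stat and the name= labels are ours):

import subprocess

EVENTS = ["cpu/event=0x87,umask=0x4,name=ic_stall_any/",
          "cpu/event=0x87,umask=0x2,name=ic_stall_dq_empty/",
          "cpu/event=0x87,umask=0x1,name=ic_stall_back_pressure/"]

def perf_stat(events, cmd):
    # perf stat -x, writes CSV to stderr: count,unit,event-name,...
    r = subprocess.run(["perf", "stat", "-x", ",", "-e", ",".join(events),
                        "--"] + cmd, capture_output=True, text=True)
    counts = {}
    for line in r.stderr.splitlines():
        f = line.split(",")
        try:
            counts[f[2]] = int(f[0])
        except (ValueError, IndexError):
            pass  # skips comment lines and "<not counted>"
    return counts

print(perf_stat(EVENTS, ["sleep", "1"]))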
l2_request_g1 -- all L2 cache requests, breakdown 1 (common), event=0x60:
  .rd_blk_l          (umask=0x80) -- data cache reads, including hardware and software prefetch
  .rd_blk_x          (umask=0x40) -- data cache stores
  .ls_rd_blk_c_s     (umask=0x20) -- data cache shared reads
  .cacheable_ic_read (umask=0x10) -- instruction cache reads
  .change_to_x       (umask=0x8)  -- data cache state-change requests: change to writable, check L2 for current state
  .prefetch_l2_cmd   (umask=0x4)  -- PrefetchL2Cmd
  .l2_hw_pf          (umask=0x2)  -- L2 prefetcher: all prefetches accepted by the L2 pipeline, hit or miss (prefetch types and L2 hit/miss are broken out in a separate perfmon event)
  .group2            (umask=0x1)  -- miscellaneous events covered in more detail by l2_request_g2 (PMCx061)
  .all_no_prefetch   (umask=0xf9)
l2_request_g2 -- all L2 cache requests, breakdown 2 (rare), event=0x61:
  .group1               (umask=0x80) -- miscellaneous events covered in more detail by l2_request_g1 (PMCx060)
  .ls_rd_sized          (umask=0x40) -- data cache read sized
  .ls_rd_sized_nc       (umask=0x20) -- data cache read sized non-cacheable
  .ic_rd_sized          (umask=0x10) -- instruction cache read sized
  .ic_rd_sized_nc       (umask=0x8)  -- instruction cache read sized non-cacheable
  .smc_inval            (umask=0x4)  -- self-modifying code invalidates
  .bus_locks_originator (umask=0x2)  -- bus locks
  .bus_locks_responses  (umask=0x1)  -- bus lock responses
l2_latency.l2_cycles_waiting_on_fills  (event=0x62,umask=0x1) -- Total cycles spent waiting for L2 fills to complete from L3 or memory, divided by four. Counts for both threads; to calculate average latency, the number of fills from both threads must be used.
l2_wcb_req -- LS (Load/Store unit) to L2 WCB (Write Combining Buffer) requests, event=0x63: .wcb_write (umask=0x40) write requests; .wcb_close (umask=0x20) close requests; .zero_byte_store (umask=0x4) zero-byte store requests; .cl_zero (umask=0x1) cache-line zeroing requests.
l2_cache_req_stat -- core to L2 cacheable request access status (not including L2 prefetch), event=0x64:
  .ls_rd_blk_cs      (umask=0x80) -- data cache shared read hit in L2
  .ls_rd_blk_l_hit_x (umask=0x40) -- data cache read hit in L2
  .ls_rd_blk_l_hit_s (umask=0x20) -- data cache read hit on shared line in L2
  .ls_rd_blk_x       (umask=0x10) -- data cache store or state-change hit in L2
  .ls_rd_blk_c       (umask=0x8)  -- data cache request miss in L2 (all types)
  .ic_fill_hit_x     (umask=0x4)  -- instruction cache hit on modifiable line in L2
  .ic_fill_hit_s     (umask=0x2)  -- instruction cache hit on clean line in L2
  .ic_fill_miss      (umask=0x1)  -- instruction cache request miss in L2
  .ic_access_in_l2   (umask=0x7)  -- instruction cache requests in L2
  .ic_dc_miss_in_l2  (umask=0x9)  -- instruction cache and data cache request misses in L2 (all types)
  .ic_dc_hit_in_l2   (umask=0xf6) -- instruction cache and data cache request hits in L2 (all types)
l2_fill_pending.l2_fill_busy  (event=0x6d,umask=0x1) -- Cycles with one or more fill requests in flight from L2.
l2_pf_hit_l2  (event=0x70,umask=0xff) -- L2 prefetch hit in L2. Use l2_cache_hits_from_l2_hwpf instead.
l2_pf_miss_l2_hit_l3  (event=0x71,umask=0xff) -- L2 prefetches accepted by the L2 pipeline that miss the L2 cache and hit the L3.
l2_pf_miss_l2_l3  (event=0x72,umask=0xff) -- L2 prefetches accepted by the L2 pipeline that miss both the L2 and L3 caches.

[cache: amd_l3 PMU]
l3_request_g1.caching_l3_cache_accesses  (event=0x1,umask=0x80) -- Caching: L3 cache accesses.
l3_lookup_state.all_l3_req_typs  (event=0x4,umask=0xff) -- All L3 request types.
l3_comb_clstr_state.other_l3_miss_typs  (event=0x6,umask=0xfe) -- Other L3 miss request types.
l3_comb_clstr_state.request_miss  (event=0x6,umask=0x1) -- L3 cache misses.
xi_sys_fill_latency  (event=0x90) -- L3 cache miss latency: total cycles for all transactions divided by 16. Ignores SliceMask and ThreadMask.
xi_ccx_sdp_req1.all_l3_miss_req_typs  (event=0x9a,umask=0x3f) -- All L3 miss request types. Ignores SliceMask and ThreadMask.
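The umask breakdowns compose: ic_access_in_l2 (0x7) is the sum of the three ic_fill_* masks, so an instruction-side L2 miss ratio is ic_fill_miss / ic_access_in_l2. In perf's rXXXX raw syntax the config is (umask << 8) | event, as in this small sketch:

# L2 instruction-side miss ratio from l2_cache_req_stat (event 0x64):
miss  = (0x1 << 8) | 0x64          # .ic_fill_miss     -> r164
total = (0x7 << 8) | 0x64          # .ic_access_in_l2  -> r764
print(f"perf stat -e r{miss:x},r{total:x} -- <cmd>")
print("miss ratio = r164 / r764")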
[core]
ex_ret_instr  (event=0xc0) -- Retired instructions.
ex_ret_cops  (event=0xc1) -- Retired uOps; includes all processor activity (instructions, exceptions, interrupts, microcode assists, etc.). 0 to 4 events can be logged per cycle.
ex_ret_brn  (event=0xc2) -- Retired branch instructions: all architectural control-flow changes, including exceptions and interrupts.
ex_ret_brn_misp  (event=0xc3) -- Retired branch instructions, of any type, not correctly predicted; includes branches for which prediction is not attempted (far control transfers, exceptions and interrupts).
ex_ret_brn_tkn  (event=0xc4) -- Retired taken branches: all architectural control-flow changes, including exceptions and interrupts.
ex_ret_brn_tkn_misp  (event=0xc5) -- Retired taken branch instructions that were mispredicted.
ex_ret_brn_far  (event=0xc6) -- Retired far control transfers, including far call/jump/return, IRET, SYSCALL and SYSRET, plus exceptions and interrupts. Far control transfers are not subject to branch prediction.
ex_ret_brn_resync  (event=0xc7) -- Retired branch resyncs: pipeline restarts due to certain microcode assists and events such as writes to the active instruction stream. Each occurrence costs a restart penalty similar to a branch mispredict. Relatively rare.
ex_ret_near_ret  (event=0xc8) -- Retired near return instructions (RET or RET Iw).
ex_ret_near_ret_mispred  (event=0xc9) -- Retired near returns not correctly predicted by the return address predictor; each such mispredict costs the same penalty as a mispredicted conditional branch.
ex_ret_brn_ind_misp  (event=0xca) -- Retired indirect branch instructions mispredicted.
ex_ret_mmx_fp_instr -- MMX, SSE or x87 instructions retired, event=0xcb; the unit mask selects the instruction class. Each increment represents one complete instruction; since non-numeric instructions are included, not suitable for measuring MFLOPS:
  .sse_instr (umask=0x4) -- SSE instructions (SSE, SSE2, SSE3, SSSE3, SSE4A, SSE41, SSE42, AVX)
  .mmx_instr (umask=0x2) -- MMX instructions
  .x87_instr (umask=0x1) -- x87 instructions
ex_ret_cond  (event=0xd1) -- Retired conditional branch instructions.
ex_div_busy  (event=0xd3) -- Divider busy cycles.
ex_div_count  (event=0xd4) -- Divider op count.
ex_tagged_ibs_ops -- tagged IBS ops, event=0x1cf: .ibs_count_rollover (umask=0x4) ops that could not be tagged by IBS because a previously tagged op had not retired; .ibs_tagged_ops_ret (umask=0x2) IBS-tagged ops that retired; .ibs_tagged_ops (umask=0x1) ops tagged by IBS.
ex_ret_fus_brnch_inst  (event=0x1d0) -- Fused branch instructions retired per cycle; 0 to 3 events can be logged per cycle.
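ex_ret_brn and ex_ret_brn_misp pair naturally into a misprediction rate. A sketch (the CSV layout count,unit,name from perf stat -x, is assumed, and sleep 1 is a stand-in workload):

import subprocess

out = subprocess.run(["perf", "stat", "-x", ",",
                      "-e", "cpu/event=0xc2,name=brn/,cpu/event=0xc3,name=misp/",
                      "--", "sleep", "1"],
                     capture_output=True, text=True).stderr
c = {f[2]: int(f[0]) for f in (l.split(",") for l in out.splitlines())
     if len(f) > 2 and f[0].isdigit()}
print(f"branch mispredict rate: {c['misp'] / c['brn']:.2%}")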
[data fabric: amd_df PMU]
remote_outbound_data_controller_0..3  (event=0x7c7 / 0x807 / 0x847 / 0x887, umask=0x2) -- Remote link controller outbound packet types: data (32B), for remote link controllers 0 through 3.
dram_channel_data_controller_0..7  (event=0x7 / 0x47 / 0x87 / 0xc7 / 0x107 / 0x147 / 0x187 / 0x1c7, umask=0x38) -- DRAM channel controller request types: requests with data (64B), for DRAM channel controllers 0 through 7.
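Each dram_channel_data_controller_* count is one 64-byte request, so channel counts times 64 over wall time approximate DRAM bandwidth. A sketch assuming an amd_df uncore PMU exposed under that name (system-wide counting, so -a and usually root are required; only the first four channels are shown):

import subprocess

events = ",".join(f"amd_df/event={e:#x},umask=0x38,name=dram_ch{i}/"
                  for i, e in enumerate([0x07, 0x47, 0x87, 0xc7]))
t = 1.0
out = subprocess.run(["perf", "stat", "-a", "-x", ",", "-e", events,
                      "--", "sleep", str(t)],
                     capture_output=True, text=True).stderr
total = sum(int(l.split(",")[0]) for l in out.splitlines()
            if l.split(",")[0].isdigit())
print(f"~{total * 64 / t / 1e9:.2f} GB/s across 4 DRAM channels")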
[floating point]
fpu_pipe_assignment -- uOps and dual-pipe uOps dispatched to each of the four FPU execution pipelines, event=0x0. Reflects how busy the FPU pipelines are and may be used for workload characterization. Includes all operations performed by x87, MMX and SSE instructions, including moves. Each increment represents a one-cycle dispatch event. Speculative; since non-numeric operations are included, not suitable for measuring MFLOPS.
  .dual   (umask=0xf0) -- multi-pipe uOps assigned to all pipes
  .dual3  (umask=0x80) -- multi-pipe uOps assigned to pipe 3
  .dual2  (umask=0x40) -- multi-pipe uOps assigned to pipe 2
  .dual1  (umask=0x20) -- multi-pipe uOps assigned to pipe 1
  .dual0  (umask=0x10) -- multi-pipe uOps assigned to pipe 0
  .total  (umask=0xf)  -- all uOps assigned to all FPU pipes
  .total3 (umask=0x8)  -- uOps assigned to pipe 3
  .total2 (umask=0x4)  -- uOps assigned to pipe 2
  .total1 (umask=0x2)  -- uOps assigned to pipe 1
  .total0 (umask=0x1)  -- uOps assigned to pipe 0
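Since every fpu_pipe_assignment.total* increment is a one-cycle dispatch event, dividing a pipe's count by elapsed core cycles gives per-pipe occupancy. A sketch (the inline python3 -c workload and the fp_pipe* labels are ours):

import subprocess

evs = ",".join(f"cpu/event=0x0,umask={1 << i:#x},name=fp_pipe{i}/"
               for i in range(4))
out = subprocess.run(["perf", "stat", "-x", ",", "-e", "cycles," + evs,
                      "--", "python3", "-c",
                      "sum(x * 0.5 for x in range(10**7))"],
                     capture_output=True, text=True).stderr
c = {f[2]: int(f[0]) for f in (l.split(",") for l in out.splitlines())
     if len(f) > 2 and f[0].isdigit()}
for i in range(4):
    print(f"pipe {i}: {c[f'fp_pipe{i}'] / c['cycles']:.1%} of cycles")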
fp_sched_empty  (event=0x1) -- Cycles in which the FPU scheduler is empty (speculative). Note that some ops, such as FP loads, bypass the scheduler.
fp_retx87_fp_ops -- retired x87 floating-point ops, event=0x2; 0 to 8 events can be logged per cycle: .all (umask=0x7) all ops; .div_sqr_r_ops (umask=0x4) divide and square-root ops; .mul_ops (umask=0x2) multiply ops; .add_sub_ops (umask=0x1) add/subtract ops.
fp_ret_sse_avx_ops -- retired SSE/AVX FLOPS (retire-based, event=0x3); 0 to 64 events can be logged per cycle, and this event can count above 15:
  .all                (umask=0xff) -- all FLOPS
  .dp_mult_add_flops  (umask=0x80) -- double-precision multiply-add FLOPS (multiply-add counts as 2 FLOPS)
  .dp_div_flops       (umask=0x40) -- double-precision divide/square-root FLOPS
  .dp_mult_flops      (umask=0x20) -- double-precision multiply FLOPS
  .dp_add_sub_flops   (umask=0x10) -- double-precision add/subtract FLOPS
  .sp_mult_add_flops  (umask=0x8)  -- single-precision multiply-add FLOPS (multiply-add counts as 2 FLOPS)
  .sp_div_flops       (umask=0x4)  -- single-precision divide/square-root FLOPS
  .sp_mult_flops      (umask=0x2)  -- single-precision multiply FLOPS
  .sp_add_sub_flops   (umask=0x1)  -- single-precision add/subtract FLOPS
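fp_ret_sse_avx_ops.all divided by elapsed time gives an achieved FLOP rate. A rough sketch (elapsed time includes perf startup, so treat the figure as a lower bound; on later cores this event needs the MergeEvent pairing noted further down this table):

import subprocess, time

start = time.time()
out = subprocess.run(["perf", "stat", "-x", ",",
                      "-e", "cpu/event=0x3,umask=0xff,name=flops/",
                      "--", "python3", "-c",
                      "import math; [math.sqrt(i) for i in range(10**6)]"],
                     capture_output=True, text=True).stderr
elapsed = time.time() - start
flops = next(int(l.split(",")[0]) for l in out.splitlines()
             if l.split(",")[0].isdigit())
print(f"{flops / elapsed / 1e9:.3f} GFLOPS")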
fp_num_mov_elim_scal_op -- dispatch-based speculative event (event=0x4), useful for measuring the effectiveness of the move-elimination and scalar code optimization schemes: .optimized (umask=0x8) scalar ops optimized; .opt_potential (umask=0x4) ops that are candidates for optimization (Z-bit either set or pass); .sse_mov_ops_elim (umask=0x2) SSE move ops eliminated; .sse_mov_ops (umask=0x1) SSE move ops.
fp_retired_ser_ops -- serializing ops retired, event=0x5: .x87_ctrl_ret (umask=0x8) x87 control word mispredict traps due to mispredictions in RC or PC, or changes in mask bits; .x87_bot_ret (umask=0x4) x87 bottom-executing uOps retired; .sse_ctrl_ret (umask=0x2) SSE control word mispredict traps due to mispredictions in RC, FTZ or DAZ, or changes in mask bits; .sse_bot_ret (umask=0x1) SSE bottom-executing uOps retired.

[memory]
ls_locks.bus_lock  (event=0x25,umask=0x1) -- Bus lock when a locked operation crosses a cache boundary or is done on an uncacheable memory type.
ls_dispatch -- operations dispatched to the LS unit (unit masks are additive), event=0x29: .ld_st_dispatch (umask=0x4) load-op-stores; .store_dispatch (umask=0x2) stores; .ld_dispatch (umask=0x1) loads.
ls_stlf  (event=0x35) -- Store-to-load-forwarding (STLF) hits.
ls_dc_accesses  (event=0x40) -- Data cache accesses for load and store references; may include certain (generally rare) microcode scratchpad accesses. Each increment represents an eight-byte access, though an instruction may access only a portion of it. Speculative.
ls_mab_alloc -- LS MAB allocations by type, event=0x41: .dc_prefetcher (umask=0x8); .stores (umask=0x2); .loads (umask=0x1).
ls_l1_d_tlb_miss -- L1 DTLB misses or reloads, event=0x45:
  .all  (umask=0xff) -- all page sizes
  .tlb_reload_1g_l2_miss / _2m_l2_miss / _32k_l2_miss / _4k_l2_miss  (umask=0x80 / 0x40 / 0x20 / 0x10) -- L1 DTLB miss of a 1G / 2M / 32K / 4K page
  .tlb_reload_1g_l2_hit / _2m_l2_hit / _32k_l2_hit / _4k_l2_hit  (umask=0x8 / 0x4 / 0x2 / 0x1) -- L1 DTLB reload of a 1G / 2M / 32K / 4K page
ls_tablewalker -- total page table walks, event=0x46: .iside (umask=0xc) I-side; .ic_type1 (umask=0x8); .ic_type0 (umask=0x4); .dside (umask=0x3) D-side; .dc_type1 (umask=0x2); .dc_type0 (umask=0x1).
ls_misal_accesses  (event=0x47) -- Misaligned loads.
ls_pref_instr_disp -- software prefetch instructions dispatched, event=0x4b: .prefetch_nta (umask=0x4) PREFETCHNTA; .store_prefetch_w (umask=0x2) 3DNow PREFETCHW; .load_prefetch_w (umask=0x1) Prefetch, Prefetch_T0_T1_T2.
ls_inef_sw_pref -- software prefetches that did not fetch data outside the processor core, event=0x52: .mab_mch_cnt (umask=0x2) PREFETCH matched an already-allocated miss request buffer; .data_pipe_sw_pf_dc_hit (umask=0x1) PREFETCH saw a DC hit.
ls_not_halted_cyc  (event=0x76) -- Cycles not in halt.

[other]
ic_oc_mode_switch  (event=0x28a) -- OC mode switches: .oc_ic_mode_switch (umask=0x2) OC to IC; .ic_oc_mode_switch (umask=0x1) IC to OC.
de_dis_dispatch_token_stalls0 -- cycles where a dispatch group is valid but does not get dispatched due to a token stall, event=0xaf:
  .retire_token_stall  (umask=0x40) -- RETIRE tokens unavailable
  .agsq_token_stall    (umask=0x20) -- AGSQ tokens unavailable
  .alu_token_stall     (umask=0x10) -- ALU tokens unavailable
  .alsq3_0_token_stall (umask=0x8)  -- ALSQ 3_0 tokens unavailable
  .alsq3_token_stall   (umask=0x4)  -- ALSQ 3 tokens unavailable
  .alsq2_token_stall   (umask=0x2)  -- ALSQ 2 tokens unavailable
  .alsq1_token_stall   (umask=0x1)  -- ALSQ 1 tokens unavailable

[recommended]
all_dc_accesses  (event=0x29,umask=0x7) -- All L1 data cache accesses.
l2_cache_accesses_from_ic_misses  (event=0x60,umask=0x10) -- L2 accesses from L1 instruction cache misses (including prefetch).
l2_cache_accesses_from_dc_misses  (event=0x60,umask=0xc8) -- L2 accesses from L1 data cache misses (including prefetch).
l2_cache_misses_from_ic_miss  (event=0x64,umask=0x1) -- L2 misses from L1 instruction cache misses.
l2_cache_misses_from_dc_misses  (event=0x64,umask=0x8) -- L2 misses from L1 data cache misses.
l2_cache_hits_from_ic_misses  (event=0x64,umask=0x6) -- L2 hits from L1 instruction cache misses.
l2_cache_hits_from_dc_misses  (event=0x64,umask=0x70) -- L2 hits from L1 data cache misses.
l2_cache_hits_from_l2_hwpf  (event=0x70,umask=0xff) -- L2 hits from the L2 hardware prefetcher.
l3_accesses  (event=0x4,umask=0xff) -- L3 accesses.
l3_misses  (event=0x4,umask=0x1) -- L3 misses (includes Chg2X).
l2_itlb_misses  (event=0x85,umask=0x7) -- L2 ITLB misses and instruction page walks.
l1_dtlb_misses  (event=0x45,umask=0xff) -- L1 DTLB misses.
l2_dtlb_misses  (event=0x45,umask=0xf0) -- L2 DTLB misses and data page walks.
all_tlbs_flushed  (event=0x78,umask=0xdf) -- All TLBs flushed.
uops_dispatched  (event=0xaa,umask=0x3) -- Micro-ops dispatched.
sse_avx_stalls  (event=0xe,umask=0xe) -- Mixed SSE/AVX stalls.
uops_retired  (event=0xc1) -- Micro-ops retired.
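The recommended events chain into a simple data-side cache funnel: L1D miss rate = l2_cache_accesses_from_dc_misses / all_dc_accesses, and the L2 miss rate of those misses = l2_cache_misses_from_dc_misses / l2_cache_accesses_from_dc_misses. A sketch assuming perf resolves these symbolic names on this CPU:

import subprocess

EVS = ["all_dc_accesses", "l2_cache_accesses_from_dc_misses",
       "l2_cache_misses_from_dc_misses"]
out = subprocess.run(["perf", "stat", "-x", ",", "-e", ",".join(EVS),
                      "--", "sleep", "1"],
                     capture_output=True, text=True).stderr
c = {f[2]: int(f[0]) for f in (l.split(",") for l in out.splitlines())
     if len(f) > 2 and f[0].isdigit()}
l1_miss = c["l2_cache_accesses_from_dc_misses"] / c["all_dc_accesses"]
l2_miss = (c["l2_cache_misses_from_dc_misses"]
           / c["l2_cache_accesses_from_dc_misses"])
print(f"L1D miss rate ~{l1_miss:.2%}; of those, L2 misses ~{l2_miss:.2%}")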
[branch]
bp_l1_btb_correct  (event=0x8a) -- L1 branch prediction overrides existing prediction (speculative).
bp_l2_btb_correct  (event=0x8b) -- L2 branch prediction overrides existing prediction (speculative).
bp_l1_tlb_fetch_hit -- instruction fetches that hit in the L1 ITLB, event=0x94: all (umask=0xff); .if1g (umask=0x4) fetches to a 1GB page; .if2m (umask=0x2) 2MB page; .if4k (umask=0x1) 4KB page.
bp_tlb_rel  (event=0x99) -- ITLB reload requests.

[cache]
bp_l1_tlb_miss_l2_tlb_miss -- instruction fetches that miss in both the L1 and L2 TLBs, event=0x85: all (umask=0xff); .if1g (umask=0x4) fetches to a 1GB page; .if2m (umask=0x2) 2MB page; .if4k (umask=0x1) 4KB page.
ic_oc_mode_switch  (event=0x28a) -- OC mode switches: .oc_ic_mode_switch (umask=0x2) OC to IC; .ic_oc_mode_switch (umask=0x1) IC to OC.

[core]
ex_ret_cops  (event=0xc1) -- Retired micro-ops; includes all processor activity (instructions, exceptions, interrupts, microcode assists, etc.). 0 to 8 events can be logged per cycle.
ex_ret_cond_misp  (event=0xd2) -- Retired conditional branch instructions mispredicted.
ex_ret_fus_brnch_inst  (event=0x1d0) -- Retired fused instructions: fuse-branch instructions retired per cycle; 0 to 8 events can be logged per cycle.

[floating point]
fpu_pipe_assignment -- uOps dispatched to each of the four FPU execution pipelines, event=0x0; same semantics as the earlier fpu_pipe_assignment block (speculative one-cycle dispatch events, not suitable for measuring MFLOPS): .total (umask=0xf) all pipes; .total3 (umask=0x8) pipe 3; .total2 (umask=0x4) pipe 2; .total1 (umask=0x2) pipe 1.
fp_ret_sse_avx_ops -- retired SSE/AVX FLOPS (retire-based, event=0x3); 0 to 64 events can be logged per cycle, and this event can count above 15: .all (umask=0xff) all FLOPS; .mac_flops (umask=0x8) multiply-add FLOPS, with multiply-add counting as 2 FLOPS; .div_flops (umask=0x4) divide/square-root FLOPS; .mult_flops (umask=0x2) multiply FLOPS; .add_sub_flops (umask=0x1) add/subtract FLOPS.
fp_num_mov_elim_scal_op -- dispatch-based speculative event (event=0x4), useful for measuring the effectiveness of the move-elimination and scalar code optimization schemes: .optimized (umask=0x8) scalar ops optimized; .opt_potential (umask=0x4) ops that are candidates for optimization (Z-bit either set or pass); .sse_mov_ops_elim (umask=0x2) SSE move ops eliminated; .sse_mov_ops (umask=0x1) SSE move ops.
fp_retired_ser_ops -- serializing ops retired, event=0x5 (note the umask assignments differ from the earlier fp_retired_ser_ops block): .sse_bot_ret (umask=0x8) SSE bottom-executing uOps retired; .sse_ctrl_ret (umask=0x4) SSE control word mispredict traps due to mispredictions in RC, FTZ or DAZ, or changes in mask bits; .x87_bot_ret (umask=0x2) x87 bottom-executing uOps retired; .x87_ctrl_ret (umask=0x1) x87 control word mispredict traps due to mispredictions in RC or PC, or changes in mask bits.
fp_disp_faults -- floating-point dispatch faults, event=0xe: .ymm_spill_fault (umask=0x8) YMM spill fault; .ymm_fill_fault (umask=0x4) YMM fill fault; .xmm_fill_fault (umask=0x2) XMM fill fault; .x87_fill_fault (umask=0x1) x87 fill fault.
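fp_num_mov_elim_scal_op yields a direct effectiveness ratio for move elimination: eliminated SSE moves over all SSE moves. Both counts are dispatch-based and speculative, so treat the ratio as indicative only. A sketch:

import subprocess

out = subprocess.run(["perf", "stat", "-x", ",", "-e",
                      "cpu/event=0x4,umask=0x2,name=elim/,"
                      "cpu/event=0x4,umask=0x1,name=moves/",
                      "--", "sleep", "1"],
                     capture_output=True, text=True).stderr
c = {f[2]: int(f[0]) for f in (l.split(",") for l in out.splitlines())
     if len(f) > 2 and f[0].isdigit()}
print(f"SSE moves eliminated: {c['elim'] / max(c['moves'], 1):.1%}")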
[memory]
ls_bad_status2.stli_other  (event=0x24,umask=0x2) -- Store-to-load conflicts: a load was unable to complete due to a non-forwardable conflict with an older store (a Store To Load Interlock, STLI, where the older store could not forward for some reason). Most commonly, the load's address range partially but not completely overlaps an uncompleted older store. Software can avoid this by using same-size, same-alignment loads and stores when accessing the same data. Vector/SIMD code is particularly susceptible; construct wide vector stores by combining elements in registers (shuffle/blend/swap instructions) before storing to memory, instead of narrow element-by-element stores.
ls_locks -- retired lock instructions, event=0x25: .spec_lock_hi_spec (umask=0x8) high speculative cacheable lock speculation succeeded; .spec_lock_lo_spec (umask=0x4) low speculative cacheable lock speculation succeeded; .non_spec_lock (umask=0x2) non-speculative lock succeeded; .bus_lock (umask=0x1) locked operation crossing a cache boundary or to an uncacheable memory type, comparable to a legacy bus lock.
ls_ret_cl_flush  (event=0x26) -- Retired CLFLUSH instructions.
ls_ret_cpuid  (event=0x27) -- Retired CPUID instructions.
ls_dispatch -- operations dispatched to the LS unit (unit masks are additive), event=0x29: .ld_st_dispatch (umask=0x4) single ops that load from and store to the same memory address; .store_dispatch (umask=0x2) stores; .ld_dispatch (umask=0x1) loads.
ls_smi_rx  (event=0x2b) -- SMIs received.
ls_int_taken  (event=0x2c) -- Interrupts taken.
ls_rdtsc  (event=0x2d) -- Reads of the TSC (RDTSC instructions); the count is speculative.
ls_st_commit_cancel2.st_commit_cancel_wcb_full  (event=0x37) -- A non-cacheable store while the non-cacheable commit buffer is full.
ls_dc_accesses  (event=0x40) -- Data cache accesses for load/store references; may include certain (generally rare) microcode scratchpad accesses. Each increment represents an eight-byte access, though an instruction may access only a portion of it. Speculative.
ls_mab_alloc -- LS MAB allocations by type, event=0x41: .dc_prefetcher (umask=0x8); .stores (umask=0x2); .loads (umask=0x1).
ls_refills_from_sys -- demand data cache fills by data source, event=0x43: .ls_mabresp_rmt_dram (umask=0x40) DRAM or IO from a different die; .ls_mabresp_rmt_cache (umask=0x10) cache hit in a remote CCX with the address's home node on a different die; .ls_mabresp_lcl_dram (umask=0x8) DRAM or IO from this thread's die; .ls_mabresp_lcl_cache (umask=0x2) cache hit in the local CCX (not the local L2), or a remote CCX with the home node on this thread's die; .ls_mabresp_lcl_l2 (umask=0x1) local L2 hit.
ls_l1_d_tlb_miss -- L1 DTLB misses, event=0x45: .all (umask=0xff) all misses or reloads; DTLB reloads to a 1G / 2M / coalesced / 4K page that also missed in the L2 TLB (umask=0x80 / 0x40 / 0x20 / 0x10); DTLB reloads to a 1G / 2M / coalesced / 4K page that hit in the L2 TLB (umask=0x8 / 0x4 / 0x2 / 0x1).
ls_pref_instr_disp -- software prefetch instructions dispatched (speculative), event=0x4b: all (umask=0xff); .prefetch_nta (umask=0x4) PREFETCHNTA, see docAPM3 PREFETCHlevel; .prefetch_w (umask=0x2) see docAPM3 PREFETCHW; .prefetch (umask=0x1) PrefetchT0, T1 and T2, see docAPM3 PREFETCHlevel.
ls_sw_pf_dc_fill -- software prefetch data cache fills by data source, event=0x59: .ls_mabresp_rmt_dram (umask=0x40) from DRAM, home node remote; .ls_mabresp_rmt_cache (umask=0x10) from another cache, home node remote; .ls_mabresp_lcl_dram (umask=0x8) from DRAM or IO on this thread's die, home node local; .ls_mabresp_lcl_cache (umask=0x2) from another cache, home node local; .ls_mabresp_lcl_l2 (umask=0x1) local L2 hit.
ls_hw_pf_dc_fill -- hardware prefetch data cache fills by data source, event=0x5a: .ls_mabresp_rmt_dram (umask=0x40) from DRAM, home node remote; .ls_mabresp_rmt_cache (umask=0x10) from another cache, home node remote; .ls_mabresp_lcl_dram (umask=0x8) from DRAM, home node local; .ls_mabresp_lcl_cache (umask=0x2) from another cache, home node local; .ls_mabresp_lcl_l2 (umask=0x1) local L2 hit.
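The fill-source umasks are disjoint, so normalizing them against their sum profiles where a workload's demand misses are served from. A sketch over ls_refills_from_sys (event 0x43; the short labels are ours):

import subprocess

SRC = {"lcl_l2": 0x01, "lcl_cache": 0x02, "lcl_dram": 0x08,
       "rmt_cache": 0x10, "rmt_dram": 0x40}
evs = ",".join(f"cpu/event=0x43,umask={m:#x},name={n}/" for n, m in SRC.items())
out = subprocess.run(["perf", "stat", "-x", ",", "-e", evs,
                      "--", "sleep", "1"],
                     capture_output=True, text=True).stderr
c = {f[2]: int(f[0]) for f in (l.split(",") for l in out.splitlines())
     if len(f) > 2 and f[0].isdigit()}
total = sum(c.values()) or 1
for n in SRC:
    print(f"{n:>10}: {c.get(n, 0) / total:.1%}")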
ls_tlb_flush  (event=0x78) -- All TLB flushes.

[other]
de_dis_uop_queue_empty_di0  (event=0xa9) -- Cycles where the micro-op queue is empty.
de_dis_uops_from_decoder -- ops dispatched from the decoders, the op cache, or both, event=0xaa: all (umask=0xff); .opcache_dispatched (umask=0x2) ops dispatched from the op cache; .decoder_dispatched (umask=0x1) ops dispatched from the decoder.
de_dis_dispatch_token_stalls1 -- cycles where a dispatch group is valid but does not get dispatched due to a token stall, event=0xae:
  .fp_misc_rsrc_stall             (umask=0x80) -- FP miscellaneous resource unavailable (recovery of mispredicts with FP ops)
  .fp_sch_rsrc_stall              (umask=0x40) -- FP scheduler resource stall (ops that use the FP scheduler)
  .fp_reg_file_rsrc_stall         (umask=0x20) -- FP register file resource stall (all FP ops with a destination register)
  .taken_branch_buffer_rsrc_stall (umask=0x10) -- taken-branch buffer resource stall
  .int_sched_misc_token_stall     (umask=0x8)  -- integer scheduler miscellaneous resource stall
  .store_queue_token_stall        (umask=0x4)  -- store queue resource stall (all ops with store semantics)
  .load_queue_token_stall         (umask=0x2)  -- load queue resource stall (all ops with load semantics)
  .int_phy_reg_file_token_stall   (umask=0x1)  -- integer physical register file resource stall (all ops with an integer destination register)
de_dis_dispatch_token_stalls0 -- same condition, event=0xaf: .sc_agu_dispatch_stall (umask=0x40) SC AGU dispatch stall; .retire_token_stall (umask=0x20) RETIRE tokens unavailable; .agsq_token_stall (umask=0x10) AGSQ tokens unavailable; .alu_token_stall (umask=0x8) ALU tokens unavailable; .alsq3_0_token_stall (umask=0x4) ALSQ 3_0 tokens unavailable.
[branch]
bp_dyn_ind_pred  (event=0x8e) -- Dynamic indirect predictions: times a branch used the indirect predictor to make a prediction.
bp_de_redirect  (event=0x91) -- Decode redirects: times the instruction decoder overrides the predicted target.
bp_l1_tlb_fetch_hit -- instruction fetches that hit in the L1 ITLB, event=0x94: .if1g (umask=0x4) 1G page size; .if2m (umask=0x2) 2M page size; .if4k (umask=0x1) 4K or 16K page size.

[cache]
l2_cache_req_stat -- core to L2 cacheable request access status (not including L2 prefetch), event=0x64: .ls_rd_blk_l_hit_x (umask=0x40) data cache read hit on modifiable line in L2; .ls_rd_blk_l_hit_s (umask=0x20) data cache read hit on non-modifiable line in L2; .ls_rd_blk_c (umask=0x8) data cache request miss in L2, all types (use l2_cache_misses_from_dc_misses instead); .ic_fill_hit_s (umask=0x2) instruction cache hit on non-modifiable line in L2; .ic_fill_miss (umask=0x1) instruction cache request miss in L2 (use l2_cache_misses_from_ic_miss instead).
l2_pf_miss_l2_l3  (event=0x72,umask=0xff) -- L2 prefetches accepted by the L2 pipeline that miss both the L2 and L3 caches.
ic_cache_fill_l2  (event=0x82) -- Instruction cache refills from L2: 64-byte IC lines fulfilled from the L2 cache.
ic_cache_fill_sys  (event=0x83) -- Instruction cache refills from system: 64-byte IC lines fulfilled from system memory or another cache.
bp_l1_tlb_miss_l2_tlb_hit  (event=0x84) -- L1 ITLB miss, L2 ITLB hit.
bp_l1_tlb_miss_l2_tlb_miss -- valid ITLB fills from the LS page-table walker (tablewalk requests are issued for L1-ITLB and L2-ITLB misses), event=0x85: .coalesced_4k (umask=0x8) walk for a >4K coalesced page; .if1g (umask=0x4) walk for a 1G page; .if2m (umask=0x2) walk for a 2M page; .if4k (umask=0x1) walk to a 4K page.
ic_tag_hit_miss -- IC tag related hit and miss events, event=0x18e: .all_instruction_cache_accesses (umask=0x1f); .instruction_cache_miss (umask=0x18); .instruction_cache_hit (umask=0x7).
op_cache_hit_miss -- op cache micro-tag hit/miss events, event=0x28f: .all_op_cache_accesses (umask=0x7); .op_cache_miss (umask=0x4); .op_cache_hit (umask=0x3).
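ic_tag_hit_miss and op_cache_hit_miss give front-end hit rates directly (miss over all, per structure). Note the 12-bit event codes, which perf's AMD core-PMU event= format accepts. A sketch:

import subprocess

evs = ("cpu/event=0x18e,umask=0x1f,name=ic_all/,"
       "cpu/event=0x18e,umask=0x18,name=ic_miss/,"
       "cpu/event=0x28f,umask=0x7,name=oc_all/,"
       "cpu/event=0x28f,umask=0x4,name=oc_miss/")
out = subprocess.run(["perf", "stat", "-x", ",", "-e", evs,
                      "--", "sleep", "1"],
                     capture_output=True, text=True).stderr
c = {f[2]: int(f[0]) for f in (l.split(",") for l in out.splitlines())
     if len(f) > 2 and f[0].isdigit()}
print(f"L1I hit rate: {1 - c['ic_miss'] / c['ic_all']:.2%}, "
      f"op-cache hit rate: {1 - c['oc_miss'] / c['oc_all']:.2%}")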
l3_lookup_state.all_l3_req_typs  (event=0x4,umask=0xff) -- All L3 cache requests.
xi_ccx_sdp_req1  (event=0x9a,umask=0xff) -- L3 misses by request type. Ignores SliceID, EnAllSlices, CoreID, EnAllCores and ThreadMask; requires unit mask 0xff to engage the event for counting.

[core]
ex_ret_ops  (event=0xc1) -- Retired macro-ops. Use macro_ops_retired instead.
ex_ret_brn_misp  (event=0xc3) -- Retired branch instructions that were mispredicted.
ex_ret_brn_ind_misp  (event=0xca) -- Retired indirect branches not correctly predicted; each such mispredict costs the same penalty as a mispredicted conditional branch. Note that only EX mispredicts are counted.
ex_ret_mmx_fp_instr.sse_instr  (event=0xcb,umask=0x4) -- Retired SSE instructions (SSE, SSE2, SSE3, SSSE3, SSE4A, SSE41, SSE42, AVX). Each increment represents one complete instruction; since non-numeric instructions are included, not suitable for measuring MFLOPS.
ex_ret_ind_brch_instr  (event=0xcc) -- Retired indirect branch instructions.
ex_ret_msprd_brnch_instr_dir_msmtch  (event=0x1c7) -- Retired conditional branch instructions mispredicted due to a branch-direction mismatch.
ex_ret_fused_instr  (event=0x1d0) -- Retired fused instructions.

[floating point]
fp_ret_sse_avx_ops -- retired SSE/AVX FLOPs (retire-based, event=0x3); 0 to 64 events can be logged per cycle. Requires the MergeEvent, since it can count above 15 events per cycle (see section 2.1.17.3, Large Increment per Cycle Events); without the MergeEvent it does not provide a useful count:
  .mac_flops     (umask=0x8) -- multiply-accumulate FLOPs; each MAC operation counts as 2 FLOPs
  .div_flops     (umask=0x4) -- divide/square-root FLOPs
  .mult_flops    (umask=0x2) -- multiply FLOPs
  .add_sub_flops (umask=0x1) -- add/subtract FLOPs
[floating point]
fp_retired_ser_ops.* (event=0x5): The number of serializing ops retired:
  .sse_bot_ret (umask=0x8): SSE/AVX bottom-executing ops retired.
  .sse_ctrl_ret (umask=0x4): SSE/AVX control word mispredict traps.
  .x87_bot_ret (umask=0x2): x87 bottom-executing ops retired.

[memory]
ls_locks.bus_lock (event=0x25, umask=0x1): Retired lock instructions comparable to a legacy bus lock.
ls_ret_cl_flush (event=0x26): The number of retired CLFLUSH instructions; this is a non-speculative event.
ls_ret_cpuid (event=0x27): The number of CPUID instructions retired.
ls_dispatch.* (event=0x29): Operations dispatched to the LS unit (unit masks are additive):
  .ld_st_dispatch (umask=0x4): dispatch of a single op that performs a load from and store to the same memory address (load-op-store).
  .store_dispatch (umask=0x2): dispatch of a single op that performs a memory store.
  .ld_dispatch (umask=0x1): dispatch of a single op that performs a memory load.
ls_smi_rx (event=0x2b): Counts the number of SMIs received.
ls_int_taken (event=0x2c): Counts the number of interrupts taken.
ls_st_commit_cancel2.st_commit_cancel_wcb_full (event=0x37, umask=0x1): A non-cacheable store while the non-cacheable commit buffer is full.
ls_mab_alloc.* (event=0x41): Counts when an LS pipe allocates a MAB entry: .all_allocations (umask=0x7f), .hardware_prefetcher_allocations (umask=0x40), .load_store_allocations (umask=0x3f).
ls_dmnd_fills_from_sys.* (event=0x43): Demand data cache fills by data source:
  .mem_io_remote (umask=0x40): from DRAM or IO connected in a different node.
  .ext_cache_remote (umask=0x10): from CCX cache in a different node.
  .mem_io_local (umask=0x8): from DRAM or IO connected in the same node.
  .ext_cache_local (umask=0x4): from cache of a different CCX in the same node.
  .int_cache (umask=0x2): from L3 or a different L2 in the same CCX.
  .lcl_l2 (umask=0x1): from the local L2 to the core.
ls_any_fills_from_sys.* (event=0x44): Any data cache fills by data source; same sources and unit masks as ls_dmnd_fills_from_sys.
ls_l1_d_tlb_miss.* (event=0x45): L1 DTLB misses and reloads:
  .all (umask=0xff): all L1 DTLB misses or reloads; use l1_dtlb_misses instead.
  .tlb_reload_1g_l2_miss (umask=0x80): reload to a 1G page that also missed in the L2 TLB.
  .tlb_reload_2m_l2_miss (umask=0x40): reload to a 2M page that also missed in the L2 TLB.
  .tlb_reload_coalesced_page_miss (umask=0x20): reload to a coalesced page that also missed in the L2 TLB.
  .tlb_reload_4k_l2_miss (umask=0x10): reload to a 4K page that missed the L2 TLB.
  .tlb_reload_coalesced_page_hit (umask=0x2): reload to a coalesced page that hit in the L2 TLB.
ls_misal_loads.ma4k (event=0x47, umask=0x2): The number of 4KB misaligned (i.e., page-crossing) loads.
ls_misal_loads.ma64 (event=0x47, umask=0x1): The number of 64B misaligned (i.e., cacheline-crossing) loads.
ls_pref_instr_disp.* (event=0x4b): Software prefetch instructions dispatched (speculative):
  .prefetch_w (umask=0x2): PrefetchW instruction; see docAPM3 PREFETCHW.
  .prefetch (umask=0x1): PrefetchT0, T1 and T2 instructions; see docAPM3 PREFETCHlevel.
ls_sw_pf_dc_fills.* (event=0x59): Software prefetch data cache fills by data source; same sources and unit masks as ls_dmnd_fills_from_sys.
ls_hw_pf_dc_fills.* (event=0x5a): Hardware prefetch data cache fills by data source; same sources and unit masks as ls_dmnd_fills_from_sys.
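The fill-source breakdowns above separate same-node from different-node sources, which makes a simple NUMA locality check possible. A minimal sketch, assuming counts collected with perf for ls_dmnd_fills_from_sys.mem_io_local and .mem_io_remote (the values below are placeholders):

    # Minimal sketch: fraction of DRAM/IO demand fills that crossed to a
    # different node. Counts are placeholders for values read via perf.
    def remote_dram_fill_fraction(mem_io_local: int, mem_io_remote: int) -> float:
        total = mem_io_local + mem_io_remote
        return mem_io_remote / total if total else 0.0

    print(remote_dram_fill_fraction(mem_io_local=900_000,
                                    mem_io_remote=100_000))  # -> 0.1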
[memory]
ls_alloc_mab_count (event=0x5f): Count of allocated MABs. Counts the in-flight L1 data cache misses (allocated Miss Address Buffers) divided by 4 and rounded down each cycle, unless used with the MergeEvent functionality; with the MergeEvent it counts the exact number of outstanding L1 data cache misses. See 2.1.17.3 [Large Increment per Cycle Events].
ls_tlb_flush.all_tlb_flushes (event=0x78, umask=0xff): All TLB flushes. Requires unit mask 0xFF to engage the event for counting; use all_tlbs_flushed instead.

[other]
de_dis_cops_from_decoder.disp_op_type.* (event=0xab): Types of ops dispatched from the decoder: .any_integer_dispatch (umask=0x8, any integer dispatch), .any_fp_dispatch (umask=0x4, any FP dispatch).
de_dis_dispatch_token_stalls1.* (event=0xae): Cycles where a dispatch group is valid but does not get dispatched due to a token stall; also counts cycles when the thread is not selected to dispatch but would have been stalled due to a token stall:
  .fp_flush_recovery_stall (umask=0x80): FP flush recovery stall.
  .fp_sch_rsrc_stall (umask=0x40): FP scheduler resource stall; applies to ops that use the FP scheduler.
  .fp_reg_file_rsrc_stall (umask=0x20): floating-point register file resource stall; applies to all FP ops that have a destination register.
  .taken_brnch_buffer_rsrc (umask=0x10): taken branch buffer resource stall.
  .store_queue_rsrc_stall (umask=0x4): store queue resource stall; applies to all ops with store semantics.
  .load_queue_rsrc_stall (umask=0x2): load queue resource stall; applies to all ops with load semantics.
  .int_phy_reg_file_rsrc_stall (umask=0x1): integer physical register file resource stall; applies to all ops that have an integer destination register.
de_dis_dispatch_token_stalls2.* (event=0xaf): Cycles where a dispatch group is valid but does not get dispatched due to a token stall:
  .retire_token_stall (umask=0x20): insufficient retire queue tokens available.
  .agsq_token_stall (umask=0x10): AGSQ tokens unavailable.
  .int_sch3_token_stall (umask=0x8): no tokens for integer scheduler queue 3 available.
  .int_sch2_token_stall (umask=0x4): no tokens for integer scheduler queue 2 available.
  .int_sch1_token_stall (umask=0x2): no tokens for integer scheduler queue 1 available.
  .int_sch0_token_stall (umask=0x1): no tokens for integer scheduler queue 0 available.

[recommended]
all_data_cache_accesses (event=0x29, umask=0x7): All L1 data cache accesses.
l2_cache_accesses_from_dc_misses (event=0x60, umask=0xe8): L2 cache accesses from L1 data cache misses (including prefetch).
l2_cache_hits_from_dc_misses (event=0x64, umask=0xf0): L2 cache hits from L1 data cache misses.
l2_cache_hits_from_l2_hwpf (event=0x70, umask=0xff): L2 cache hits from L2 cache HWPF.
l3_cache_accesses (event=0x4, umask=0xff): L3 cache accesses.
l3_misses (event=0x4, umask=0x1): L3 misses (includes cacheline state change requests).
l1_data_cache_fills_from_memory (event=0x44, umask=0x48): L1 data cache fills from memory.
l1_data_cache_fills_from_remote_node (event=0x44, umask=0x50): L1 data cache fills from a remote node.
l1_data_cache_fills_from_within_same_ccx (event=0x44, umask=0x3): L1 data cache fills from within the same CCX.
l1_data_cache_fills_from_external_ccx_cache (event=0x44, umask=0x14): L1 data cache fills from an external CCX cache.
l1_data_cache_fills_all (event=0x44, umask=0xff): L1 data cache fills, all sources.
all_tlbs_flushed (event=0x78, umask=0xff): All TLBs flushed.
macro_ops_retired (event=0xc1): Macro-ops retired.

[branch]
bp_l2_btb_correct (event=0x8b): L2 branch prediction overrides existing prediction (speculative).
bp_dyn_ind_pred (event=0x8e): Dynamic indirect predictions (branch used the indirect predictor to make a prediction).
bp_de_redirect (event=0x91): Instruction decoder corrects the predicted target and resteers the branch predictor.
ex_ret_brn (event=0xc2): Retired branch instructions (all types of architectural control flow changes, including exceptions and interrupts).
ex_ret_brn_misp (event=0xc3): Retired branch instructions mispredicted.
ex_ret_brn_tkn (event=0xc4): Retired taken branch instructions (all types of architectural control flow changes, including exceptions and interrupts).
ex_ret_brn_tkn_misp (event=0xc5): Retired taken branch instructions mispredicted.
ex_ret_brn_far (event=0xc6): Retired far control transfers (far call/jump/return, IRET, SYSCALL and SYSRET, plus exceptions and interrupts); far control transfers are not subject to branch prediction.
ex_ret_near_ret (event=0xc8): Retired near returns (RET or RET Iw).
ex_ret_near_ret_mispred (event=0xc9): Retired near returns mispredicted. Each misprediction incurs the same penalty as a mispredicted conditional branch instruction.
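The "recommended" entries above are pre-baked encodings for common ratios. For instance, an L2 demand hit rate falls out of two of them directly; a minimal sketch with placeholder counts:

    # Minimal sketch: L2 hit rate for demand traffic, from
    # l2_cache_hits_from_dc_misses / l2_cache_accesses_from_dc_misses above.
    # Counts are placeholders for values read via perf.
    def l2_demand_hit_rate(hits_from_dc_misses: int,
                           accesses_from_dc_misses: int) -> float:
        return (hits_from_dc_misses / accesses_from_dc_misses
                if accesses_from_dc_misses else 0.0)

    print(f"{l2_demand_hit_rate(750_000, 1_000_000):.1%}")  # -> 75.0%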
[branch]
ex_ret_brn_ind_misp (event=0xca): Retired indirect branch instructions mispredicted (only EX mispredicts). Each misprediction incurs the same penalty as a mispredicted conditional branch instruction.
ex_ret_ind_brch_instr (event=0xcc): Retired indirect branch instructions.
ex_ret_cond (event=0xd1): Retired conditional branch instructions.
ex_ret_msprd_brnch_instr_dir_msmtch (event=0x1c7): Retired branch instructions mispredicted due to direction mismatch.
ex_ret_uncond_brnch_instr_mispred (event=0x1c8): Retired unconditional indirect branch instructions mispredicted.
ex_ret_uncond_brnch_instr (event=0x1c9): Retired unconditional branch instructions.

[cache]
ls_mab_alloc.* (event=0x41): Miss Address Buffer (MAB) entries allocated by a Load-Store (LS) pipe: .load_store_allocations (umask=0x3f), .hardware_prefetcher_allocations (umask=0x40), .all_allocations (umask=0x7f).
ls_dmnd_fills_from_sys.* (event=0x43): Demand data cache fills by data source:
  .local_l2 (umask=0x1): from local L2 cache.
  .local_ccx (umask=0x2): from L3 cache or a different L2 cache in the same CCX.
  .near_cache (umask=0x4): from cache of another CCX when the address was in the same NUMA node.
  .dram_io_near (umask=0x8): from either DRAM or MMIO in the same NUMA node.
  .far_cache (umask=0x10): from cache of another CCX when the address was in a different NUMA node.
  .dram_io_far (umask=0x40): from either DRAM or MMIO in a different NUMA node (same or different socket).
  .alternate_memories (umask=0x80): from extension memory.
  .all (umask=0xff): from all types of data sources.
ls_any_fills_from_sys.* (event=0x44): Any data cache fills by data source; same sources and unit masks as ls_dmnd_fills_from_sys, plus:
  .local_all (umask=0x3): from local L2 cache, or L3 cache or a different L2 cache in the same CCX.
  .remote_cache (umask=0x14): from cache of another CCX when the address was in the same or a different NUMA node.
  .dram_io_all (umask=0x48): from either DRAM or MMIO in any NUMA node (same or different socket).
  .far_all (umask=0x50): from cache of another CCX, DRAM or MMIO when the address was in a different NUMA node (same or different socket).
  .all_dram_io (umask=0x48): same encoding and description as .dram_io_all.
ls_pref_instr_disp.* (event=0x4b): Software prefetch instructions dispatched (speculative):
  .prefetch (umask=0x1): PrefetchT0 (move data to all cache levels), T1 (all levels except L1) and T2 (all levels except L1 and L2).
  .prefetch_w (umask=0x2): PrefetchW (move data to L1 cache and mark it modifiable).
  .prefetch_nta (umask=0x4): PrefetchNTA (move data with minimum cache pollution, i.e. non-temporal access).
  .all (umask=0x7): all types.
ls_inef_sw_pref.* (event=0x52): Software prefetches that did not fetch data outside of the processor core:
  .data_pipe_sw_pf_dc_hit (umask=0x1): the PREFETCH instruction saw a data cache hit.
  .mab_mch_cnt (umask=0x2): the PREFETCH instruction saw a match on an already allocated Miss Address Buffer (MAB).
  .all (umask=0x3): all of the above types.
ls_sw_pf_dc_fills.* (event=0x59): Software prefetch data cache fills by data source: .local_l2 (umask=0x1), .local_ccx (umask=0x2), .near_cache (umask=0x4, another CCX's cache in the same NUMA node), .dram_io_near (umask=0x8), .far_cache (umask=0x10, another CCX's cache in a different NUMA node), .dram_io_far (umask=0x40), .alternate_memories (umask=0x80), .all (umask=0xdf).
ls_hw_pf_dc_fills.* (event=0x5a): Hardware prefetch data cache fills by data source; same sources and unit masks as ls_sw_pf_dc_fills (.all umask=0xdf).
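ls_pref_instr_disp and ls_inef_sw_pref above pair naturally: the former counts dispatched software prefetches, the latter the ones that could not have fetched anything new. A minimal sketch with placeholder counts:

    # Minimal sketch: fraction of dispatched SW prefetches that were
    # ineffective (hit the data cache or matched an existing MAB), using
    # ls_inef_sw_pref.all and ls_pref_instr_disp.all from the tables above.
    def ineffective_sw_prefetch_fraction(inef_all: int,
                                         dispatched_all: int) -> float:
        return inef_all / dispatched_all if dispatched_all else 0.0

    print(f"{ineffective_sw_prefetch_fraction(40_000, 1_000_000):.2%}")  # -> 4.00%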
[cache]
ls_alloc_mab_count (event=0x5f): In-flight L1 data cache misses, i.e. Miss Address Buffer (MAB) allocations, each cycle.
l2_request_g1.* (event=0x60): L2 cache requests:
  .group2 (umask=0x1): non-cacheable type (non-cached data and instruction reads, self-modifying code checks).
  .l2_hw_pf (umask=0x2): from hardware prefetchers to prefetch directly into L2 (hit or miss).
  .prefetch_l2_cmd (umask=0x4): prefetch directly into L2.
  .change_to_x (umask=0x8): data cache state change to writable; check L2 for current state.
  .cacheable_ic_read (umask=0x10): instruction cache reads.
  .ls_rd_blk_c_s (umask=0x20): data cache shared reads.
  .rd_blk_x (umask=0x40): data cache stores.
  .rd_blk_l (umask=0x80): data cache reads including hardware and software prefetch.
  .all_dc (umask=0xe8): common types from L1 data cache (including prefetches).
  .all_no_prefetch (umask=0xf9): common types, not including prefetches.
  .all (umask=0xff): all types.
l2_cache_req_stat.* (event=0x64): Core to L2 cache requests (not including L2 prefetch), by status:
  .ic_fill_miss (umask=0x1): instruction cache request miss in L2.
  .ic_fill_hit_s (umask=0x2): instruction cache hit non-modifiable line in L2.
  .ic_fill_hit_x (umask=0x4): instruction cache hit modifiable line in L2.
  .ic_hit_in_l2 (umask=0x6): instruction cache hits.
  .ic_access_in_l2 (umask=0x7): instruction cache accesses.
  .ls_rd_blk_c (umask=0x8): data cache request miss in L2.
  .ic_dc_miss_in_l2 (umask=0x9): data and instruction cache misses.
  .ls_rd_blk_x (umask=0x10): data cache store or state change hit in L2.
  .ls_rd_blk_l_hit_s (umask=0x20): data cache read hit non-modifiable line in L2.
  .ls_rd_blk_l_hit_x (umask=0x40): data cache read hit modifiable line in L2.
  .ls_rd_blk_cs (umask=0x80): data cache shared read hit in L2.
  .dc_hit_in_l2 (umask=0xf0): data cache hits.
  .ic_dc_hit_in_l2 (umask=0xf6): data and instruction cache hits.
  .dc_access_in_l2 (umask=0xf8): data cache accesses.
  .all (umask=0xff): data and instruction cache accesses.
l2_pf_hit_l2.* (event=0x70): L2 prefetches accepted by the L2 pipeline which hit in the L2 cache, by prefetcher type:
  .l2_stream (umask=0x1): L2Stream (fetch additional sequential lines into L2 cache).
  .l2_next_line (umask=0x2): L2NextLine (fetch the next line into L2 cache).
  .l2_up_down (umask=0x4): L2UpDown (fetch the next or previous line into L2 cache for all memory accesses).
  .l2_burst (umask=0x8): L2Burst (aggressively fetch additional sequential lines into L2 cache).
  .l2_stride (umask=0x10): L2Stride (fetch additional lines into L2 cache when each access is a constant distance from the previous).
  .l1_stream (umask=0x20): L1Stream (fetch additional sequential lines into L1 cache).
  .l1_stride (umask=0x40): L1Stride (fetch additional lines into L1 cache when each access is a constant distance from the previous).
  .l1_region (umask=0x80): L1Region (fetch additional lines into L1 cache when the data access for a given instruction tends to be followed by a consistent pattern of other accesses within a localized region).
  .all (umask=0xff): all types.
l2_pf_miss_l2_hit_l3.* (event=0x71): L2 prefetches accepted by the L2 pipeline which miss the L2 cache and hit in the L3 cache; same prefetcher types and unit masks as l2_pf_hit_l2.
l2_pf_miss_l2_l3.* (event=0x72): L2 prefetches accepted by the L2 pipeline which miss both the L2 and the L3 caches; same prefetcher types and unit masks as l2_pf_hit_l2.
ic_cache_fill_l2 (event=0x82): Instruction cache lines (64 bytes) fulfilled from the L2 cache.
ic_cache_fill_sys (event=0x83): Instruction cache lines (64 bytes) fulfilled from system memory or another cache.
ic_tag_hit_miss.* (event=0x18e): .instruction_cache_hit (umask=0x7, instruction cache hits), .instruction_cache_miss (umask=0x18, instruction cache misses), .all_instruction_cache_accesses (umask=0x1f, instruction cache accesses of all types).
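The three l2_pf_* groups above partition accepted L2 prefetches by where they finally land (hit L2, miss L2 but hit L3, or miss both), so their .all counts give a fate breakdown. A minimal sketch with placeholder counts:

    # Minimal sketch: where did accepted L2 prefetches land? Uses the .all
    # counts of l2_pf_hit_l2, l2_pf_miss_l2_hit_l3 and l2_pf_miss_l2_l3
    # above; values are placeholders for perf readings.
    def l2_pf_breakdown(hit_l2: int, hit_l3: int, miss_both: int) -> dict:
        total = hit_l2 + hit_l3 + miss_both
        parts = {"hit_l2": hit_l2, "hit_l3": hit_l3, "miss_l2_l3": miss_both}
        return {k: v / total if total else 0.0 for k, v in parts.items()}

    print(l2_pf_breakdown(hit_l2=20_000, hit_l3=50_000, miss_both=30_000))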
[cache]
op_cache_hit_miss.* (event=0x28f): .op_cache_hit (umask=0x3, op cache hits), .op_cache_miss (umask=0x4, op cache misses), .all_op_cache_accesses (umask=0x7, op cache accesses of all types).
l3_lookup_state.* (event=0x4): .l3_miss (umask=0x1, L3 cache misses), .l3_hit (umask=0xfe, L3 cache hits), .all_coherent_accesses_to_l3 (umask=0xff, L3 cache requests for all coherent accesses).
l3_xi_sampled_latency.* (event=0xac, with enallcores=1, enallslices=1, sliceid=3, threadmask=3): Average sampled latency by data source:
  .dram_near (umask=0x1): data sourced from DRAM in the same NUMA node.
  .dram_far (umask=0x2): data sourced from DRAM in a different NUMA node.
  .near_cache (umask=0x4): data sourced from another CCX's cache when the address was in the same NUMA node.
  .far_cache (umask=0x8): data sourced from another CCX's cache when the address was in a different NUMA node.
  .ext_near (umask=0x10): data sourced from extension memory (CXL) in the same NUMA node.
  .ext_far (umask=0x20): data sourced from extension memory (CXL) in a different NUMA node.
  .all (umask=0x3f): all data sources.
l3_xi_sampled_latency_requests.* (event=0xad): L3 cache fill requests by data source; same sources, unit masks and modifiers as l3_xi_sampled_latency.

[core]
ls_locks.bus_lock (event=0x25, umask=0x1): Retired lock instructions which caused a bus lock.
ls_ret_cl_flush (event=0x26): Retired CLFLUSH instructions.
ls_ret_cpuid (event=0x27): Retired CPUID instructions.
ls_smi_rx (event=0x2b): SMIs received.
ls_int_taken (event=0x2c): Interrupts taken.
ls_not_halted_cyc (event=0x76): Core cycles not in halt.
ex_ret_instr (event=0xc0): Retired instructions.
ex_ret_ops (event=0xc1): Retired macro-ops.
ex_div_busy (event=0xd3): Number of cycles the divider is busy.
ex_div_count (event=0xd4): Divide ops executed.
ex_no_retire.* (event=0xd6): Cycles with no retire:
  .empty (umask=0x1): due to the lack of valid ops in the retire queue (may be caused by front-end bottlenecks or pipeline redirects).
  .not_complete (umask=0x2): while the oldest op is waiting to be executed.
  .other (umask=0x8): caused by other reasons (retire breaks, traps, faults, etc.).
  .thread_not_selected (umask=0x10): because thread arbitration did not select the thread.
  .load_not_complete (umask=0xa2): while the oldest op is waiting for load data.
  .all (umask=0x1b): for any reason.
ls_not_halted_p0_cyc.p0_freq_cyc (event=0x120, umask=0x1): Reference cycles (P0 frequency) not in halt.
ex_ret_ucode_instr (event=0x1c1): Retired microcoded instructions.
ex_ret_ucode_ops (event=0x1c2): Retired microcode ops.
ex_tagged_ibs_ops.ibs_tagged_ops (event=0x1cf, umask=0x1): Ops tagged by IBS.
ex_tagged_ibs_ops.ibs_tagged_ops_ret (event=0x1cf, umask=0x2): Ops tagged by IBS that retired.
ex_ret_fused_instr (event=0x1d0): Retired fused instructions.

[data fabric]
local_processor_read_data_beats_cs0..cs11 (umask=0x7fe; per-CS event codes 0x1f, 0x5f, 0x9f, 0xdf, 0x11f, 0x15f, 0x19f, 0x1df, 0x21f, 0x25f, 0x29f, 0x2df): Read data beats (64 bytes) for the local processor at Coherent Station (CS) 0-11.
local_processor_write_data_beats_cs0..cs11 (umask=0x7ff; same per-CS event codes): Write data beats (64 bytes) for the local processor at CS 0-11.
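With ex_ret_instr and ls_not_halted_cyc from the core list above, instructions per non-halted cycle is a one-line division; a minimal sketch with placeholder counts:

    # Minimal sketch: IPC from ex_ret_instr / ls_not_halted_cyc above.
    # Placeholder counts stand in for perf readings.
    def ipc(retired_instructions: int, not_halted_cycles: int) -> float:
        return (retired_instructions / not_halted_cycles
                if not_halted_cycles else 0.0)

    print(f"IPC = {ipc(4_000_000_000, 2_500_000_000):.2f}")  # -> IPC = 1.60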
[data fabric]
remote_processor_read_data_beats_cs0..cs11 (umask=0xbfe; same per-CS event codes as above): Read data beats (64 bytes) for the remote processor at CS 0-11.
remote_processor_write_data_beats_cs0..cs11 (umask=0xbff; same per-CS event codes): Write data beats (64 bytes) for the remote processor at CS 0-11.
local_socket_upstream_read_beats_iom0..iom3 (umask=0x7fe; per-IOM event codes 0x81f, 0x85f, 0x89f, 0x8df): Read data beats (64 bytes) for local socket upstream DMA at IO Moderator (IOM) 0-3.
local_socket_upstream_write_beats_iom0..iom3 (umask=0x7ff; same per-IOM event codes): Write data beats (64 bytes) for local socket upstream DMA at IOM 0-3.
remote_socket_upstream_read_beats_iom0..iom3 (umask=0xbfe; same per-IOM event codes): Read data beats (64 bytes) for remote socket upstream DMA at IOM 0-3.
remote_socket_upstream_write_beats_iom0..iom3 (umask=0xbff; same per-IOM event codes): Write data beats (64 bytes) for remote socket upstream DMA at IOM 0-3.
local_socket_inf0_inbound_data_beats_ccm0..ccm7 (umask=0x7fe; per-CCM interface-0 event codes 0x41e, 0x45e, 0x49e, 0x4de, 0x51e, 0x55e, 0x59e, 0x5de): Data beats (32 bytes) at interface 0 for local socket inbound data to CPU Moderator (CCM) 0-7.
local_socket_inf1_inbound_data_beats_ccm0..ccm7 (umask=0x7fe; per-CCM interface-1 event codes 0x41f, 0x45f, 0x49f, 0x4df, 0x51f, 0x55f, 0x59f, 0x5df): Data beats (32 bytes) at interface 1 for local socket inbound data to CCM 0-7.
local_socket_inf0_outbound_data_beats_ccm0..ccm7 (umask=0x7ff; interface-0 per-CCM event codes): Data beats (64 bytes) at interface 0 for local socket outbound data from CCM 0-7.
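All of these data-fabric events count fixed-size beats, so bandwidth is just beats times beat size over the measurement interval (note the CCM inbound events above count 32-byte beats, while the CS, IOM and outbound events count 64-byte beats). A minimal sketch with placeholder values:

    # Minimal sketch: convert data-fabric beat counts into bandwidth.
    # Placeholder values stand in for perf readings over a measured interval.
    def df_bandwidth_gib_s(beats: int, beat_bytes: int, seconds: float) -> float:
        return beats * beat_bytes / seconds / 2**30

    # e.g. sum of local_processor_read_data_beats_cs0..cs11 over 1 second,
    # which are 64-byte beats per the descriptions above
    print(f"{df_bandwidth_gib_s(500_000_000, 64, 1.0):.1f} GiB/s")  # -> 29.8 GiB/s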
[data fabric]
local_socket_inf1_outbound_data_beats_ccm0..ccm7 (umask=0x7ff; interface-1 per-CCM event codes): Data beats (64 bytes) at interface 1 for local socket outbound data from CCM 0-7.
remote_socket_inf0_inbound_data_beats_ccm0..ccm7 (umask=0xbfe; interface-0 per-CCM event codes): Data beats (32 bytes) at interface 0 for remote socket inbound data to CCM 0-7.
remote_socket_inf1_inbound_data_beats_ccm0..ccm7 (umask=0xbfe; interface-1 per-CCM event codes): Data beats (32 bytes) at interface 1 for remote socket inbound data to CCM 0-7.
remote_socket_inf0_outbound_data_beats_ccm0..ccm7 (umask=0xbff; interface-0 per-CCM event codes): Data beats (64 bytes) at interface 0 for remote socket outbound data from CCM 0-7.
remote_socket_inf1_outbound_data_beats_ccm0..ccm7 (umask=0xbff; interface-1 per-CCM event codes): Data beats (64 bytes) at interface 1 for remote socket outbound data from CCM 0-7.
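Looking ahead to the floating-point tables below: fp_ret_sse_avx_ops already weights each multiply-accumulate as two FLOPs per the event descriptions, so a FLOPS figure is a straight sum over the sub-events. A minimal sketch with placeholder counts:

    # Minimal sketch: GFLOPS from the fp_ret_sse_avx_ops sub-events below,
    # where mac_flops already counts 2 FLOPs per MAC operation.
    # Placeholder counts stand in for perf readings.
    def sse_avx_gflops(add_sub: int, mult: int, div: int, mac: int,
                       seconds: float) -> float:
        total_flops = add_sub + mult + div + mac  # mac pre-weighted by 2
        return total_flops / seconds / 1e9

    print(f"{sse_avx_gflops(2_000_000_000, 2_000_000_000, 100_000_000,
                            4_000_000_000, 1.0):.1f} GFLOPS")  # -> 8.1 GFLOPS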
fabricevent=0x55f,umask=0xbff01Data beats (64 bytes) at interface 1 for remote socket outbound data from CPU Moderator (CCM) 5remote_socket_inf1_outbound_data_beats_ccm6data fabricevent=0x59f,umask=0xbff01Data beats (64 bytes) at interface 1 for remote socket outbound data from CPU Moderator (CCM) 6remote_socket_inf1_outbound_data_beats_ccm7data fabricevent=0x5df,umask=0xbff01Data beats (64 bytes) at interface 1 for remote socket outbound data from CPU Moderator (CCM) 7local_socket_outbound_data_beats_link0data fabricevent=0xb5f,umask=0xf3e01Data beats (64 bytes) for local socket outbound data from inter-socket xGMI link 0local_socket_outbound_data_beats_link1data fabricevent=0xb9f,umask=0xf3e01Data beats (64 bytes) for local socket outbound data from inter-socket xGMI link 1local_socket_outbound_data_beats_link2data fabricevent=0xbdf,umask=0xf3e01Data beats (64 bytes) for local socket outbound data from inter-socket xGMI link 2local_socket_outbound_data_beats_link3data fabricevent=0xc1f,umask=0xf3e01Data beats (64 bytes) for local socket outbound data from inter-socket xGMI link 3local_socket_outbound_data_beats_link4data fabricevent=0xc5f,umask=0xf3e01Data beats (64 bytes) for local socket outbound data from inter-socket xGMI link 4local_socket_outbound_data_beats_link5data fabricevent=0xc9f,umask=0xf3e01Data beats (64 bytes) for local socket outbound data from inter-socket xGMI link 5local_socket_outbound_data_beats_link6data fabricevent=0xcdf,umask=0xf3e01Data beats (64 bytes) for local socket outbound data from inter-socket xGMI link 6local_socket_outbound_data_beats_link7data fabricevent=0xd1f,umask=0xf3e01Data beats (64 bytes) for local socket outbound data from inter-socket xGMI link 7fp_ret_x87_fp_ops.add_sub_opsfloating pointRetired x87 floating-point add and subtract opsevent=2,umask=100fp_ret_x87_fp_ops.mul_opsfloating pointRetired x87 floating-point multiply opsevent=2,umask=200fp_ret_x87_fp_ops.div_sqrt_opsfloating pointRetired x87 floating-point divide and square root opsevent=2,umask=400fp_ret_x87_fp_ops.allfloating pointRetired x87 floating-point ops of all typesevent=2,umask=700fp_ret_sse_avx_ops.add_sub_flopsfloating pointRetired SSE and AVX floating-point add and subtract opsevent=3,umask=100fp_ret_sse_avx_ops.mult_flopsfloating pointRetired SSE and AVX floating-point multiply opsevent=3,umask=200fp_ret_sse_avx_ops.div_flopsfloating pointRetired SSE and AVX floating-point divide and square root opsevent=3,umask=400fp_ret_sse_avx_ops.mac_flopsfloating pointRetired SSE and AVX floating-point multiply-accumulate ops (each operation is counted as 2 ops)event=3,umask=800fp_ret_sse_avx_ops.bfloat_mac_flopsfloating pointRetired SSE and AVX floating-point bfloat multiply-accumulate ops (each operation is counted as 2 ops)event=3,umask=0x1000fp_ret_sse_avx_ops.allfloating pointRetired SSE and AVX floating-point ops of all typesevent=3,umask=0x1f00fp_retired_ser_ops.x87_ctrl_retfloating pointRetired x87 control word mispredict traps due to mispredictions in RC or PC, or changes in exception mask bitsevent=5,umask=100fp_retired_ser_ops.x87_bot_retfloating pointRetired x87 bottom-executing ops. Bottom-executing ops wait for all older ops to retire before executingevent=5,umask=200fp_retired_ser_ops.sse_ctrl_retfloating pointRetired SSE and AVX control word mispredict trapsevent=5,umask=400fp_retired_ser_ops.sse_bot_retfloating pointRetired SSE and AVX bottom-executing ops. 
  fp_retired_ser_ops.x87_ctrl_ret (event=5,umask=1): Retired x87 control word mispredict traps due to mispredictions in RC or PC, or changes in exception mask bits
  fp_retired_ser_ops.x87_bot_ret (event=5,umask=2): Retired x87 bottom-executing ops (bottom-executing ops wait for all older ops to retire before executing)
  fp_retired_ser_ops.sse_ctrl_ret (event=5,umask=4): Retired SSE and AVX control word mispredict traps
  fp_retired_ser_ops.sse_bot_ret (event=5,umask=8): Retired SSE and AVX bottom-executing ops
  fp_retired_ser_ops.all (event=5,umask=0xf): Retired SSE and AVX serializing ops of all types
  fp_ops_retired_by_width.{x87,mmx,scalar,pack_128,pack_256,pack_512}_uops_retired (event=8, umask=1/2/4/8/0x10/0x20): Retired x87 / MMX / scalar / packed 128-bit / packed 256-bit / packed 512-bit floating-point ops
  fp_ops_retired_by_width.all (event=8,umask=0x3f): Retired floating-point ops of all widths
  fp_ops_retired_by_type.scalar_{add,sub,mul,mac,div,sqrt,cmp,cvt,blend} (event=0xa, umask=1..9): Retired scalar floating-point ops by type
  fp_ops_retired_by_type.scalar_other (event=0xa,umask=0xe), .scalar_all (umask=0xf): Retired scalar floating-point ops of other / all types
  fp_ops_retired_by_type.vector_{add,sub,mul,mac,div,sqrt,cmp,cvt,blend} (event=0xa, umask=0x10/0x20/0x30/0x40/0x50/0x60/0x70/0x80/0x90): Retired vector floating-point ops by type
  fp_ops_retired_by_type.vector_shuffle (event=0xa,umask=0xb0): Retired vector floating-point shuffle ops (may include instructions not necessarily thought of as including shuffles, e.g. horizontal add, dot product, and certain MOV instructions)
  fp_ops_retired_by_type.vector_logical (umask=0xd0), .vector_other (0xe0), .vector_all (0xf0), .all (0xff): Retired vector / all floating-point ops
  sse_avx_ops_retired.mmx_{add,sub,mul,mac} (event=0xb, umask=1/2/3/4): Retired MMX integer add / subtract / multiply / multiply-accumulate ops
  sse_avx_ops_retired.mmx_cmp (umask=7), .mmx_shift (9), .mmx_mov (0xa), .mmx_shuffle (0xb, same shuffle caveat as above), .mmx_pack (0xc), .mmx_logical (0xd), .mmx_other (0xe), .mmx_all (0xf): Retired MMX integer ops by type
  sse_avx_ops_retired.sse_avx_{add,sub,mul,mac,aes,sha,cmp,clm,shift,mov,shuffle,pack,logical,other,all} (event=0xb, umask=0x10/0x20/0x30/0x40/0x50/0x60/0x70/0x80/0x90/0xa0/0xb0/0xc0/0xd0/0xe0/0xf0): Retired SSE and AVX integer ops by type
  sse_avx_ops_retired.all (event=0xb,umask=0xff): Retired SSE, AVX and MMX integer ops of all types
  fp_pack_ops_retired.fp128_{add,sub,mul,mac,div,sqrt,cmp,cvt,blend} (event=0xc, umask=1..9): Retired 128-bit packed floating-point ops by type
  fp_pack_ops_retired.fp128_shuffle (umask=0xb, shuffle caveat as above), .fp128_logical (0xd), .fp128_other (0xe), .fp128_all (0xf)
  fp_pack_ops_retired.fp256_{add,sub,mul,mac,div,sqrt,cmp,cvt,blend,shuffle,logical,other,all} (event=0xc, umask=0x10/0x20/0x30/0x40/0x50/0x60/0x70/0x80/0x90/0xb0/0xd0/0xe0/0xf0): Retired 256-bit packed floating-point ops by type
  fp_pack_ops_retired.all (event=0xc,umask=0xff): Retired packed floating-point ops of all types
  packed_int_op_type.int128_{add,sub,mul,mac,aes,sha,cmp,clm,shift,mov,shuffle,pack,logical,other,all} (event=0xd, umask=1..0xf): Retired 128-bit packed integer ops by type
  packed_int_op_type.int256_{add,sub,mul,mac,cmp,shift,mov,shuffle,pack,logical,other,all} (event=0xd, umask=0x10/0x20/0x30/0x40/0x70/0x90/0xa0/0xb0/0xc0/0xd0/0xe0/0xf0): Retired 256-bit packed integer ops by type
  packed_int_op_type.all (event=0xd,umask=0xff): Retired packed integer ops of all types
  fp_disp_faults.x87_fill_fault (event=0xe,umask=1), .xmm_fill_fault (2), .ymm_fill_fault (4), .ymm_spill_fault (8): Floating-point dispatch faults for x87 / XMM / YMM fills and YMM spills
  fp_disp_faults.sse_avx_all (event=0xe,umask=0xe), .all (umask=0xf): Floating-point dispatch faults of all types for SSE and AVX ops / of all types

amd_umc memory controller events (flags 01):
  umc_mem_clk (event=0): Number of memory clock cycles
  umc_act_cmd.all (event=5), .rd (rdwrmask=1), .wr (rdwrmask=2): Number of ACTIVATE commands sent (all / for reads / for writes)
  umc_pchg_cmd.all (event=6), .rd (rdwrmask=1), .wr (rdwrmask=2): Number of PRECHARGE commands sent
  umc_cas_cmd.all (event=0xa), .rd (rdwrmask=1), .wr (rdwrmask=2): Number of CAS commands sent
  umc_data_slot_clks.all (event=0x14), .rd (rdwrmask=1), .wr (rdwrmask=2): Number of clocks used by the data bus
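The CAS counters above suggest a rough DRAM bandwidth estimate: each CAS command corresponds to one burst transfer. The 64-bytes-per-burst figure below is an assumption (typical for a 64-bit DDR channel), not something the event table states; the counts are hypothetical.

# Estimate DRAM bandwidth from umc_cas_cmd counts. Each CAS command
# corresponds to one burst transfer; 64 bytes per burst is an assumption,
# not a value given by the table above.

BYTES_PER_CAS = 64  # assumed burst size

def dram_bandwidth_gbps(cas_rd: int, cas_wr: int, seconds: float):
    """Return (read, write) bandwidth in GB/s from CAS command counts."""
    read = cas_rd * BYTES_PER_CAS / seconds / 1e9
    write = cas_wr * BYTES_PER_CAS / seconds / 1e9
    return read, write

rd, wr = dram_bandwidth_gbps(cas_rd=500_000_000, cas_wr=200_000_000, seconds=1.0)
print(f"read {rd:.1f} GB/s, write {wr:.1f} GB/s")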
memory events (flags 00):
  ls_bad_status2.stli_other (event=0x24,umask=2): Store-to-load conflicts (load unable to complete due to a non-forwardable conflict with an older store)
  ls_dispatch.ld_dispatch (event=0x29,umask=1), .store_dispatch (2), .ld_st_dispatch (4): Memory load / store / load-store operations dispatched to the load-store unit
  ls_stlf (event=0x35): Store-to-load-forward (STLF) hits
  ls_st_commit_cancel2.st_commit_cancel_wcb_full (event=0x37,umask=1): Non-cacheable store commits cancelled due to the non-cacheable commit buffer being full
  ls_l1_d_tlb_miss.tlb_reload_{4k,coalesced_page,2m,1g}_l2_hit (event=0x45, umask=1/2/4/8): L1 DTLB misses with L2 DTLB hits, by page size (a coalesced page is a 16k page created from four adjacent 4k pages)
  ls_l1_d_tlb_miss.tlb_reload_{4k,coalesced_page,2m,1g}_l2_miss (event=0x45, umask=0x10/0x20/0x40/0x80): L1 DTLB misses with L2 DTLB misses (page-table walks are requested), by page size
  ls_l1_d_tlb_miss.all_l2_miss (event=0x45,umask=0xf0), .all (umask=0xff): L1 DTLB misses with L2 DTLB misses for all page sizes / L1 DTLB misses for all page sizes
  ls_misal_loads.ma64 (event=0x47,umask=1): 64B misaligned (cacheline crossing) loads; .ma4k (umask=2): 4kB misaligned (page crossing) loads
  ls_tlb_flush.all (event=0x78,umask=0xff): All TLB flushes
  bp_l1_tlb_miss_l2_tlb_hit (event=0x84): Instruction fetches that miss in the L1 ITLB but hit in the L2 ITLB
  bp_l1_tlb_miss_l2_tlb_miss.{if4k,if2m,if1g,coalesced_4k,all} (event=0x85, umask=1/2/4/8/0xf): Instruction fetches that miss in both the L1 and L2 ITLBs (page-table walks are requested), by page size
  bp_l1_tlb_fetch_hit.{if4k,if2m,if1g,all} (event=0x94, umask=1/2/4/7): Instruction fetches that hit in the L1 ITLB, by page size (if4k includes coalesced pages)

other events (flags 00):
  resyncs_or_nc_redirects (event=0x96): Pipeline restarts not caused by branch mispredicts
  de_op_queue_empty (event=0xa9): Cycles when the op queue is empty; such cycles indicate that the front-end is not delivering instructions fast enough
  de_src_op_disp.decoder (event=0xaa,umask=1): Ops fetched from instruction cache and dispatched; .op_cache (2): Ops fetched from op cache and dispatched; .loop_buffer (4): Ops dispatched from loop buffer; .all (7): Ops dispatched from any source
  de_dis_ops_from_decoder.any_fp_dispatch (event=0xab,umask=4): Ops dispatched to the floating-point unit; .disp_op_type.any_integer_dispatch (umask=8): Ops dispatched to the integer execution unit
  de_dis_dispatch_token_stalls1.* (event=0xae): cycles dispatch is stalled for integer physical register file tokens (umask=1), load queue tokens (2), store queue tokens (4), taken branch buffer tokens (0x10), floating-point register file tokens (0x20), floating-point scheduler tokens (0x40), floating-point flush recovery (0x80)
  de_dis_dispatch_token_stalls2.int_sch{0..3}_token_stall (event=0xaf, umask=1/2/4/8): cycles dispatch is stalled for integer scheduler queue 0..3 tokens; .retire_token_stall (umask=0x20): for retire queue tokens
  de_no_dispatch_per_slot.no_ops_from_frontend (event=0x1a0,umask=1): in each cycle counts dispatch slots left empty because the front-end did not supply ops; .backend_stalls (umask=0x1e): ops unable to dispatch because of back-end stalls; .smt_contention (umask=0x60): ops unable to dispatch because the dispatch cycle was granted to the other SMT thread

recommended events (flags 00):
  all_data_cache_accesses (event=0x29,umask=7): All data cache accesses

branch prediction events (flags 00):
  bp_l1_tlb_miss_l2_tlb_hit (event=0x84), bp_l1_tlb_miss_l2_tlb_miss.{if4k,if2m,if1g,coalesced_4k,all} (event=0x85, umask=1/2/4/8/0xf), bp_l1_tlb_fetch_hit.{if4k,if2m,if1g,all} (event=0x94, umask=1/2/4/7): same ITLB events as listed under memory above
  bp_l2_btb_correct (event=0x8b): L2 branch prediction overrides existing prediction (speculative)
  bp_dyn_ind_pred (event=0x8e): Dynamic indirect predictions (branch used the indirect predictor to make a prediction)
  bp_de_redirect (event=0x91): Number of times an early redirect is sent to the branch predictor; this happens when either the decoder or dispatch logic detects that the branch predictor needs to be redirected
  bp_redirects.resync (event=0x9f,umask=1): Redirects of the branch predictor caused by resyncs; .ex_redir (umask=2): caused by mispredicts; .all (event=0x9f): all redirects of the branch predictor

decode events (flags 00):
  de_op_queue_empty (event=0xa9): Cycles where the op queue is empty
  de_src_op_disp.x86_decoder (event=0xaa,umask=1): Ops dispatched from x86 decoder; .op_cache (2): Ops dispatched from op cache; .all (7): Ops dispatched from any source
  de_dis_ops_from_decoder.any_fp_dispatch (event=0xab,umask=4): Ops dispatched to the floating-point unit; .any_integer_dispatch (umask=8): Ops dispatched to the integer execution unit
  de_dispatch_stall_cycle_dynamic_tokens_part1.* (event=0xae): cycles where a dispatch group is valid but does not get dispatched due to an integer physical register file resource stall (umask=1), lack of load queue tokens (2), lack of store queue tokens (4), a taken branch buffer resource stall (0x10), a floating-point non-schedulable queue token stall (0x40)
  de_dispatch_stall_cycle_dynamic_tokens_part2.al_tokens (event=0xaf,umask=1): unavailability of ALU tokens; .ag_tokens (2): unavailability of agen tokens; .ex_flush_recovery (4): pending integer execution flush recovery; .retq (0x20): unavailability of retire queue tokens
  de_no_dispatch_per_slot.{no_ops_from_frontend,backend_stalls,smt_contention} (event=0x1a0, umask=1/0x1e/0x60): same three sub-events as under other above
  de_additional_resource_stalls.dispatch_stalls (event=0x1a2,umask=0x30): additional cycles where dispatch is stalled due to a lack of dispatch resources

execution events (flags 00):
  ex_ret_instr (event=0xc0): Retired instructions
  ex_ret_ops (event=0xc1): Retired macro-ops
  ex_ret_brn (event=0xc2): Retired branch instructions (all types of architectural control flow changes, including exceptions and interrupts)
  ex_ret_brn_misp (event=0xc3): Retired branch instructions mispredicted
  ex_ret_brn_tkn (event=0xc4): Retired taken branch instructions; ex_ret_brn_tkn_misp (event=0xc5): Retired taken branch instructions mispredicted
  ex_ret_brn_far (event=0xc6): Retired far control transfers (far call/jump/return, IRET, SYSCALL and SYSRET, plus exceptions and interrupts); far control transfers are not subject to branch prediction
  ex_ret_near_ret (event=0xc8): Retired near returns (RET or RET Iw); ex_ret_near_ret_mispred (event=0xc9): Retired near returns mispredicted; each misprediction incurs the same penalty as a mispredicted conditional branch instruction
  ex_ret_brn_ind_misp (event=0xca): Retired indirect branch instructions mispredicted (only EX mispredicts), with the same penalty as above
  ex_ret_mmx_fp_instr.x87 (event=0xcb,umask=1): Retired x87 instructions; .mmx (2): Retired MMX instructions; .sse (4): Retired SSE instructions (includes SSE, SSE2, SSE3, SSSE3, SSE4A, SSE41, SSE42 and AVX)
  ex_ret_ind_brch_instr (event=0xcc): Retired indirect branch instructions
  ex_ret_cond (event=0xd1): Retired conditional branch instructions
  ex_div_busy (event=0xd3): Number of cycles the divider is busy; ex_div_count (event=0xd4): Divide ops executed
  ex_no_retire.empty (event=0xd6,umask=1): Cycles with no retire due to the lack of valid ops in the retire queue (may be caused by front-end bottlenecks or pipeline redirects); .not_complete (2): oldest op waiting to be executed; .other (8): other reasons (retire breaks, traps, faults, etc.); .thread_not_selected (0x10): thread arbitration did not select the thread; .load_not_complete (0xa2): oldest op waiting for load data; .all (0x1b): no retire for any reason
  ex_ret_ucode_instr (event=0x1c1): Retired microcoded instructions; ex_ret_ucode_ops (event=0x1c2): Retired microcode ops
  ex_ret_msprd_brnch_instr_dir_msmtch (event=0x1c7): Retired branch instructions mispredicted due to direction mismatch
  ex_ret_uncond_brnch_instr_mispred (event=0x1c8): Retired unconditional indirect branch instructions mispredicted; ex_ret_uncond_brnch_instr (event=0x1c9): Retired unconditional branch instructions
  ex_tagged_ibs_ops.ibs_tagged_ops (event=0x1cf,umask=1): Ops tagged by IBS; .ibs_tagged_ops_ret (2): Ops tagged by IBS that retired; .ibs_count_rollover (4): Ops not tagged by IBS due to a previous tagged op that has not yet signaled interrupt
  ex_ret_fused_instr (event=0x1d0): Retired fused instructions

floating point events (flags 00):
  fp_ret_sse_avx_ops.bfloat16_flops (event=3,umask=0x20): Retired SSE and AVX floating-point bfloat16 ops
  fp_ret_sse_avx_ops.scalar_single_flops (umask=0x40), .packed_single_flops (0x60), .scalar_double_flops (0x80), .packed_double_flops (0xa0): Retired scalar / packed single- and double-precision ops
  fp_ret_sse_avx_ops.all (event=3,umask=0xf): Retired SSE and AVX floating-point ops of all types
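The retired-branch counters above combine into the usual misprediction metrics: the misprediction rate from ex_ret_brn_misp over ex_ret_brn, and branch MPKI normalized by ex_ret_instr. A minimal sketch; the raw counts are illustrative only.

# Derive common branch metrics from the execution events above.
# MPKI = mispredicted branches per 1000 retired instructions.

ex_ret_instr    = 10_000_000_000  # retired instructions (event 0xc0)
ex_ret_brn      = 1_800_000_000   # retired branches (event 0xc2)
ex_ret_brn_misp = 9_000_000       # mispredicted branches (event 0xc3)

rate = ex_ret_brn_misp / ex_ret_brn
mpki = ex_ret_brn_misp / ex_ret_instr * 1000

print(f"mispredict rate: {rate:.3%}, branch MPKI: {mpki:.2f}")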
inst cache events (flags 00):
  ic_cache_fill_l2 (event=0x82): Instruction cache lines (64 bytes) fulfilled from the L2 cache
  ic_cache_fill_sys (event=0x83): Instruction cache lines (64 bytes) fulfilled from system memory or another cache
  ic_fetch_ibs_events.fetch_tagged (event=0x188,umask=2): Fetches tagged by Fetch IBS (not all tagged fetches result in a valid sample and an IBS interrupt); .sample_discarded (4): tagged fetches discarded for reasons other than IBS filtering; .sample_filtered (8): discarded due to IBS filtering; .sample_valid (0x10): tagged fetches that result in a valid sample and an IBS interrupt
  ic_tag_hit_miss.instruction_cache_hit (event=0x18e,umask=7): Instruction cache hits; .instruction_cache_miss (umask=0x18): misses; .all_instruction_cache_accesses (umask=0x1f): accesses of all types
  op_cache_hit_miss.op_cache_hit (event=0x28f,umask=3): Op cache hits; .op_cache_miss (umask=4): misses; .all_op_cache_accesses (umask=7): accesses of all types

l2 cache events (flags 00):
  l2_request_g1.group2 (event=0x60,umask=1): L2 cache requests of non-cacheable type (non-cached data and instruction reads, self-modifying code checks)
  l2_request_g1.l2_hw_pf (umask=2): requests from hardware prefetchers to prefetch directly into L2 (hit or miss); .prefetch_l2_cmd (4): prefetch directly into L2
  l2_request_g1.cacheable_ic_read (umask=0x10): instruction cache reads; .ls_rd_blk_c_s (0x20): data cache shared reads; .rd_blk_x (0x40): data cache stores; .rd_blk_l (0x80): data cache reads including hardware and software prefetch
  l2_request_g1.all_dc (umask=0xe0): common types from L1 data cache (including prefetches); .all_no_prefetch (0xf1): common types not including prefetches; .all (0xf7): all types
  l2_request_g2.ls_rd_sized_nc (event=0x61,umask=0x20): non-coherent, non-cacheable LS sized reads; .ls_rd_sized (umask=0x40): coherent, non-cacheable LS sized reads
  l2_wcb_req.wcb_close (event=0x63,umask=0x20): Write Combining Buffer (WCB) closures
  l2_cache_req_stat.* (event=0x64; core to L2 cache requests, not including L2 prefetch): ic_fill_miss (umask=1), ic_fill_hit_s (2, hit non-modifiable line), ic_fill_hit_x (4, hit modifiable line), ic_hit_in_l2 (6), ic_access_in_l2 (7), ls_rd_blk_c (8, data cache request miss), ic_dc_miss_in_l2 (9), ls_rd_blk_x (0x10, data cache store or state change hit), ls_rd_blk_l_hit_s (0x20, read hit non-modifiable line), ls_rd_blk_l_hit_x (0x40, read hit modifiable line), ls_rd_blk_cs (0x80, shared read hit), dc_hit_in_l2 (0xf0), ic_dc_hit_in_l2 (0xf6), dc_access_in_l2 (0xf8), all (0xff)
  l2_pf_hit_l2.{l2_hwpf,l1_dc_hwpf,l1_dc_l2_hwpf} (event=0x70, umask=0x1f/0xe0/0xff): L2 prefetches accepted by the L2 pipeline which hit in the L2 cache, by generating prefetcher (L2 / L1 data / both)
  l2_pf_miss_l2_hit_l3.{l2_hwpf,l1_dc_hwpf,l1_dc_l2_hwpf} (event=0x71, same umasks): accepted L2 prefetches which miss the L2 cache but hit in the L3 cache
  l2_pf_miss_l2_l3.{l2_hwpf,l1_dc_hwpf,l1_dc_l2_hwpf} (event=0x72, same umasks): accepted L2 prefetches which miss the L2 as well as the L3 caches
  l2_fill_rsp_src.local_ccx (event=0x165,umask=2): L2 cache fills from L3 cache or a different L2 cache in the same CCX; .near_cache (4): from cache of another CCX in the same NUMA node; .dram_io_near (8): from either DRAM or MMIO in the same NUMA node; .far_cache (0x10): from cache of another CCX in a different NUMA node; .dram_io_far (0x40): from either DRAM or MMIO in a different NUMA node (same or different socket); .alternate_memories (0x80): from extension memory; .all (0xde): from all types of data sources

l3 cache events (flags 00):
  l3_lookup_state.l3_miss (event=4,umask=1): L3 cache misses; .l3_hit (umask=0xfe): L3 cache hits; .all_coherent_accesses_to_l3 (umask=0xff): L3 cache requests for all coherent accesses
  l3_xi_sampled_latency.* (event=0xac, enallcores=1,enallslices=1,sliceid=3,threadmask=3): average sampled latency when data is sourced from DRAM in the same NUMA node (umask=1), DRAM in a different node (2), another CCX's cache in the same node (4), another CCX's cache in a different node (8), extension memory (CXL) in the same node (0x10), CXL in a different node (0x20), all sources (0x3f)
  l3_xi_sampled_latency_requests.* (event=0xad, same umasks and modifiers): L3 cache fill requests sourced from each of the above sources
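The latency events pair one accumulator with one matching request counter per source, so the per-fill average is their ratio. A minimal sketch; the unit of the accumulated value (cycles or a scaled quantity) is not stated in the table, so the result is left in unspecified units, and the readings are hypothetical.

# Average L3 fill latency per source as the ratio of the sampled-latency
# accumulator (event 0xac) to the matching request counter (event 0xad).

def avg_fill_latency(latency_accum: int, requests: int) -> float:
    """Return average latency per fill, or NaN if no requests were sampled."""
    return latency_accum / requests if requests else float("nan")

# Hypothetical per-source readings: (accumulated latency, request count)
samples = {
    "dram_near": (9_000_000, 30_000),
    "far_cache": (2_400_000, 6_000),
}
for source, (accum, reqs) in samples.items():
    print(f"{source}: {avg_fill_latency(accum, reqs):.1f} units/fill")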
load store events (flags 00):
  ls_bad_status2.stli_other (event=0x24,umask=2): Store-to-load conflicts (load unable to complete due to a non-forwardable conflict with an older store)
  ls_locks.bus_lock (event=0x25,umask=1): Retired Lock instructions which caused a bus lock
  ls_ret_cl_flush (event=0x26): Retired CLFLUSH instructions; ls_ret_cpuid (event=0x27): Retired CPUID instructions
  ls_dispatch.ld_dispatch (event=0x29,umask=1), .store_dispatch (2), .ld_st_dispatch (4), .all (7): memory load / store / load-store / all operations dispatched to the load-store unit
  ls_smi_rx (event=0x2b): SMIs received; ls_int_taken (event=0x2c): Interrupts taken
  ls_stlf (event=0x35): Store-to-load-forward (STLF) hits
  ls_st_commit_cancel2.st_commit_cancel_wcb_full (event=0x37,umask=1): Non-cacheable store commits cancelled due to the non-cacheable commit buffer being full
  ls_mab_alloc.load_store_allocations (event=0x41,umask=0x3f): Miss Address Buffer (MAB) entries allocated by a Load-Store (LS) pipe for load-store allocations; .hardware_prefetcher_allocations (0x40): for hardware prefetcher allocations; .all_allocations (0x7f): for all types of allocations
  ls_dmnd_fills_from_sys.* (event=0x43): demand data cache fills from local L2 cache (umask=1), L3 or a different L2 in the same CCX (2), another CCX's cache in the same NUMA node (4), DRAM or MMIO in the same node (8), another CCX's cache in a different node (0x10), DRAM or MMIO in a different node, same or different socket (0x40), extension memory (0x80), all sources (0xff)
  ls_any_fills_from_sys.* (event=0x44): any data cache fills, same source breakdown, plus local_all (umask=3), remote_cache (0x14), dram_io_all and all_dram_io (both 0x48), far_all (0x50), all (0xff)
  ls_l1_d_tlb_miss.* (event=0x45): same L1/L2 DTLB reload breakdown listed under memory above (umask 1..0x80 by page size and L2 DTLB hit/miss, all_l2_miss 0xf0, all 0xff)
  ls_misal_loads.ma64 (event=0x47,umask=1): 64B misaligned (cacheline crossing) loads; .ma4k (umask=2): 4kB misaligned (page crossing) loads
  ls_pref_instr_disp.prefetch (event=0x4b,umask=1): software prefetch instructions dispatched (speculative) of type PrefetchT0/T1/T2; .prefetch_w (2): PrefetchW (move data to L1 cache and mark it modifiable); .prefetch_nta (4): PrefetchNTA (minimum cache pollution, i.e. non-temporal access); .all (7): all types
  wcb_close.full_line_64b (event=0x50,umask=1): WCB entry closures because all 64 bytes of the entry have been written to
  ls_inef_sw_pref.data_pipe_sw_pf_dc_hit (event=0x52,umask=1): software prefetches that did not fetch data outside the core because the PREFETCH instruction saw a data cache hit; .mab_mch_cnt (2): saw a match on an already-allocated MAB; .all (3)
  ls_sw_pf_dc_fills.* (event=0x59): software prefetch data cache fills, same source breakdown as ls_dmnd_fills_from_sys (all=0xdf)
  ls_hw_pf_dc_fills.* (event=0x5a): hardware prefetch data cache fills, same source breakdown (all=0xdf)
  ls_alloc_mab_count (event=0x5f): in-flight L1 data cache misses, i.e. Miss Address Buffer (MAB) allocations each cycle
  ls_not_halted_cyc (event=0x76): Core cycles not in halt
  ls_tlb_flush.all (event=0x78,umask=0xff): All TLB flushes
  ls_not_halted_p0_cyc.p0_freq_cyc (event=0x120,umask=1): Reference cycles (P0 frequency) not in halt

memory controller events (flags 01):
  umc_mem_clk (event=0): Number of memory clock (MEMCLK) cycles
  umc_data_slot_clks.all (event=0x14), .rd (rdwrmask=1), .wr (rdwrmask=2): Number of clock cycles used by the data bus (all / reads / writes)
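The per-source fill counters above lend themselves to a NUMA-locality breakdown. A minimal sketch over hypothetical ls_any_fills_from_sys readings, showing how the sub-events can be summarized as percentages of total fills.

# Break data-cache fills down by source using the ls_any_fills_from_sys
# sub-events listed above. The counts are made up; the output shows how
# NUMA locality of fills can be summarized.

fills = {
    "local_l2":     800_000,  # umask=1
    "local_ccx":    120_000,  # umask=2
    "near_cache":    20_000,  # umask=4
    "dram_io_near":  50_000,  # umask=8
    "far_cache":      4_000,  # umask=0x10
    "dram_io_far":    6_000,  # umask=0x40
}
total = sum(fills.values())
for src, n in sorted(fills.items(), key=lambda kv: -kv[1]):
    print(f"{src:13s} {n / total:6.1%}")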
cache events (flags 00; period=200000 unless noted):
  l1d_cache.all_cache_ref (event=0x40,period=2000000,umask=0xa3): L1 Data cacheable reads and writes; .all_ref (period=2000000,umask=0x83): L1 Data reads and writes; .ld (period=2000000,umask=0xa1): L1 cacheable data reads; .st (period=2000000,umask=0xa2): L1 cacheable data writes
  l1d_cache.evict (event=0x40,umask=0x10): modified cache lines evicted from the L1 data cache; .repl (umask=8): L1 Data line replacements; .replm (umask=0x48): modified cache lines allocated in the L1 data cache
  l2_ads.self (event=0x21,umask=0x40): Cycles the L2 address bus is in use
  l2_data_rqsts.self.{mesi,m,e,s,i}_state (event=0x2c, umask=0x4f/0x48/0x44/0x42/0x41): All data requests from the L1 data cache, by MESI state
  l2_dbus_busy.self (event=0x22,umask=0x40): Cycles the L2 cache data bus is busy; l2_dbus_busy_rd.self (event=0x23,umask=0x40): Cycles the L2 transfers data to the core
  l2_ifetch.self.{mesi,m,e,s,i}_state (event=0x28, umask=0x4f/0x48/0x44/0x42/0x41): L2 cacheable instruction fetch requests
  l2_ld.self.{any,demand,prefetch}.{mesi,m,e,s,i}_state (event=0x29, umask=0x7f/0x78/0x74/0x72/0x71 for any, 0x4f/0x48/0x44/0x42/0x41 for demand, 0x5f/0x58/0x54/0x52/0x51 for prefetch): L2 cache reads
  l2_ld_ifetch.self.{mesi,m,e,s,i}_state (event=0x2d, umask=0x4f/0x48/0x44/0x42/0x41): All read requests from L1 instruction and data caches
  l2_lines_in.self.{any,demand,prefetch} (event=0x24, umask=0x70/0x40/0x50): L2 cache misses
  l2_lines_out.self.{any,demand,prefetch} (event=0x26, umask=0x70/0x40/0x50): L2 cache lines evicted
  l2_lock.self.{mesi,m,e,s,i}_state (event=0x2b, umask=0x4f/0x48/0x44/0x42/0x41): L2 locked accesses
  l2_m_lines_in.self (event=0x25,umask=0x40): L2 cache line modifications
  l2_m_lines_out.self.{any,demand,prefetch} (event=0x27, umask=0x70/0x40/0x50): Modified lines evicted from the L2 cache
  l2_no_req.self (event=0x32,umask=0x40): Cycles no L2 cache requests are pending
  l2_reject_busq.self.{any,demand,prefetch}.{mesi,m,e,s,i}_state (event=0x30, same umask pattern as l2_ld): Rejected L2 cache requests
  l2_rqsts.self.{any,demand,prefetch}.{mesi,m,e,s,i}_state (event=0x2e, same umask pattern as l2_ld): L2 cache requests; demand.i_state (umask=0x41) counts L2 cache demand requests from this core that missed the L2, and demand.mesi (umask=0x4f) counts L2 cache demand requests from this core
  l2_st.self.{mesi,m,e,s,i}_state (event=0x2a, umask=0x4f/0x48/0x44/0x42/0x41): L2 store requests
  mem_load_retired.l2_hit (event=0xcb,umask=1): Retired loads that hit the L2 cache (precise event); .l2_miss (period=10000,umask=2): Retired loads that miss the L2 cache

floating point events (flags 00):
  fp_assist.ar (event=0x11,period=10000,umask=0x81): Floating point assists for retired operations; fp_assist.s (period=10000,umask=1): Floating point assists
  simd_assist (event=0xcd,period=1000000): SIMD assists invoked
  simd_comp_inst_retired.packed_single: Retired computational Streaming
SIMD Extensions (SSE) packed-single instructionsevent=0xca,period=2000000,umask=100simd_comp_inst_retired.scalar_doublefloating pointRetired computational Streaming SIMD Extensions 2 (SSE2) scalar-double instructionsevent=0xca,period=2000000,umask=800simd_comp_inst_retired.scalar_singlefloating pointRetired computational Streaming SIMD Extensions (SSE) scalar-single instructionsevent=0xca,period=2000000,umask=200simd_instr_retiredfloating pointSIMD Instructions retiredevent=0xce,period=200000000simd_inst_retired.packed_singlefloating pointRetired Streaming SIMD Extensions (SSE) packed-single instructionsevent=0xc7,period=2000000,umask=100simd_inst_retired.scalar_doublefloating pointRetired Streaming SIMD Extensions 2 (SSE2) scalar-double instructionsevent=0xc7,period=2000000,umask=800simd_inst_retired.scalar_singlefloating pointRetired Streaming SIMD Extensions (SSE) scalar-single instructionsevent=0xc7,period=2000000,umask=200simd_inst_retired.vectorfloating pointRetired Streaming SIMD Extensions 2 (SSE2) vector instructionsevent=0xc7,period=2000000,umask=0x1000simd_sat_instr_retiredfloating pointSaturated arithmetic instructions retiredevent=0xcf,period=200000000simd_sat_uop_exec.arfloating pointSIMD saturated arithmetic micro-ops retiredevent=0xb1,period=2000000,umask=0x8000simd_sat_uop_exec.sfloating pointSIMD saturated arithmetic micro-ops executedevent=0xb1,period=200000000simd_uops_exec.arfloating pointSIMD micro-ops retired (excluding stores) (Must be precise)event=0xb0,period=2000000,umask=0x8000simd_uops_exec.sfloating pointSIMD micro-ops executed (excluding stores)event=0xb0,period=200000000simd_uop_type_exec.arithmetic.arfloating pointSIMD packed arithmetic micro-ops retiredevent=0xb3,period=2000000,umask=0xa000simd_uop_type_exec.arithmetic.sfloating pointSIMD packed arithmetic micro-ops executedevent=0xb3,period=2000000,umask=0x2000simd_uop_type_exec.logical.arfloating pointSIMD packed logical micro-ops retiredevent=0xb3,period=2000000,umask=0x9000simd_uop_type_exec.logical.sfloating pointSIMD packed logical micro-ops executedevent=0xb3,period=2000000,umask=0x1000simd_uop_type_exec.mul.arfloating pointSIMD packed multiply micro-ops retiredevent=0xb3,period=2000000,umask=0x8100simd_uop_type_exec.mul.sfloating pointSIMD packed multiply micro-ops executedevent=0xb3,period=2000000,umask=100simd_uop_type_exec.pack.arfloating pointSIMD packed micro-ops retiredevent=0xb3,period=2000000,umask=0x8400simd_uop_type_exec.pack.sfloating pointSIMD packed micro-ops executedevent=0xb3,period=2000000,umask=400simd_uop_type_exec.shift.arfloating pointSIMD packed shift micro-ops retiredevent=0xb3,period=2000000,umask=0x8200simd_uop_type_exec.shift.sfloating pointSIMD packed shift micro-ops executedevent=0xb3,period=2000000,umask=200simd_uop_type_exec.unpack.arfloating pointSIMD unpacked micro-ops retiredevent=0xb3,period=2000000,umask=0x8800simd_uop_type_exec.unpack.sfloating pointSIMD unpacked micro-ops executedevent=0xb3,period=2000000,umask=800x87_comp_ops_exe.any.arfloating pointFloating point computational micro-ops retired (Must be precise)event=0x10,period=2000000,umask=0x8100x87_comp_ops_exe.any.sfloating pointFloating point computational micro-ops executedevent=0x10,period=2000000,umask=100x87_comp_ops_exe.fxch.arfloating pointFXCH uops retired (Must be precise)event=0x10,period=2000000,umask=0x8200x87_comp_ops_exe.fxch.sfloating pointFXCH uops executedevent=0x10,period=2000000,umask=200baclears.anyfrontendBACLEARS 
assertedevent=0xe6,period=2000000,umask=100cycles_icache_mem_stalled.icache_mem_stalledfrontendCycles during which instruction fetches are  stalledevent=0x86,period=2000000,umask=100decode_stall.iq_fullfrontendDecode stall due to IQ fullevent=0x87,period=2000000,umask=200decode_stall.pfb_emptyfrontendDecode stall due to PFB emptyevent=0x87,period=2000000,umask=100icache.accessesfrontendInstruction fetchesevent=0x80,period=200000,umask=300icache.hitfrontendIcache hitevent=0x80,period=200000,umask=100icache.missesfrontendIcache missevent=0x80,period=200000,umask=200macro_insts.all_decodedfrontendAll Instructions decodedevent=0xaa,period=2000000,umask=300macro_insts.cisc_decodedfrontendCISC macro instructions decodedevent=0xaa,period=2000000,umask=200macro_insts.non_cisc_decodedfrontendNon-CISC macro instructions decodedevent=0xaa,period=2000000,umask=100uops.ms_cyclesfrontendThis event counts the cycles where 1 or more uops are issued by the micro-sequencer (MS), including microcode assists and inserted flows, and written to the IQevent=0xa9,cmask=1,period=2000000,umask=100misalign_mem_ref.bubblememoryNonzero segbase 1 bubbleevent=5,period=200000,umask=0x9700misalign_mem_ref.ld_bubblememoryNonzero segbase load 1 bubbleevent=5,period=200000,umask=0x9100misalign_mem_ref.ld_splitmemoryLoad splitsevent=5,period=200000,umask=900misalign_mem_ref.ld_split.armemoryLoad splits (At Retirement)event=5,period=200000,umask=0x8900misalign_mem_ref.rmw_bubblememoryNonzero segbase ld-op-st 1 bubbleevent=5,period=200000,umask=0x9400misalign_mem_ref.rmw_splitmemoryld-op-st splitsevent=5,period=200000,umask=0x8c00misalign_mem_ref.splitmemoryMemory references that cross an 8-byte boundaryevent=5,period=200000,umask=0xf00misalign_mem_ref.split.armemoryMemory references that cross an 8-byte boundary (At Retirement)event=5,period=200000,umask=0x8f00misalign_mem_ref.st_bubblememoryNonzero segbase store 1 bubbleevent=5,period=200000,umask=0x9200misalign_mem_ref.st_splitmemoryStore splitsevent=5,period=200000,umask=0xa00misalign_mem_ref.st_split.armemoryStore splits (Ar Retirement)event=5,period=200000,umask=0x8a00prefetch.hw_prefetchmemoryL1 hardware prefetch requestevent=7,period=2000000,umask=0x1000prefetch.prefetchntamemoryStreaming SIMD Extensions (SSE) Prefetch NTA instructions executedevent=7,period=200000,umask=0x8800prefetch.prefetcht0memoryStreaming SIMD Extensions (SSE) PrefetchT0 instructions executedevent=7,period=200000,umask=0x8100prefetch.prefetcht1memoryStreaming SIMD Extensions (SSE) PrefetchT1 instructions executedevent=7,period=200000,umask=0x8200prefetch.prefetcht2memoryStreaming SIMD Extensions (SSE) PrefetchT2 instructions executedevent=7,period=200000,umask=0x8400prefetch.software_prefetchmemoryAny Software prefetchevent=7,period=200000,umask=0xf00prefetch.software_prefetch.armemoryAny Software prefetchevent=7,period=200000,umask=0x8f00prefetch.sw_l2memoryStreaming SIMD Extensions (SSE) PrefetchT1 and PrefetchT2 instructions executedevent=7,period=200000,umask=0x8600busq_empty.selfotherBus queue is emptyevent=0x7d,period=200000,umask=0x4000bus_bnr_drv.all_agentsotherNumber of Bus Not Ready signals assertedevent=0x61,period=200000,umask=0x2000bus_bnr_drv.this_agentotherNumber of Bus Not Ready signals assertedevent=0x61,period=20000000bus_data_rcv.selfotherBus cycles while processor receives dataevent=0x64,period=200000,umask=0x4000bus_drdy_clocks.all_agentsotherBus cycles when data is sent on the busevent=0x62,period=200000,umask=0x2000bus_drdy_clocks.this_agentotherBus cycles when data is sent 
on the busevent=0x62,period=20000000bus_hitm_drv.all_agentsotherHITM signal assertedevent=0x7b,period=200000,umask=0x2000bus_hitm_drv.this_agentotherHITM signal assertedevent=0x7b,period=20000000bus_hit_drv.all_agentsotherHIT signal assertedevent=0x7a,period=200000,umask=0x2000bus_hit_drv.this_agentotherHIT signal assertedevent=0x7a,period=20000000bus_io_wait.selfotherIO requests waiting in the bus queueevent=0x7f,period=200000,umask=0x4000bus_lock_clocks.all_agentsotherBus cycles when a LOCK signal is assertedevent=0x63,period=200000,umask=0xe000bus_lock_clocks.selfotherBus cycles when a LOCK signal is assertedevent=0x63,period=200000,umask=0x4000bus_request_outstanding.all_agentsotherOutstanding cacheable data read bus requests durationevent=0x60,period=200000,umask=0xe000bus_request_outstanding.selfotherOutstanding cacheable data read bus requests durationevent=0x60,period=200000,umask=0x4000bus_trans_any.all_agentsotherAll bus transactionsevent=0x70,period=200000,umask=0xe000bus_trans_any.selfotherAll bus transactionsevent=0x70,period=200000,umask=0x4000bus_trans_brd.all_agentsotherBurst read bus transactionsevent=0x65,period=200000,umask=0xe000bus_trans_brd.selfotherBurst read bus transactionsevent=0x65,period=200000,umask=0x4000bus_trans_burst.all_agentsotherBurst (full cache-line) bus transactionsevent=0x6e,period=200000,umask=0xe000bus_trans_burst.selfotherBurst (full cache-line) bus transactionsevent=0x6e,period=200000,umask=0x4000bus_trans_def.all_agentsotherDeferred bus transactionsevent=0x6d,period=200000,umask=0xe000bus_trans_def.selfotherDeferred bus transactionsevent=0x6d,period=200000,umask=0x4000bus_trans_ifetch.all_agentsotherInstruction-fetch bus transactionsevent=0x68,period=200000,umask=0xe000bus_trans_ifetch.selfotherInstruction-fetch bus transactionsevent=0x68,period=200000,umask=0x4000bus_trans_inval.all_agentsotherInvalidate bus transactionsevent=0x69,period=200000,umask=0xe000bus_trans_inval.selfotherInvalidate bus transactionsevent=0x69,period=200000,umask=0x4000bus_trans_io.all_agentsotherIO bus transactionsevent=0x6c,period=200000,umask=0xe000bus_trans_io.selfotherIO bus transactionsevent=0x6c,period=200000,umask=0x4000bus_trans_mem.all_agentsotherMemory bus transactionsevent=0x6f,period=200000,umask=0xe000bus_trans_mem.selfotherMemory bus transactionsevent=0x6f,period=200000,umask=0x4000bus_trans_p.all_agentsotherPartial bus transactionsevent=0x6b,period=200000,umask=0xe000bus_trans_p.selfotherPartial bus transactionsevent=0x6b,period=200000,umask=0x4000bus_trans_pwr.all_agentsotherPartial write bus transactionevent=0x6a,period=200000,umask=0xe000bus_trans_pwr.selfotherPartial write bus transactionevent=0x6a,period=200000,umask=0x4000bus_trans_rfo.all_agentsotherRFO bus transactionsevent=0x66,period=200000,umask=0xe000bus_trans_rfo.selfotherRFO bus transactionsevent=0x66,period=200000,umask=0x4000bus_trans_wb.all_agentsotherExplicit writeback bus transactionsevent=0x67,period=200000,umask=0xe000bus_trans_wb.selfotherExplicit writeback bus transactionsevent=0x67,period=200000,umask=0x4000cycles_int_masked.cycles_int_maskedotherCycles during which interrupts are disabledevent=0xc6,period=2000000,umask=100cycles_int_masked.cycles_int_pending_and_maskedotherCycles during which interrupts are pending and disabledevent=0xc6,period=2000000,umask=200ext_snoop.all_agents.anyotherExternal snoopsevent=0x77,period=200000,umask=0x2b00ext_snoop.all_agents.cleanotherExternal snoopsevent=0x77,period=200000,umask=0x2100ext_snoop.all_agents.hitotherExternal 
snoopsevent=0x77,period=200000,umask=0x2200ext_snoop.all_agents.hitmotherExternal snoopsevent=0x77,period=200000,umask=0x2800ext_snoop.this_agent.anyotherExternal snoopsevent=0x77,period=200000,umask=0xb00ext_snoop.this_agent.cleanotherExternal snoopsevent=0x77,period=200000,umask=100ext_snoop.this_agent.hitotherExternal snoopsevent=0x77,period=200000,umask=200ext_snoop.this_agent.hitmotherExternal snoopsevent=0x77,period=200000,umask=800hw_int_rcvotherHardware interrupts receivedevent=0xc8,period=20000000snoop_stall_drv.all_agentsotherBus stalled for snoopsevent=0x7e,period=200000,umask=0xe000snoop_stall_drv.selfotherBus stalled for snoopsevent=0x7e,period=200000,umask=0x4000thermal_tripotherNumber of thermal tripsevent=0x3b,period=200000,umask=0xc000bogus_brpipelineBogus branchesevent=0xe4,period=2000000,umask=100br_inst_decodedpipelineBranch instructions decodedevent=0xe0,period=2000000,umask=100br_inst_retired.anypipelineRetired branch instructionsevent=0xc4,period=200000000br_inst_retired.any1pipelineRetired branch instructionsevent=0xc4,period=2000000,umask=0xf00br_inst_retired.mispredpipelineRetired mispredicted branch instructions (precise event) (Precise event)event=0xc5,period=20000000br_inst_retired.mispred_not_takenpipelineRetired branch instructions that were mispredicted not-takenevent=0xc4,period=200000,umask=200br_inst_retired.mispred_takenpipelineRetired branch instructions that were mispredicted takenevent=0xc4,period=200000,umask=800br_inst_retired.pred_not_takenpipelineRetired branch instructions that were predicted not-takenevent=0xc4,period=2000000,umask=100br_inst_retired.pred_takenpipelineRetired branch instructions that were predicted takenevent=0xc4,period=2000000,umask=400br_inst_retired.takenpipelineRetired taken branch instructionsevent=0xc4,period=2000000,umask=0xc00br_inst_type_retired.condpipelineAll macro conditional branch instructionsevent=0x88,period=2000000,umask=100br_inst_type_retired.cond_takenpipelineOnly taken macro conditional branch instructionsevent=0x88,period=2000000,umask=0x4100br_inst_type_retired.dir_callpipelineAll non-indirect callsevent=0x88,period=2000000,umask=0x1000br_inst_type_retired.indpipelineAll indirect branches that are not callsevent=0x88,period=2000000,umask=400br_inst_type_retired.ind_callpipelineAll indirect calls, including both register and memory indirectevent=0x88,period=2000000,umask=0x2000br_inst_type_retired.retpipelineAll indirect branches that have a return mnemonicevent=0x88,period=2000000,umask=800br_inst_type_retired.uncondpipelineAll macro unconditional branch instructions, excluding calls and indirectsevent=0x88,period=2000000,umask=200br_missp_type_retired.condpipelineMispredicted cond branch instructions retiredevent=0x89,period=200000,umask=100br_missp_type_retired.cond_takenpipelineMispredicted and taken cond branch instructions retiredevent=0x89,period=200000,umask=0x1100br_missp_type_retired.indpipelineMispredicted ind branches that are not callsevent=0x89,period=200000,umask=200br_missp_type_retired.ind_callpipelineMispredicted indirect calls, including both register and memory indirectevent=0x89,period=200000,umask=800br_missp_type_retired.returnpipelineMispredicted return branchesevent=0x89,period=200000,umask=400cpu_clk_unhalted.buspipelineBus cycles when core is not haltedevent=0x3c,period=200000,umask=100cpu_clk_unhalted.corepipelineCore cycles when core is not haltedevent=0x3c,period=200000300cpu_clk_unhalted.core_ppipelineCore cycles when core is not 
haltedevent=0x3c,period=200000000cpu_clk_unhalted.refpipelineReference cycles when core is not haltedevent=0x0,umask=0x03,period=200000300div.arpipelineDivide operations retiredevent=0x13,period=2000000,umask=0x8100div.spipelineDivide operations executedevent=0x13,period=2000000,umask=100inst_retired.anypipelineInstructions retiredevent=0xc0,period=200000300inst_retired.any_ppipelineInstructions retired (precise event) (Must be precise)event=0xc0,period=200000300machine_clears.smcpipelineSelf-Modifying Code detectedevent=0xc3,period=200000,umask=100mul.arpipelineMultiply operations retiredevent=0x12,period=2000000,umask=0x8100mul.spipelineMultiply operations executedevent=0x12,period=2000000,umask=100reissue.anypipelineMicro-op reissues for any causeevent=3,period=200000,umask=0x7f00reissue.any.arpipelineMicro-op reissues for any cause (At Retirement)event=3,period=200000,umask=0xff00reissue.overlap_storepipelineMicro-op reissues on a store-load collisionevent=3,period=200000,umask=100reissue.overlap_store.arpipelineMicro-op reissues on a store-load collision (At Retirement)event=3,period=200000,umask=0x8100resource_stalls.div_busypipelineCycles issue is stalled due to div busyevent=0xdc,period=2000000,umask=200store_forwards.anypipelineAll store forwardsevent=2,period=200000,umask=0x8300store_forwards.goodpipelineGood store forwardsevent=2,period=200000,umask=0x8100uops_retired.anypipelineMicro-ops retiredevent=0xc2,period=2000000,umask=0x1000uops_retired.stalled_cyclespipelineCycles no micro-ops retiredevent=0xc2,period=2000000,umask=0x1000uops_retired.stallspipelinePeriods no micro-ops retiredevent=0xc2,period=2000000,umask=0x1000data_tlb_misses.dtlb_missvirtual memoryMemory accesses that missed the DTLBevent=8,period=200000,umask=700data_tlb_misses.dtlb_miss_ldvirtual memoryDTLB misses due to load operationsevent=8,period=200000,umask=500data_tlb_misses.dtlb_miss_stvirtual memoryDTLB misses due to store operationsevent=8,period=200000,umask=600data_tlb_misses.l0_dtlb_miss_ldvirtual memoryL0 DTLB misses due to load operationsevent=8,period=200000,umask=900data_tlb_misses.l0_dtlb_miss_stvirtual memoryL0 DTLB misses due to store operationsevent=8,period=200000,umask=0xa00itlb.flushvirtual memoryITLB flushesevent=0x82,period=200000,umask=400itlb.hitvirtual memoryITLB hitsevent=0x82,period=200000,umask=100itlb.missesvirtual memoryITLB misses (Must be precise)event=0x82,period=200000,umask=200mem_load_retired.dtlb_missvirtual memoryRetired loads that miss the DTLB (precise event) (Precise event)event=0xcb,period=200000,umask=400page_walks.cyclesvirtual memoryDuration of page-walks in core cyclesevent=0xc,period=2000000,umask=300page_walks.d_side_cyclesvirtual memoryDuration of D-side only page walksevent=0xc,period=2000000,umask=100page_walks.d_side_walksvirtual memoryNumber of D-side only page walksevent=0xc,period=200000,umask=100page_walks.i_side_cyclesvirtual memoryDuration of I-Side page walksevent=0xc,period=2000000,umask=200page_walks.i_side_walksvirtual memoryNumber of I-Side page walksevent=0xc,period=200000,umask=200page_walks.walksvirtual memoryNumber of page-walks executedevent=0xc,period=200000,umask=300l1d.replacementcacheL1D data line replacementsevent=0x51,period=2000003,umask=100This event counts L1D data line replacements including opportunistic replacements, and replacements that require stall-for-replace or block-for-replacel1d_pend_miss.fb_fullcacheCycles a demand request was blocked due to Fill Buffers 
unavailabilityevent=0x48,cmask=1,period=2000003,umask=200l1d_pend_miss.pendingcacheL1D miss outstandings duration in cyclesevent=0x48,period=2000003,umask=100This event counts duration of L1D miss outstanding, that is each cycle number of Fill Buffers (FB) outstanding required by Demand Reads. FB either is held by demand loads, or it is held by non-demand loads and gets hit at least once by demand. The valid outstanding interval is defined until the FB deallocation by one of the following ways: from FB allocation, if FB is allocated by demand; from the demand Hit FB, if it is allocated by hardware or software prefetch.
Note: In the L1D, a Demand Read contains cacheable or noncacheable demand loads, including ones causing cache-line splits and reads due to page walks resulted from any request typel1d_pend_miss.pending_cyclescacheCycles with L1D load Misses outstandingevent=0x48,cmask=1,period=2000003,umask=100This event counts duration of L1D miss outstanding in cyclesl1d_pend_miss.pending_cycles_anycacheCycles with L1D load Misses outstanding from any thread on physical coreevent=0x48,any=1,cmask=1,period=2000003,umask=100l2_demand_rqsts.wb_hitcacheNot rejected writebacks that hit L2 cacheevent=0x27,period=200003,umask=0x5000This event counts the number of WB requests that hit L2 cachel2_lines_in.allcacheL2 cache lines filling L2event=0xf1,period=100003,umask=700This event counts the number of L2 cache lines filling the L2. Counting does not cover rejectsl2_lines_in.ecacheL2 cache lines in E state filling L2event=0xf1,period=100003,umask=400This event counts the number of L2 cache lines in the Exclusive state filling the L2. Counting does not cover rejectsl2_lines_in.icacheL2 cache lines in I state filling L2event=0xf1,period=100003,umask=100This event counts the number of L2 cache lines in the Invalidate state filling the L2. Counting does not cover rejectsl2_lines_in.scacheL2 cache lines in S state filling L2event=0xf1,period=100003,umask=200This event counts the number of L2 cache lines in the Shared state filling the L2. Counting does not cover rejectsl2_lines_out.demand_cleancacheClean L2 cache lines evicted by demandevent=0xf2,period=100003,umask=500l2_rqsts.all_code_rdcacheL2 code requestsevent=0x24,period=200003,umask=0xe400This event counts the total number of L2 code requestsl2_rqsts.all_demand_data_rdcacheDemand Data Read requestsevent=0x24,period=200003,umask=0xe100This event counts the number of demand Data Read requests (including requests from L1D hardware prefetchers). These loads may hit or miss L2 cache. Only non rejected loads are countedl2_rqsts.all_demand_misscacheDemand requests that miss L2 cacheevent=0x24,period=200003,umask=0x2700l2_rqsts.all_demand_referencescacheDemand requests to L2 cacheevent=0x24,period=200003,umask=0xe700l2_rqsts.all_pfcacheRequests from L2 hardware prefetchersevent=0x24,period=200003,umask=0xf800This event counts the total number of requests from the L2 hardware prefetchersl2_rqsts.all_rfocacheRFO requests to L2 cacheevent=0x24,period=200003,umask=0xe200This event counts the total number of RFO (read for ownership) requests to L2 cache. L2 RFO requests include both L1D demand RFO misses as well as L1D RFO prefetchesl2_rqsts.code_rd_hitcacheL2 cache hits when fetching instructions, code readsevent=0x24,period=200003,umask=0xc400l2_rqsts.code_rd_misscacheL2 cache misses when fetching instructionsevent=0x24,period=200003,umask=0x2400l2_rqsts.demand_data_rd_hitcacheDemand Data Read requests that hit L2 cacheevent=0x24,period=200003,umask=0xc100Counts the number of demand Data Read requests, initiated by load instructions, that hit L2 cachel2_rqsts.demand_data_rd_misscacheDemand Data Read miss L2, no rejectsevent=0x24,period=200003,umask=0x2100This event counts the number of demand Data Read requests that miss L2 cache. Only not rejected loads are countedl2_rqsts.l2_pf_hitcacheL2 prefetch requests that hit L2 cacheevent=0x24,period=200003,umask=0xd000This event counts the number of requests from the L2 hardware prefetchers that hit L2 cache. 
L3 prefetch new typesl2_rqsts.l2_pf_misscacheL2 prefetch requests that miss L2 cacheevent=0x24,period=200003,umask=0x3000This event counts the number of requests from the L2 hardware prefetchers that miss L2 cachel2_rqsts.misscacheAll requests that miss L2 cacheevent=0x24,period=200003,umask=0x3f00l2_rqsts.referencescacheAll L2 requestsevent=0x24,period=200003,umask=0xff00l2_rqsts.rfo_hitcacheRFO requests that hit L2 cacheevent=0x24,period=200003,umask=0xc200l2_rqsts.rfo_misscacheRFO requests that miss L2 cacheevent=0x24,period=200003,umask=0x2200l2_trans.all_pfcacheL2 or L3 HW prefetches that access L2 cacheevent=0xf0,period=200003,umask=800This event counts L2 or L3 HW prefetches that access L2 cache including rejectsl2_trans.all_requestscacheTransactions accessing L2 pipeevent=0xf0,period=200003,umask=0x8000This event counts transactions that access the L2 pipe including snoops, pagewalks, and so onl2_trans.code_rdcacheL2 cache accesses when fetching instructionsevent=0xf0,period=200003,umask=400This event counts the number of L2 cache accesses when fetching instructionsl2_trans.demand_data_rdcacheDemand Data Read requests that access L2 cacheevent=0xf0,period=200003,umask=100This event counts Demand Data Read requests that access L2 cache, including rejectsl2_trans.l1d_wbcacheL1D writebacks that access L2 cacheevent=0xf0,period=200003,umask=0x1000This event counts L1D writebacks that access L2 cachel2_trans.l2_fillcacheL2 fill requests that access L2 cacheevent=0xf0,period=200003,umask=0x2000This event counts L2 fill requests that access L2 cachel2_trans.l2_wbcacheL2 writebacks that access L2 cacheevent=0xf0,period=200003,umask=0x4000This event counts L2 writebacks that access L2 cachel2_trans.rfocacheRFO requests that access L2 cacheevent=0xf0,period=200003,umask=200This event counts Read for Ownership (RFO) requests that access L2 cachelock_cycles.cache_lock_durationcacheCycles when L1D is lockedevent=0x63,period=2000003,umask=200This event counts the number of cycles when the L1D is locked. It is a superset of the 0x1 mask (BUS_LOCK_CLOCKS.BUS_LOCK_DURATION)longest_lat_cache.misscacheCore-originated cacheable demand requests missed L3event=0x2e,period=100003,umask=0x4100This event counts core-originated cacheable demand requests that miss the last level cache (LLC). Demand requests include loads, RFOs, and hardware prefetches from L1D, and instruction fetches from IFUlongest_lat_cache.referencecacheCore-originated cacheable demand requests that refer to L3event=0x2e,period=100003,umask=0x4f00This event counts core-originated cacheable demand requests that refer to the last level cache (LLC). Demand requests include loads, RFOs, and hardware prefetches from L1D, and instruction fetches from IFUmem_load_uops_l3_hit_retired.xsnp_hitcacheRetired load uops which data sources were L3 and cross-core snoop hits in on-pkg core cache  Supports address when precise.  Spec update: BDM100 (Precise event)event=0xd2,period=20011,umask=200This event counts retired load uops which data sources were L3 hit and a cross-core snoop hit in the on-pkg core cache  Supports address when precise.  Spec update: BDM100 (Precise event)mem_load_uops_l3_hit_retired.xsnp_hitmcacheRetired load uops which data sources were HitM responses from shared L3  Supports address when precise.  Spec update: BDM100 (Precise event)event=0xd2,period=20011,umask=400This event counts retired load uops which data sources were HitM responses from a core on same socket (shared L3)  Supports address when precise.  
Spec update: BDM100 (Precise event)mem_load_uops_l3_hit_retired.xsnp_misscacheRetired load uops which data sources were L3 hit and cross-core snoop missed in on-pkg core cache  Supports address when precise.  Spec update: BDM100 (Precise event)event=0xd2,period=20011,umask=100This event counts retired load uops which data sources were L3 Hit and a cross-core snoop missed in the on-pkg core cache  Supports address when precise.  Spec update: BDM100 (Precise event)mem_load_uops_l3_hit_retired.xsnp_nonecacheRetired load uops which data sources were hits in L3 without snoops required  Supports address when precise.  Spec update: BDM100 (Precise event)event=0xd2,period=100003,umask=800This event counts retired load uops which data sources were hits in the last-level (L3) cache without snoops required  Supports address when precise.  Spec update: BDM100 (Precise event)mem_load_uops_l3_miss_retired.local_dramcacheData from local DRAM either Snoop not needed or Snoop Miss (RspI)  Supports address when precise.  Spec update: BDE70, BDM100 (Precise event)event=0xd3,period=100007,umask=100Retired load uop whose Data Source was: local DRAM either Snoop not needed or Snoop Miss (RspI)  Supports address when precise.  Spec update: BDE70, BDM100 (Precise event)mem_load_uops_retired.hit_lfbcacheRetired load uops which data sources were load uops missed L1 but hit FB due to preceding miss to the same cache line with data not ready  Supports address when precise (Precise event)event=0xd1,period=100003,umask=0x4000This event counts retired load uops which data sources were load uops missed L1 but hit a fill buffer due to a preceding miss to the same cache line with the data not ready.
Note: Only two data-sources of L1/FB are applicable for AVX-256bit  even though the corresponding AVX load could be serviced by a deeper level in the memory hierarchy. Data source is reported for the Low-half load  Supports address when precise (Precise event)mem_load_uops_retired.l1_hitcacheRetired load uops with L1 cache hits as data sources  Supports address when precise (Precise event)event=0xd1,period=2000003,umask=100This event counts retired load uops which data sources were hits in the nearest-level (L1) cache.
Note: Only two data-sources of L1/FB are applicable for AVX-256bit  even though the corresponding AVX load could be serviced by a deeper level in the memory hierarchy. Data source is reported for the Low-half load. This event also counts SW prefetches independent of the actual data source  Supports address when precise (Precise event)mem_load_uops_retired.l1_misscacheRetired load uops misses in L1 cache as data sources  Supports address when precise (Precise event)event=0xd1,period=100003,umask=800This event counts retired load uops which data sources were misses in the nearest-level (L1) cache. Counting excludes unknown and UC data source  Supports address when precise (Precise event)mem_load_uops_retired.l2_hitcacheRetired load uops with L2 cache hits as data sources  Supports address when precise.  Spec update: BDM35 (Precise event)event=0xd1,period=100003,umask=200This event counts retired load uops which data sources were hits in the mid-level (L2) cache  Supports address when precise.  Spec update: BDM35 (Precise event)mem_load_uops_retired.l2_misscacheMiss in mid-level (L2) cache. Excludes Unknown data-source  Supports address when precise (Precise event)event=0xd1,period=50021,umask=0x1000This event counts retired load uops which data sources were misses in the mid-level (L2) cache. Counting excludes unknown and UC data source  Supports address when precise (Precise event)mem_load_uops_retired.l3_hitcacheRetired load uops which data sources were data hits in L3 without snoops required  Supports address when precise.  Spec update: BDM100 (Precise event)event=0xd1,period=50021,umask=400This event counts retired load uops which data sources were data hits in the last-level (L3) cache without snoops required  Supports address when precise.  Spec update: BDM100 (Precise event)mem_load_uops_retired.l3_misscacheMiss in last-level (L3) cache. Excludes Unknown data-source  Supports address when precise.  Spec update: BDM100, BDE70 (Precise event)event=0xd1,period=100007,umask=0x2000mem_uops_retired.all_loadscacheRetired load uops  Supports address when precise (Precise event)event=0xd0,period=2000003,umask=0x8100Counts all retired load uops. This event accounts for SW prefetch uops of PREFETCHNTA or PREFETCHT0/1/2 or PREFETCHW  Supports address when precise (Precise event)mem_uops_retired.all_storescacheRetired store uops  Supports address when precise (Precise event)event=0xd0,period=2000003,umask=0x8200Counts all retired store uops  Supports address when precise (Precise event)mem_uops_retired.lock_loadscacheRetired load uops with locked access  Supports address when precise.  Spec update: BDM35 (Precise event)event=0xd0,period=100007,umask=0x2100This event counts load uops with locked access retired to the architected path  Supports address when precise.  Spec update: BDM35 (Precise event)mem_uops_retired.split_loadscacheRetired load uops that split across a cacheline boundary  Supports address when precise (Precise event)event=0xd0,period=100003,umask=0x4100This event counts line-splitted load uops retired to the architected path. A line split is across 64B cache-line which includes a page split (4K)  Supports address when precise (Precise event)mem_uops_retired.split_storescacheRetired store uops that split across a cacheline boundary  Supports address when precise (Precise event)event=0xd0,period=100003,umask=0x4200This event counts line-splitted store uops retired to the architected path. 
A line split is across 64B cache-line which includes a page split (4K)  Supports address when precise (Precise event)mem_uops_retired.stlb_miss_loadscacheRetired load uops that miss the STLB  Supports address when precise (Precise event)event=0xd0,period=100003,umask=0x1100This event counts load uops with true STLB miss retired to the architected path. True STLB miss is an uop triggering page walk that gets completed without blocks, and later gets retired. This page walk can end up with or without a fault  Supports address when precise (Precise event)mem_uops_retired.stlb_miss_storescacheRetired store uops that miss the STLB  Supports address when precise (Precise event)event=0xd0,period=100003,umask=0x1200This event counts store uops with true STLB miss retired to the architected path. True STLB miss is an uop triggering page walk that gets completed without blocks, and later gets retired. This page walk can end up with or without a fault  Supports address when precise (Precise event)offcore_requests.all_data_rdcacheDemand and prefetch data readsevent=0xb0,period=100003,umask=800This event counts the demand and prefetch data reads. All Core Data Reads include cacheable Demands and L2 prefetchers (not L3 prefetchers). Counting also covers reads due to page walks resulted from any request typeoffcore_requests.all_requestscacheAny memory transaction that reached the SQevent=0xb0,period=100003,umask=0x8000This event counts memory transactions reached the super queue including requests initiated by the core, all L3 prefetches, page walks, and so onoffcore_requests.demand_code_rdcacheCacheable and non-cacheable code read requestsevent=0xb0,period=100003,umask=200This event counts both cacheable and non-cacheable code read requestsoffcore_requests.demand_data_rdcacheDemand Data Read requests sent to uncoreevent=0xb0,period=100003,umask=100This event counts the Demand Data Read requests sent to uncore. Use it in conjunction with OFFCORE_REQUESTS_OUTSTANDING to determine average latency in the uncoreoffcore_requests.demand_rfocacheDemand RFO requests including regular RFOs, locks, ItoMevent=0xb0,period=100003,umask=400This event counts the demand RFO (read for ownership) requests including regular RFOs, locks, ItoMoffcore_requests_buffer.sq_fullcacheOffcore requests buffer cannot take more entries for this thread coreevent=0xb2,period=2000003,umask=100This event counts the number of cases when the offcore requests buffer cannot take more entries for the core. This can happen when the superqueue does not contain eligible entries, or when L1D writeback pending FIFO requests is full.
Note: Writeback pending FIFO has six entriesoffcore_requests_outstanding.all_data_rdcacheOffcore outstanding cacheable Core Data Read transactions in SuperQueue (SQ), queue to uncore  Spec update: BDM76event=0x60,period=2000003,umask=800This event counts the number of offcore outstanding cacheable Core Data Read transactions in the super queue every cycle. A transaction is considered to be in the Offcore outstanding state between L2 miss and transaction completion sent to requestor (SQ de-allocation). See corresponding Umask under OFFCORE_REQUESTS  Spec update: BDM76offcore_requests_outstanding.cycles_with_data_rdcacheCycles when offcore outstanding cacheable Core Data Read transactions are present in SuperQueue (SQ), queue to uncore  Spec update: BDM76event=0x60,cmask=1,period=2000003,umask=800This event counts cycles when offcore outstanding cacheable Core Data Read transactions are present in the super queue. A transaction is considered to be in the Offcore outstanding state between L2 miss and transaction completion sent to requestor (SQ de-allocation). See corresponding Umask under OFFCORE_REQUESTS  Spec update: BDM76offcore_requests_outstanding.cycles_with_demand_data_rdcacheCycles when offcore outstanding Demand Data Read transactions are present in SuperQueue (SQ), queue to uncore  Spec update: BDM76event=0x60,cmask=1,period=2000003,umask=100This event counts cycles when offcore outstanding Demand Data Read transactions are present in the super queue (SQ). A transaction is considered to be in the Offcore outstanding state between L2 miss and transaction completion sent to requestor (SQ de-allocation)  Spec update: BDM76offcore_requests_outstanding.cycles_with_demand_rfocacheOffcore outstanding demand rfo reads transactions in SuperQueue (SQ), queue to uncore, every cycle  Spec update: BDM76event=0x60,cmask=1,period=2000003,umask=400This event counts the number of offcore outstanding demand rfo Reads transactions in the super queue every cycle. The Offcore outstanding state of the transaction lasts from the L2 miss until the sending transaction completion to requestor (SQ deallocation). See the corresponding Umask under OFFCORE_REQUESTS  Spec update: BDM76offcore_requests_outstanding.demand_code_rdcacheOffcore outstanding code reads transactions in SuperQueue (SQ), queue to uncore, every cycle  Spec update: BDM76event=0x60,period=2000003,umask=200This event counts the number of offcore outstanding Code Reads transactions in the super queue every cycle. The Offcore outstanding state of the transaction lasts from the L2 miss until the sending transaction completion to requestor (SQ deallocation). See the corresponding Umask under OFFCORE_REQUESTS  Spec update: BDM76offcore_requests_outstanding.demand_data_rdcacheOffcore outstanding Demand Data Read transactions in uncore queue  Spec update: BDM76event=0x60,period=2000003,umask=100This event counts the number of offcore outstanding Demand Data Read transactions in the super queue (SQ) every cycle. A transaction is considered to be in the Offcore outstanding state between L2 miss and transaction completion sent to requestor. See the corresponding Umask under OFFCORE_REQUESTS.
Note: A prefetch promoted to Demand is counted from the promotion point  Spec update: BDM76offcore_requests_outstanding.demand_data_rd_ge_6cacheCycles with at least 6 offcore outstanding Demand Data Read transactions in uncore queue  Spec update: BDM76event=0x60,cmask=6,period=2000003,umask=100offcore_requests_outstanding.demand_rfocacheOffcore outstanding RFO store transactions in SuperQueue (SQ), queue to uncore  Spec update: BDM76event=0x60,period=2000003,umask=400This event counts the number of offcore outstanding RFO (store) transactions in the super queue (SQ) every cycle. A transaction is considered to be in the Offcore outstanding state between L2 miss and transaction completion sent to requestor (SQ de-allocation). See corresponding Umask under OFFCORE_REQUESTS  Spec update: BDM76offcore_responsecacheOffcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transactionevent=0xb7,period=100003,umask=100offcore_response.all_data_rd.any_responsecacheCounts all demand & prefetch data reads have any response typeevent=0xb7,period=100003,umask=1,offcore_rsp=0x1009100offcore_response.all_data_rd.l3_hit.any_snoopcacheCounts all demand & prefetch data readsevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F803C009100offcore_response.all_data_rd.l3_hit.snoop_hitmcacheCounts all demand & prefetch data readsevent=0xb7,period=100003,umask=1,offcore_rsp=0x10003C009100offcore_response.all_data_rd.l3_hit.snoop_hit_no_fwdcacheCounts all demand & prefetch data readsevent=0xb7,period=100003,umask=1,offcore_rsp=0x4003C009100offcore_response.all_data_rd.l3_hit.snoop_misscacheCounts all demand & prefetch data readsevent=0xb7,period=100003,umask=1,offcore_rsp=0x2003C009100offcore_response.all_data_rd.l3_hit.snoop_nonecacheCounts all demand & prefetch data readsevent=0xb7,period=100003,umask=1,offcore_rsp=0x803C009100offcore_response.all_data_rd.l3_hit.snoop_not_neededcacheCounts all demand & prefetch data readsevent=0xb7,period=100003,umask=1,offcore_rsp=0x1003C009100offcore_response.all_data_rd.supplier_none.any_snoopcacheCounts all demand & prefetch data readsevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F8002009100offcore_response.all_data_rd.supplier_none.snoop_hitmcacheCounts all demand & prefetch data readsevent=0xb7,period=100003,umask=1,offcore_rsp=0x100002009100offcore_response.all_data_rd.supplier_none.snoop_hit_no_fwdcacheCounts all demand & prefetch data readsevent=0xb7,period=100003,umask=1,offcore_rsp=0x40002009100offcore_response.all_data_rd.supplier_none.snoop_misscacheCounts all demand & prefetch data readsevent=0xb7,period=100003,umask=1,offcore_rsp=0x20002009100offcore_response.all_data_rd.supplier_none.snoop_nonecacheCounts all demand & prefetch data readsevent=0xb7,period=100003,umask=1,offcore_rsp=0x8002009100offcore_response.all_data_rd.supplier_none.snoop_not_neededcacheCounts all demand & prefetch data readsevent=0xb7,period=100003,umask=1,offcore_rsp=0x10002009100offcore_response.all_pf_code_rd.any_responsecacheCounts all prefetch code reads have any response typeevent=0xb7,period=100003,umask=1,offcore_rsp=0x1024000offcore_response.all_pf_code_rd.l3_hit.any_snoopcacheCounts all prefetch code readsevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F803C024000offcore_response.all_pf_code_rd.l3_hit.snoop_hitmcacheCounts all prefetch code 
readsevent=0xb7,period=100003,umask=1,offcore_rsp=0x10003C024000offcore_response.all_pf_code_rd.l3_hit.snoop_hit_no_fwdcacheCounts all prefetch code readsevent=0xb7,period=100003,umask=1,offcore_rsp=0x4003C024000offcore_response.all_pf_code_rd.l3_hit.snoop_misscacheCounts all prefetch code readsevent=0xb7,period=100003,umask=1,offcore_rsp=0x2003C024000offcore_response.all_pf_code_rd.l3_hit.snoop_nonecacheCounts all prefetch code readsevent=0xb7,period=100003,umask=1,offcore_rsp=0x803C024000offcore_response.all_pf_code_rd.l3_hit.snoop_not_neededcacheCounts all prefetch code readsevent=0xb7,period=100003,umask=1,offcore_rsp=0x1003C024000offcore_response.all_pf_code_rd.supplier_none.any_snoopcacheCounts all prefetch code readsevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F8002024000offcore_response.all_pf_code_rd.supplier_none.snoop_hitmcacheCounts all prefetch code readsevent=0xb7,period=100003,umask=1,offcore_rsp=0x100002024000offcore_response.all_pf_code_rd.supplier_none.snoop_hit_no_fwdcacheCounts all prefetch code readsevent=0xb7,period=100003,umask=1,offcore_rsp=0x40002024000offcore_response.all_pf_code_rd.supplier_none.snoop_misscacheCounts all prefetch code readsevent=0xb7,period=100003,umask=1,offcore_rsp=0x20002024000offcore_response.all_pf_code_rd.supplier_none.snoop_nonecacheCounts all prefetch code readsevent=0xb7,period=100003,umask=1,offcore_rsp=0x8002024000offcore_response.all_pf_code_rd.supplier_none.snoop_not_neededcacheCounts all prefetch code readsevent=0xb7,period=100003,umask=1,offcore_rsp=0x10002024000offcore_response.all_pf_data_rd.any_responsecacheCounts all prefetch data reads have any response typeevent=0xb7,period=100003,umask=1,offcore_rsp=0x1009000offcore_response.all_pf_data_rd.l3_hit.any_snoopcacheCounts all prefetch data readsevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F803C009000offcore_response.all_pf_data_rd.l3_hit.snoop_hitmcacheCounts all prefetch data readsevent=0xb7,period=100003,umask=1,offcore_rsp=0x10003C009000offcore_response.all_pf_data_rd.l3_hit.snoop_hit_no_fwdcacheCounts all prefetch data readsevent=0xb7,period=100003,umask=1,offcore_rsp=0x4003C009000offcore_response.all_pf_data_rd.l3_hit.snoop_misscacheCounts all prefetch data readsevent=0xb7,period=100003,umask=1,offcore_rsp=0x2003C009000offcore_response.all_pf_data_rd.l3_hit.snoop_nonecacheCounts all prefetch data readsevent=0xb7,period=100003,umask=1,offcore_rsp=0x803C009000offcore_response.all_pf_data_rd.l3_hit.snoop_not_neededcacheCounts all prefetch data readsevent=0xb7,period=100003,umask=1,offcore_rsp=0x1003C009000offcore_response.all_pf_data_rd.supplier_none.any_snoopcacheCounts all prefetch data readsevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F8002009000offcore_response.all_pf_data_rd.supplier_none.snoop_hitmcacheCounts all prefetch data readsevent=0xb7,period=100003,umask=1,offcore_rsp=0x100002009000offcore_response.all_pf_data_rd.supplier_none.snoop_hit_no_fwdcacheCounts all prefetch data readsevent=0xb7,period=100003,umask=1,offcore_rsp=0x40002009000offcore_response.all_pf_data_rd.supplier_none.snoop_misscacheCounts all prefetch data readsevent=0xb7,period=100003,umask=1,offcore_rsp=0x20002009000offcore_response.all_pf_data_rd.supplier_none.snoop_nonecacheCounts all prefetch data readsevent=0xb7,period=100003,umask=1,offcore_rsp=0x8002009000offcore_response.all_pf_data_rd.supplier_none.snoop_not_neededcacheCounts all prefetch data readsevent=0xb7,period=100003,umask=1,offcore_rsp=0x10002009000offcore_response.all_pf_rfo.any_responsecacheCounts prefetch RFOs have any 
response typeevent=0xb7,period=100003,umask=1,offcore_rsp=0x1012000offcore_response.all_pf_rfo.l3_hit.any_snoopcacheCounts prefetch RFOsevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F803C012000offcore_response.all_pf_rfo.l3_hit.snoop_hitmcacheCounts prefetch RFOsevent=0xb7,period=100003,umask=1,offcore_rsp=0x10003C012000offcore_response.all_pf_rfo.l3_hit.snoop_hit_no_fwdcacheCounts prefetch RFOsevent=0xb7,period=100003,umask=1,offcore_rsp=0x4003C012000offcore_response.all_pf_rfo.l3_hit.snoop_misscacheCounts prefetch RFOsevent=0xb7,period=100003,umask=1,offcore_rsp=0x2003C012000offcore_response.all_pf_rfo.l3_hit.snoop_nonecacheCounts prefetch RFOsevent=0xb7,period=100003,umask=1,offcore_rsp=0x803C012000offcore_response.all_pf_rfo.l3_hit.snoop_not_neededcacheCounts prefetch RFOsevent=0xb7,period=100003,umask=1,offcore_rsp=0x1003C012000offcore_response.all_pf_rfo.supplier_none.any_snoopcacheCounts prefetch RFOsevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F8002012000offcore_response.all_pf_rfo.supplier_none.snoop_hitmcacheCounts prefetch RFOsevent=0xb7,period=100003,umask=1,offcore_rsp=0x100002012000offcore_response.all_pf_rfo.supplier_none.snoop_hit_no_fwdcacheCounts prefetch RFOsevent=0xb7,period=100003,umask=1,offcore_rsp=0x40002012000offcore_response.all_pf_rfo.supplier_none.snoop_misscacheCounts prefetch RFOsevent=0xb7,period=100003,umask=1,offcore_rsp=0x20002012000offcore_response.all_pf_rfo.supplier_none.snoop_nonecacheCounts prefetch RFOsevent=0xb7,period=100003,umask=1,offcore_rsp=0x8002012000offcore_response.all_pf_rfo.supplier_none.snoop_not_neededcacheCounts prefetch RFOsevent=0xb7,period=100003,umask=1,offcore_rsp=0x10002012000offcore_response.all_rfo.any_responsecacheCounts all demand & prefetch RFOs have any response typeevent=0xb7,period=100003,umask=1,offcore_rsp=0x1012200offcore_response.all_rfo.l3_hit.any_snoopcacheCounts all demand & prefetch RFOsevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F803C012200offcore_response.all_rfo.l3_hit.snoop_hitmcacheCounts all demand & prefetch RFOsevent=0xb7,period=100003,umask=1,offcore_rsp=0x10003C012200offcore_response.all_rfo.l3_hit.snoop_hit_no_fwdcacheCounts all demand & prefetch RFOsevent=0xb7,period=100003,umask=1,offcore_rsp=0x4003C012200offcore_response.all_rfo.l3_hit.snoop_misscacheCounts all demand & prefetch RFOsevent=0xb7,period=100003,umask=1,offcore_rsp=0x2003C012200offcore_response.all_rfo.l3_hit.snoop_nonecacheCounts all demand & prefetch RFOsevent=0xb7,period=100003,umask=1,offcore_rsp=0x803C012200offcore_response.all_rfo.l3_hit.snoop_not_neededcacheCounts all demand & prefetch RFOsevent=0xb7,period=100003,umask=1,offcore_rsp=0x1003C012200offcore_response.all_rfo.supplier_none.any_snoopcacheCounts all demand & prefetch RFOsevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F8002012200offcore_response.all_rfo.supplier_none.snoop_hitmcacheCounts all demand & prefetch RFOsevent=0xb7,period=100003,umask=1,offcore_rsp=0x100002012200offcore_response.all_rfo.supplier_none.snoop_hit_no_fwdcacheCounts all demand & prefetch RFOsevent=0xb7,period=100003,umask=1,offcore_rsp=0x40002012200offcore_response.all_rfo.supplier_none.snoop_misscacheCounts all demand & prefetch RFOsevent=0xb7,period=100003,umask=1,offcore_rsp=0x20002012200offcore_response.all_rfo.supplier_none.snoop_nonecacheCounts all demand & prefetch RFOsevent=0xb7,period=100003,umask=1,offcore_rsp=0x8002012200offcore_response.all_rfo.supplier_none.snoop_not_neededcacheCounts all demand & prefetch 
Topic: cache

offcore_response.all_rfo.supplier_none.snoop_not_needed
  event=0xb7, umask=0x1, period=100003, offcore_rsp=0x100020122
  Counts all demand & prefetch RFOs.

The offcore_response.* matrix events that follow all share the base encoding event=0xb7, umask=0x1, period=100003; the offcore_rsp value is the OR of one request-type bit, a supplier (response) field, and a snoop field:

  request-type bits:
    demand_data_rd 0x0001    demand_rfo    0x0002    demand_code_rd 0x0004
    corewb         0x0008    pf_l2_data_rd 0x0010    pf_l2_rfo      0x0020
    pf_l2_code_rd  0x0040    pf_l3_data_rd 0x0080    pf_l3_rfo      0x0100
    pf_l3_code_rd  0x0200    other         0x8000

  supplier/response fields:
    any_response 0x10000    supplier_none 0x20000    l3_hit 0x3C0000

  snoop fields:
    snoop_none 0x80000000      snoop_not_needed 0x100000000
    snoop_miss 0x200000000     snoop_hit_no_fwd 0x400000000
    snoop_hitm 0x1000000000    any_snoop        0x3F80000000

Each request type is enumerated as <request>.any_response ("Counts <requests> that have any response type"), plus <request>.l3_hit.<snoop> and <request>.supplier_none.<snoop> for each of the six snoop fields; for example, offcore_response.demand_data_rd.l3_hit.snoop_hitm uses offcore_rsp=0x10003C0001. demand_rfo appears with .any_response and the .l3_hit variants only. The request types read:

  corewb          Counts writebacks (modified to exclusive)
  demand_code_rd  Counts all demand code reads
  demand_data_rd  Counts demand data reads
  demand_rfo      Counts all demand data writes (RFOs)
  other           Counts any other requests
  pf_l2_code_rd   Counts all prefetch (that bring data to LLC only) code reads
  pf_l2_data_rd   Counts prefetch (that bring data to L2) data reads
  pf_l2_rfo       Counts all prefetch (that bring data to L2) RFOs
  pf_l3_code_rd   Counts prefetch (that bring data to LLC only) code reads
  pf_l3_data_rd   Counts all prefetch (that bring data to LLC only) data reads
  pf_l3_rfo       Counts all prefetch (that bring data to LLC only) RFOs

sq_misc.split_lock
  event=0xf4, umask=0x10, period=100003
  Split locks in SQ. This event counts the number of split locks in the super queue.
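Where these encodings are used programmatically, the offcore_rsp value is supplied alongside the event/umask pair. Below is a minimal sketch, assuming Linux perf_event_open with a raw hardware event and the usual Intel core-PMU convention that (umask << 8) | event goes in config while the offcore_rsp filter goes in config1; the choice of demand_data_rd.any_response (offcore_rsp=0x10001) is illustrative.

    /* Minimal sketch: count offcore_response.demand_data_rd.any_response
     * (event=0xb7, umask=0x1, offcore_rsp=0x10001) for this process.
     * Assumes an Intel core PMU exposing offcore_rsp through config1,
     * as the perf tool does. Error handling is minimal. */
    #include <linux/perf_event.h>
    #include <sys/ioctl.h>
    #include <sys/syscall.h>
    #include <unistd.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        struct perf_event_attr attr;
        memset(&attr, 0, sizeof(attr));
        attr.size = sizeof(attr);
        attr.type = PERF_TYPE_RAW;
        attr.config = 0x01b7;      /* (umask 0x01 << 8) | event 0xb7 */
        attr.config1 = 0x10001;    /* offcore_rsp: DMND_DATA_RD | ANY_RESPONSE */
        attr.disabled = 1;
        attr.exclude_kernel = 1;

        int fd = syscall(SYS_perf_event_open, &attr, 0, -1, -1, 0);
        if (fd < 0) { perror("perf_event_open"); return 1; }

        ioctl(fd, PERF_EVENT_IOC_RESET, 0);
        ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
        /* ... workload under measurement goes here ... */
        ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);

        uint64_t count = 0;
        read(fd, &count, sizeof(count));       /* plain read: one u64 count */
        printf("demand_data_rd.any_response: %llu\n", (unsigned long long)count);
        close(fd);
        return 0;
    }

The equivalent command-line request is typically perf stat -e cpu/event=0xb7,umask=0x1,offcore_rsp=0x10001/ -- <workload>.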
Topic: floating point

fp_arith_inst_retired.128b_packed_double
  event=0xc7, umask=0x4, period=2000003
  Number of SSE/AVX computational 128-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 2 computation operations, one for each element. Applies to SSE* and AVX* packed double precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.

fp_arith_inst_retired.128b_packed_single
  event=0xc7, umask=0x8, period=2000003
  Number of SSE/AVX computational 128-bit packed single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 4 computation operations, one for each element. Applies to SSE* and AVX* packed single precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT RSQRT RCP DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.

fp_arith_inst_retired.256b_packed_double
  event=0xc7, umask=0x10, period=2000003
  Number of SSE/AVX computational 256-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 4 computation operations, one for each element. Applies to SSE* and AVX* packed double precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.

fp_arith_inst_retired.256b_packed_single
  event=0xc7, umask=0x20, period=2000003
  Number of SSE/AVX computational 256-bit packed single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 8 computation operations, one for each element. Applies to SSE* and AVX* packed single precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT RSQRT RCP DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.

fp_arith_inst_retired.4_flops
  event=0xc7, umask=0x18, period=2000003
  Number of SSE/AVX computational 128-bit packed single precision and 256-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 2 and/or 4 computation operations, one for each element. Applies to SSE* and AVX* packed single precision and packed double precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX RCP14 RSQRT14 SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.

fp_arith_inst_retired.double
  event=0xc7, umask=0x15, period=2000006
  Number of SSE/AVX computational double precision floating-point instructions retired; some instructions will count twice as noted below. Applies to SSE* and AVX* scalar and packed double precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform multiple calculations per element.

fp_arith_inst_retired.packed
  event=0xc7, umask=0x3c, period=2000004
  Number of SSE/AVX computational packed floating-point instructions retired; some instructions will count twice as noted below. Applies to SSE* and AVX* packed double and single precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT RSQRT RCP DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform multiple calculations per element.

fp_arith_inst_retired.scalar
  event=0xc7, umask=0x3, period=2000003
  Number of SSE/AVX computational scalar single precision and double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 1 computational operation. Applies to SSE* and AVX* scalar double and single precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT RCP FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.

fp_arith_inst_retired.scalar_double
  event=0xc7, umask=0x1, period=2000003
  Number of SSE/AVX computational scalar double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 1 computational operation. Applies to SSE* and AVX* scalar double precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.

fp_arith_inst_retired.scalar_single
  event=0xc7, umask=0x2, period=2000003
  Number of SSE/AVX computational scalar single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 1 computational operation. Applies to SSE* and AVX* scalar single precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT RCP FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.

fp_arith_inst_retired.single
  event=0xc7, umask=0x2a, period=2000005
  Number of SSE/AVX computational single precision floating-point instructions retired; some instructions will count twice as noted below. Applies to SSE* and AVX* scalar and packed single precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT RSQRT RCP DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform multiple calculations per element.

fp_arith_inst_retired.vector
  event=0xc7, umask=0xfc, period=2000003
  Number of any vector retired FP arithmetic instructions.

fp_assist.any
  event=0xca, cmask=1, umask=0x1e, period=100003
  Cycles with any input/output SSE or FP assist. This event counts cycles with any input and output SSE or x87 FP assist; if an input and an output assist are detected on the same cycle, the event increments by 1.

fp_assist.simd_input
  event=0xca, umask=0x10, period=100003
  Number of SIMD FP assists due to input values: any input SSE* FP assist (invalid operation, denormal operand, dividing by zero, SNaN operand). Counting includes only cases involving penalties that required micro-code assist intervention.

fp_assist.simd_output
  event=0xca, umask=0x8, period=100003
  Number of SIMD FP assists due to output values: SSE* floating point micro-code assists (numeric overflow/underflow) when the output value (destination register) is invalid. Counting covers only cases involving penalties that require micro-code assist intervention.

fp_assist.x87_input
  event=0xca, umask=0x4, period=100003
  Number of x87 assists due to input value: x87 FP micro-code assists (invalid operation, denormal operand, SNaN operand) when one of the source operands to an FP instruction is invalid.

fp_assist.x87_output
  event=0xca, umask=0x2, period=100003
  Number of x87 assists due to output value: x87 FP micro-code assists (numeric overflow/underflow, inexact result) when the output value (destination register) is invalid.

move_elimination.simd_eliminated
  event=0x58, umask=0x2, period=1000003
  Number of SIMD move elimination candidate uops that were eliminated.

move_elimination.simd_not_eliminated
  event=0x58, umask=0x8, period=1000003
  Number of SIMD move elimination candidate uops that were not eliminated.

other_assists.avx_to_sse
  event=0xc1, umask=0x8, period=100003
  Number of transitions from AVX-256 to legacy SSE when the penalty is applicable. Spec update: BDM30.

other_assists.sse_to_avx
  event=0xc1, umask=0x10, period=100003
  Number of transitions from legacy SSE to AVX-256 when the penalty is applicable. Spec update: BDM30.

uop_dispatches_cancelled.simd_prf
  event=0xa0, umask=0x3, period=2000003
  Micro-op dispatches cancelled due to insufficient SIMD physical register file read ports. This event counts micro-operations cancelled after dispatch from the scheduler to the execution units when the total number of physical register read ports across all dispatch ports exceeds the read bandwidth of the physical register file. The SIMD_PRF subevent applies to the following instructions: VDPPS, DPPS, VPCMPESTRI, PCMPESTRI, VPCMPESTRM, PCMPESTRM, VFMADD*, VFMADDSUB*, VFMSUB*, VMSUBADD*, VFNMADD*, VFNMSUB*. See the Broadwell Optimization Guide for more information.
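Because each fp_arith_inst_retired count represents a fixed number of per-element operations (1 for scalar, 2 for 128b packed double, 4 for 128b packed single and 256b packed double, 8 for 256b packed single), retired FLOPs can be estimated as a weighted sum of these counters. A sketch under the same perf_event_open raw-encoding assumptions as above, reading the five counters as one event group; weights follow the descriptions above, and FMA's double counting is already reflected in the counts.

    /* Sketch: derive retired FP operations from the fp_arith_inst_retired
     * counters. Raw configs follow the (umask << 8) | event convention. */
    #include <linux/perf_event.h>
    #include <sys/ioctl.h>
    #include <sys/syscall.h>
    #include <unistd.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    static int open_raw(uint64_t config, int group_fd)
    {
        struct perf_event_attr attr;
        memset(&attr, 0, sizeof(attr));
        attr.size = sizeof(attr);
        attr.type = PERF_TYPE_RAW;
        attr.config = config;
        attr.disabled = (group_fd == -1);   /* only the leader starts disabled */
        attr.read_format = PERF_FORMAT_GROUP;
        return syscall(SYS_perf_event_open, &attr, 0, -1, group_fd, 0);
    }

    int main(void)
    {
        /* scalar, 128b double, 128b single, 256b double, 256b single */
        const uint64_t cfg[5]    = { 0x03c7, 0x04c7, 0x08c7, 0x10c7, 0x20c7 };
        const uint64_t weight[5] = { 1, 2, 4, 4, 8 };   /* FLOPs per count */

        int leader = open_raw(cfg[0], -1);
        for (int i = 1; i < 5; i++)
            open_raw(cfg[i], leader);

        ioctl(leader, PERF_EVENT_IOC_ENABLE, PERF_IOC_FLAG_GROUP);
        /* ... floating-point workload goes here ... */
        ioctl(leader, PERF_EVENT_IOC_DISABLE, PERF_IOC_FLAG_GROUP);

        struct { uint64_t nr; uint64_t val[5]; } data;   /* PERF_FORMAT_GROUP */
        read(leader, &data, sizeof(data));

        uint64_t flops = 0;
        for (int i = 0; i < 5 && i < (int)data.nr; i++)
            flops += weight[i] * data.val[i];
        printf("retired FP operations: %llu\n", (unsigned long long)flops);
        return 0;
    }

Dividing the total by the measured wall time gives a FLOP/s estimate.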
Topic: frontend

baclears.any
  event=0xe6, umask=0x1f, period=100003
  Counts the total number of times the front end is resteered, mainly when the BPU cannot provide a correct prediction and this is corrected by other branch handling mechanisms at the front end.

dsb2mite_switches.penalty_cycles
  event=0xab, umask=0x2, period=2000003
  Decode Stream Buffer (DSB)-to-MITE switch true penalty cycles. These cycles do not include uops routed through because of the switch itself, for example, when Instruction Decode Queue (IDQ) pre-allocation is unavailable or the IDQ is full. DSB-to-MITE switch true penalty cycles happen after the merge mux (MM) receives the DSB Sync-indication until it receives the first MITE uop. The MM is placed before the IDQ to merge uops being fed from the MITE and DSB paths; the DSB inserts the Sync-indication whenever a DSB-to-MITE switch occurs. Penalty: a DSB hit followed by a DSB miss can cost up to six cycles in which no uops are delivered to the IDQ. Most often, such switches from the DSB to the legacy pipeline cost 0-2 cycles.

icache.hit
  event=0x80, umask=0x1, period=2000003
  Number of Instruction Cache, Streaming Buffer and Victim Cache reads, both cacheable and noncacheable, including UC fetches.

icache.ifdata_stall
  event=0x80, umask=0x4, period=2000003
  Cycles where a code fetch is stalled due to an L1 instruction-cache miss: the demand fetch waits for data (wfdM104H) from L2 or iSB (opportunistic hit).

icache.misses
  event=0x80, umask=0x2, period=200003
  Number of Instruction Cache, Streaming Buffer and Victim Cache misses; counting includes UC accesses.

idq.all_dsb_cycles_4_uops
  event=0x79, cmask=4, umask=0x18, period=2000003
  Cycles the Decode Stream Buffer (DSB) is delivering 4 uops to the Instruction Decode Queue (IDQ). Counting includes uops that may bypass the IDQ.

idq.all_dsb_cycles_any_uops
  event=0x79, cmask=1, umask=0x18, period=2000003
  Cycles the DSB is delivering any uop to the IDQ. Counting includes uops that may bypass the IDQ.

idq.all_mite_cycles_4_uops
  event=0x79, cmask=4, umask=0x24, period=2000003
  Cycles MITE is delivering 4 uops to the IDQ, which also means uops are not being delivered from the DSB. Counting includes uops that may bypass the IDQ.

idq.all_mite_cycles_any_uops
  event=0x79, cmask=1, umask=0x24, period=2000003
  Cycles MITE is delivering any uop to the IDQ, which also means uops are not being delivered from the DSB. Counting includes uops that may bypass the IDQ.

idq.dsb_cycles
  event=0x79, cmask=1, umask=0x8, period=2000003
  Cycles when uops are being delivered to the IDQ from the DSB path. Counting includes uops that may bypass the IDQ.

idq.dsb_uops
  event=0x79, umask=0x8, period=2000003
  Uops delivered to the IDQ from the DSB path. Counting includes uops that may bypass the IDQ.

idq.empty
  event=0x79, umask=0x2, period=2000003
  IDQ empty cycles. This counts the number of cycles that the instruction decode queue is empty, which can indicate that the application may be bound in the front end. It does not determine whether uops are being delivered to the Alloc stage, since uops can be delivered by bypass, skipping the IDQ when it is empty.
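The DSB delivery counts above are commonly combined with the MITE and microcode-sequencer counts listed next into a DSB coverage ratio, idq.dsb_uops / (idq.dsb_uops + idq.mite_uops + idq.ms_uops); this is one common approximation, not the only definition. A sketch, again assuming perf_event_open raw encodings taken from these listings.

    /* Sketch: DSB coverage from the IDQ delivery counters.
     * Raw configs are (umask << 8) | event from the listings:
     * idq.dsb_uops=0x0879, idq.mite_uops=0x0479, idq.ms_uops=0x3079. */
    #include <linux/perf_event.h>
    #include <sys/syscall.h>
    #include <unistd.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    static int open_raw(uint64_t config)
    {
        struct perf_event_attr attr;
        memset(&attr, 0, sizeof(attr));
        attr.size = sizeof(attr);
        attr.type = PERF_TYPE_RAW;
        attr.config = config;
        attr.exclude_kernel = 1;
        return syscall(SYS_perf_event_open, &attr, 0, -1, -1, 0);
    }

    int main(void)
    {
        int dsb = open_raw(0x0879), mite = open_raw(0x0479), ms = open_raw(0x3079);

        /* ... workload here; counters run from creation since disabled=0 ... */

        uint64_t d = 0, m = 0, s = 0;
        read(dsb, &d, sizeof(d));
        read(mite, &m, sizeof(m));
        read(ms, &s, sizeof(s));
        if (d + m + s)
            printf("DSB coverage: %.1f%%\n", 100.0 * d / (double)(d + m + s));
        return 0;
    }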
idq.mite_all_uops
  event=0x79, umask=0x3c, period=2000003
  Uops delivered to the IDQ from the MITE path, which also means uops are not being delivered from the DSB. Counting includes uops that may bypass the IDQ.

idq.mite_cycles
  event=0x79, cmask=1, umask=0x4, period=2000003
  Cycles when uops are being delivered to the IDQ from the MITE path. Counting includes uops that may bypass the IDQ.

idq.mite_uops
  event=0x79, umask=0x4, period=2000003
  Uops delivered to the IDQ from the MITE path, which also means uops are not being delivered from the DSB. Counting includes uops that may bypass the IDQ.

idq.ms_cycles
  event=0x79, cmask=1, umask=0x30, period=2000003
  Cycles when uops are being delivered to the IDQ while the Microcode Sequencer (MS) is busy. Counting includes uops that may bypass the IDQ; uops may be initiated by the DSB or MITE.

idq.ms_dsb_cycles
  event=0x79, cmask=1, umask=0x10, period=2000003
  Cycles when uops initiated by the DSB are being delivered to the IDQ while the MS is busy. Counting includes uops that may bypass the IDQ.

idq.ms_dsb_occur
  event=0x79, cmask=1, edge=1, umask=0x10, period=2000003
  Deliveries to the IDQ initiated by the DSB while the MS is busy. Counting includes uops that may bypass the IDQ.

idq.ms_dsb_uops
  event=0x79, umask=0x10, period=2000003
  Uops initiated by the DSB that are being delivered to the IDQ while the MS is busy. Counting includes uops that may bypass the IDQ.

idq.ms_mite_uops
  event=0x79, umask=0x20, period=2000003
  Uops initiated by MITE and delivered to the IDQ while the MS is busy. Counting includes uops that may bypass the IDQ.

idq.ms_switches
  event=0x79, cmask=1, edge=1, umask=0x30, period=2000003
  Number of switches from the DSB (Decode Stream Buffer) or MITE (legacy decode pipeline) to the Microcode Sequencer.

idq.ms_uops
  event=0x79, umask=0x30, period=2000003
  Total uops delivered to the IDQ while the MS is busy. Counting includes uops that may bypass the IDQ; uops may be initiated by the DSB or MITE.

idq_uops_not_delivered.core
  event=0x9c, umask=0x1, period=2000003
  Uops not delivered to the Resource Allocation Table (RAT), per thread, when the back end of the machine is not stalled: the count adds 4 - x for each cycle in which the RAT is not stalled and the IDQ delivers x uops to the RAT (where x belongs to {0,1,2,3}). Counting does not cover cases when:
   a. the IDQ-RAT pipe serves the other thread;
   b. the RAT is stalled for the thread (including uop drops and clear BE conditions);
   c. the IDQ delivers four uops.
idq_uops_not_delivered.cycles_0_uops_deliv.core
  event=0x9c, cmask=4, umask=0x1, period=2000003
  Cycles per thread when 4 or more uops are not delivered to the RAT while the back end is not stalled, i.e. cycles when no uops are delivered (idq_uops_not_delivered.core = 4).

idq_uops_not_delivered.cycles_fe_was_ok
  event=0x9c, cmask=1, inv=1, umask=0x1, period=2000003
  Cycles the front end delivered 4 uops, or the RAT was stalling the front end.

idq_uops_not_delivered.cycles_le_1_uop_deliv.core
  event=0x9c, cmask=3, umask=0x1, period=2000003
  Cycles per thread when 3 or more uops are not delivered to the RAT while the back end is not stalled, i.e. cycles when at most 1 uop is delivered (idq_uops_not_delivered.core >= 3).

idq_uops_not_delivered.cycles_le_2_uop_deliv.core
  event=0x9c, cmask=2, umask=0x1, period=2000003
  Cycles with less than 2 uops delivered by the front end.

idq_uops_not_delivered.cycles_le_3_uop_deliv.core
  event=0x9c, cmask=1, umask=0x1, period=2000003
  Cycles with less than 3 uops delivered by the front end.

Topic: memory

hle_retired.aborted
  event=0xc8, umask=0x4, period=2000003
  Number of times an HLE abort was triggered (precise event).

hle_retired.aborted_misc1
  event=0xc8, umask=0x8, period=2000003
  Number of times an HLE execution aborted due to various memory events (e.g., read/write capacity and conflicts); the abort was attributed to a memory condition (see the TSX_Memory event for additional details).

hle_retired.aborted_misc2
  event=0xc8, umask=0x10, period=2000003
  Number of times an HLE execution aborted due to uncommon conditions; the TSX watchdog signaled the abort.

hle_retired.aborted_misc3
  event=0xc8, umask=0x20, period=2000003
  Number of times an HLE execution aborted due to HLE-unfriendly instructions; a disallowed operation caused the abort.

hle_retired.aborted_misc4
  event=0xc8, umask=0x40, period=2000003
  Number of times an HLE execution aborted due to incompatible memory type; HLE caused a fault.

hle_retired.aborted_misc5
  event=0xc8, umask=0x80, period=2000003
  Number of times an HLE execution aborted due to none of the previous 4 categories (e.g. interrupts).

hle_retired.commit
  event=0xc8, umask=0x2, period=2000003
  Number of times HLE commit succeeded.

hle_retired.start
  event=0xc8, umask=0x1, period=2000003
  Number of times we entered an HLE region; does not count nested transactions.
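The hle_retired counts above only move when code actually elides locks. A toy sketch of an HLE-elided spinlock using GCC's documented __ATOMIC_HLE_ACQUIRE / __ATOMIC_HLE_RELEASE bits (compile with -mhle, run on an HLE-capable part); the lock variable and critical section are illustrative only.

    /* Sketch: a spinlock whose critical section HLE may elide, giving the
     * hle_retired.start / .commit / .aborted events something to count. */
    #include <immintrin.h>   /* _mm_pause */

    static int lock;

    static void hle_lock(void)
    {
        /* XACQUIRE-prefixed exchange: the CPU may start a transactional
         * (elided) execution of the critical section instead of writing. */
        while (__atomic_exchange_n(&lock, 1,
                                   __ATOMIC_ACQUIRE | __ATOMIC_HLE_ACQUIRE))
            _mm_pause();     /* spin until the lock looks free */
    }

    static void hle_unlock(void)
    {
        /* XRELEASE-prefixed store: commits the elided region on success. */
        __atomic_store_n(&lock, 0, __ATOMIC_RELEASE | __ATOMIC_HLE_RELEASE);
    }

    int main(void)
    {
        hle_lock();
        /* critical section: runs transactionally when elision succeeds */
        hle_unlock();
        return 0;
    }

hle_retired.start and hle_retired.commit (encodings above) then show how often elision was attempted and how often it succeeded.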
machine_clears.memory_ordering
  event=0xc3, umask=0x2, period=100003
  Counts the number of memory ordering machine clears detected. Memory ordering machine clears can result from one of the following:
  1. memory disambiguation,
  2. external snoop, or
  3. cross SMT-HW-thread snoop (stores) hitting a load buffer.

mem_trans_retired.load_latency_gt_<N>
  Randomly selected loads with latency value above N cycles. Supports address when precise. Spec update: BDM100, BDM35 (must be precise). All variants use event=0xcd, umask=0x1, with the threshold in ldlat:
    gt_4     ldlat=0x4    period=100003
    gt_8     ldlat=0x8    period=50021
    gt_16    ldlat=0x10   period=20011
    gt_32    ldlat=0x20   period=100007
    gt_64    ldlat=0x40   period=2003
    gt_128   ldlat=0x80   period=1009
    gt_256   ldlat=0x100  period=503
    gt_512   ldlat=0x200  period=101
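A sketch of how such a latency-threshold event is typically programmed, assuming the perf tool's convention of passing the ldlat threshold through config1 and that the event requires PEBS (precise_ip > 0); consuming the samples from the mmap ring buffer is omitted here.

    /* Sketch: configure latency-above-threshold load sampling in the style
     * of mem_trans_retired.load_latency_gt_64 above. */
    #include <linux/perf_event.h>
    #include <sys/syscall.h>
    #include <unistd.h>
    #include <string.h>
    #include <stdio.h>

    int main(void)
    {
        struct perf_event_attr attr;
        memset(&attr, 0, sizeof(attr));
        attr.size = sizeof(attr);
        attr.type = PERF_TYPE_RAW;
        attr.config = 0x01cd;          /* event 0xcd, umask 0x1 */
        attr.config1 = 64;             /* ldlat: minimum latency, in cycles */
        attr.sample_period = 2003;
        attr.sample_type = PERF_SAMPLE_IP | PERF_SAMPLE_ADDR | PERF_SAMPLE_WEIGHT;
        attr.precise_ip = 2;           /* PEBS; this event must be precise */
        attr.disabled = 1;
        attr.exclude_kernel = 1;

        int fd = syscall(SYS_perf_event_open, &attr, 0, -1, -1, 0);
        if (fd < 0) { perror("perf_event_open"); return 1; }
        printf("load-latency event configured (fd=%d)\n", fd);
        /* map the ring buffer with mmap() and enable the fd to collect
         * samples carrying the load address and its measured latency */
        close(fd);
        return 0;
    }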
misalign_mem_ref.loads
  event=0x5, umask=0x1, period=2000003
  Speculative cache-line-split load uops dispatched to the L1 cache.

misalign_mem_ref.stores
  event=0x5, umask=0x2, period=2000003
  Speculative cache-line-split store-address (STA) uops dispatched to the L1 cache.

The offcore_response.* matrix continues here with the L3-miss and non-DRAM snoop variants, under the same base encoding (event=0xb7, umask=0x1, period=100003) and bit scheme as the cache section above, with these additional field values:

  supplier/response fields:
    l3_miss 0x3C000000    l3_miss_local_dram 0x04000000

  snoop field:
    snoop_non_dram 0x2000000000

  combined request types:
    all_data_rd    0x0091  Counts all demand & prefetch data reads
    all_pf_code_rd 0x0240  Counts all prefetch code reads
    all_pf_data_rd 0x0090  Counts all prefetch data reads
    all_pf_rfo     0x0120  Counts prefetch RFOs
    all_rfo        0x0122  Counts all demand & prefetch RFOs

Each of all_data_rd, all_pf_code_rd, all_pf_data_rd, all_pf_rfo, all_rfo, corewb, demand_code_rd, demand_data_rd and other is enumerated as .l3_hit.snoop_non_dram, .l3_miss.{snoop_hit_no_fwd, snoop_miss, snoop_none, snoop_not_needed}, .l3_miss_local_dram.{any_snoop, snoop_hitm, snoop_hit_no_fwd, snoop_miss, snoop_none, snoop_non_dram, snoop_not_needed} and .supplier_none.snoop_non_dram; for example, offcore_response.all_data_rd.l3_miss_local_dram.any_snoop uses offcore_rsp=0x3F84000091. demand_rfo appears only with .l3_hit.snoop_non_dram, the four .l3_miss variants and .l3_miss_local_dram.any_snoop. The listing continues with the pf_l2_code_rd variants ("Counts all prefetch (that bring data to LLC only) code reads").
readsevent=0xb7,period=100003,umask=1,offcore_rsp=0x10400004000offcore_response.pf_l2_code_rd.supplier_none.snoop_non_drammemoryCounts all prefetch (that bring data to LLC only) code readsevent=0xb7,period=100003,umask=1,offcore_rsp=0x200002004000offcore_response.pf_l2_data_rd.l3_hit.snoop_non_drammemoryCounts prefetch (that bring data to L2) data readsevent=0xb7,period=100003,umask=1,offcore_rsp=0x20003C001000offcore_response.pf_l2_data_rd.l3_miss.snoop_hit_no_fwdmemoryCounts prefetch (that bring data to L2) data readsevent=0xb7,period=100003,umask=1,offcore_rsp=0x43C00001000offcore_response.pf_l2_data_rd.l3_miss.snoop_missmemoryCounts prefetch (that bring data to L2) data readsevent=0xb7,period=100003,umask=1,offcore_rsp=0x23C00001000offcore_response.pf_l2_data_rd.l3_miss.snoop_nonememoryCounts prefetch (that bring data to L2) data readsevent=0xb7,period=100003,umask=1,offcore_rsp=0xBC00001000offcore_response.pf_l2_data_rd.l3_miss.snoop_not_neededmemoryCounts prefetch (that bring data to L2) data readsevent=0xb7,period=100003,umask=1,offcore_rsp=0x13C00001000offcore_response.pf_l2_data_rd.l3_miss_local_dram.any_snoopmemoryCounts prefetch (that bring data to L2) data readsevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F8400001000offcore_response.pf_l2_data_rd.l3_miss_local_dram.snoop_hitmmemoryCounts prefetch (that bring data to L2) data readsevent=0xb7,period=100003,umask=1,offcore_rsp=0x100400001000offcore_response.pf_l2_data_rd.l3_miss_local_dram.snoop_hit_no_fwdmemoryCounts prefetch (that bring data to L2) data readsevent=0xb7,period=100003,umask=1,offcore_rsp=0x40400001000offcore_response.pf_l2_data_rd.l3_miss_local_dram.snoop_missmemoryCounts prefetch (that bring data to L2) data readsevent=0xb7,period=100003,umask=1,offcore_rsp=0x20400001000offcore_response.pf_l2_data_rd.l3_miss_local_dram.snoop_nonememoryCounts prefetch (that bring data to L2) data readsevent=0xb7,period=100003,umask=1,offcore_rsp=0x8400001000offcore_response.pf_l2_data_rd.l3_miss_local_dram.snoop_non_drammemoryCounts prefetch (that bring data to L2) data readsevent=0xb7,period=100003,umask=1,offcore_rsp=0x200400001000offcore_response.pf_l2_data_rd.l3_miss_local_dram.snoop_not_neededmemoryCounts prefetch (that bring data to L2) data readsevent=0xb7,period=100003,umask=1,offcore_rsp=0x10400001000offcore_response.pf_l2_data_rd.supplier_none.snoop_non_drammemoryCounts prefetch (that bring data to L2) data readsevent=0xb7,period=100003,umask=1,offcore_rsp=0x200002001000offcore_response.pf_l2_rfo.l3_hit.snoop_non_drammemoryCounts all prefetch (that bring data to L2) RFOsevent=0xb7,period=100003,umask=1,offcore_rsp=0x20003C002000offcore_response.pf_l2_rfo.l3_miss.snoop_hit_no_fwdmemoryCounts all prefetch (that bring data to L2) RFOsevent=0xb7,period=100003,umask=1,offcore_rsp=0x43C00002000offcore_response.pf_l2_rfo.l3_miss.snoop_missmemoryCounts all prefetch (that bring data to L2) RFOsevent=0xb7,period=100003,umask=1,offcore_rsp=0x23C00002000offcore_response.pf_l2_rfo.l3_miss.snoop_nonememoryCounts all prefetch (that bring data to L2) RFOsevent=0xb7,period=100003,umask=1,offcore_rsp=0xBC00002000offcore_response.pf_l2_rfo.l3_miss.snoop_not_neededmemoryCounts all prefetch (that bring data to L2) RFOsevent=0xb7,period=100003,umask=1,offcore_rsp=0x13C00002000offcore_response.pf_l2_rfo.l3_miss_local_dram.any_snoopmemoryCounts all prefetch (that bring data to L2) RFOsevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F8400002000offcore_response.pf_l2_rfo.l3_miss_local_dram.snoop_hitmmemoryCounts all prefetch (that bring data 
to L2) RFOsevent=0xb7,period=100003,umask=1,offcore_rsp=0x100400002000offcore_response.pf_l2_rfo.l3_miss_local_dram.snoop_hit_no_fwdmemoryCounts all prefetch (that bring data to L2) RFOsevent=0xb7,period=100003,umask=1,offcore_rsp=0x40400002000offcore_response.pf_l2_rfo.l3_miss_local_dram.snoop_missmemoryCounts all prefetch (that bring data to L2) RFOsevent=0xb7,period=100003,umask=1,offcore_rsp=0x20400002000offcore_response.pf_l2_rfo.l3_miss_local_dram.snoop_nonememoryCounts all prefetch (that bring data to L2) RFOsevent=0xb7,period=100003,umask=1,offcore_rsp=0x8400002000offcore_response.pf_l2_rfo.l3_miss_local_dram.snoop_non_drammemoryCounts all prefetch (that bring data to L2) RFOsevent=0xb7,period=100003,umask=1,offcore_rsp=0x200400002000offcore_response.pf_l2_rfo.l3_miss_local_dram.snoop_not_neededmemoryCounts all prefetch (that bring data to L2) RFOsevent=0xb7,period=100003,umask=1,offcore_rsp=0x10400002000offcore_response.pf_l2_rfo.supplier_none.snoop_non_drammemoryCounts all prefetch (that bring data to L2) RFOsevent=0xb7,period=100003,umask=1,offcore_rsp=0x200002002000offcore_response.pf_l3_code_rd.l3_hit.snoop_non_drammemoryCounts prefetch (that bring data to LLC only) code readsevent=0xb7,period=100003,umask=1,offcore_rsp=0x20003C020000offcore_response.pf_l3_code_rd.l3_miss.snoop_hit_no_fwdmemoryCounts prefetch (that bring data to LLC only) code readsevent=0xb7,period=100003,umask=1,offcore_rsp=0x43C00020000offcore_response.pf_l3_code_rd.l3_miss.snoop_missmemoryCounts prefetch (that bring data to LLC only) code readsevent=0xb7,period=100003,umask=1,offcore_rsp=0x23C00020000offcore_response.pf_l3_code_rd.l3_miss.snoop_nonememoryCounts prefetch (that bring data to LLC only) code readsevent=0xb7,period=100003,umask=1,offcore_rsp=0xBC00020000offcore_response.pf_l3_code_rd.l3_miss.snoop_not_neededmemoryCounts prefetch (that bring data to LLC only) code readsevent=0xb7,period=100003,umask=1,offcore_rsp=0x13C00020000offcore_response.pf_l3_code_rd.l3_miss_local_dram.any_snoopmemoryCounts prefetch (that bring data to LLC only) code readsevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F8400020000offcore_response.pf_l3_code_rd.l3_miss_local_dram.snoop_hitmmemoryCounts prefetch (that bring data to LLC only) code readsevent=0xb7,period=100003,umask=1,offcore_rsp=0x100400020000offcore_response.pf_l3_code_rd.l3_miss_local_dram.snoop_hit_no_fwdmemoryCounts prefetch (that bring data to LLC only) code readsevent=0xb7,period=100003,umask=1,offcore_rsp=0x40400020000offcore_response.pf_l3_code_rd.l3_miss_local_dram.snoop_missmemoryCounts prefetch (that bring data to LLC only) code readsevent=0xb7,period=100003,umask=1,offcore_rsp=0x20400020000offcore_response.pf_l3_code_rd.l3_miss_local_dram.snoop_nonememoryCounts prefetch (that bring data to LLC only) code readsevent=0xb7,period=100003,umask=1,offcore_rsp=0x8400020000offcore_response.pf_l3_code_rd.l3_miss_local_dram.snoop_non_drammemoryCounts prefetch (that bring data to LLC only) code readsevent=0xb7,period=100003,umask=1,offcore_rsp=0x200400020000offcore_response.pf_l3_code_rd.l3_miss_local_dram.snoop_not_neededmemoryCounts prefetch (that bring data to LLC only) code readsevent=0xb7,period=100003,umask=1,offcore_rsp=0x10400020000offcore_response.pf_l3_code_rd.supplier_none.snoop_non_drammemoryCounts prefetch (that bring data to LLC only) code readsevent=0xb7,period=100003,umask=1,offcore_rsp=0x200002020000offcore_response.pf_l3_data_rd.l3_hit.snoop_non_drammemoryCounts all prefetch (that bring data to LLC only) data 
readsevent=0xb7,period=100003,umask=1,offcore_rsp=0x20003C008000offcore_response.pf_l3_data_rd.l3_miss.snoop_hit_no_fwdmemoryCounts all prefetch (that bring data to LLC only) data readsevent=0xb7,period=100003,umask=1,offcore_rsp=0x43C00008000offcore_response.pf_l3_data_rd.l3_miss.snoop_missmemoryCounts all prefetch (that bring data to LLC only) data readsevent=0xb7,period=100003,umask=1,offcore_rsp=0x23C00008000offcore_response.pf_l3_data_rd.l3_miss.snoop_nonememoryCounts all prefetch (that bring data to LLC only) data readsevent=0xb7,period=100003,umask=1,offcore_rsp=0xBC00008000offcore_response.pf_l3_data_rd.l3_miss.snoop_not_neededmemoryCounts all prefetch (that bring data to LLC only) data readsevent=0xb7,period=100003,umask=1,offcore_rsp=0x13C00008000offcore_response.pf_l3_data_rd.l3_miss_local_dram.any_snoopmemoryCounts all prefetch (that bring data to LLC only) data readsevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F8400008000offcore_response.pf_l3_data_rd.l3_miss_local_dram.snoop_hitmmemoryCounts all prefetch (that bring data to LLC only) data readsevent=0xb7,period=100003,umask=1,offcore_rsp=0x100400008000offcore_response.pf_l3_data_rd.l3_miss_local_dram.snoop_hit_no_fwdmemoryCounts all prefetch (that bring data to LLC only) data readsevent=0xb7,period=100003,umask=1,offcore_rsp=0x40400008000offcore_response.pf_l3_data_rd.l3_miss_local_dram.snoop_missmemoryCounts all prefetch (that bring data to LLC only) data readsevent=0xb7,period=100003,umask=1,offcore_rsp=0x20400008000offcore_response.pf_l3_data_rd.l3_miss_local_dram.snoop_nonememoryCounts all prefetch (that bring data to LLC only) data readsevent=0xb7,period=100003,umask=1,offcore_rsp=0x8400008000offcore_response.pf_l3_data_rd.l3_miss_local_dram.snoop_non_drammemoryCounts all prefetch (that bring data to LLC only) data readsevent=0xb7,period=100003,umask=1,offcore_rsp=0x200400008000offcore_response.pf_l3_data_rd.l3_miss_local_dram.snoop_not_neededmemoryCounts all prefetch (that bring data to LLC only) data readsevent=0xb7,period=100003,umask=1,offcore_rsp=0x10400008000offcore_response.pf_l3_data_rd.supplier_none.snoop_non_drammemoryCounts all prefetch (that bring data to LLC only) data readsevent=0xb7,period=100003,umask=1,offcore_rsp=0x200002008000offcore_response.pf_l3_rfo.l3_hit.snoop_non_drammemoryCounts all prefetch (that bring data to LLC only) RFOsevent=0xb7,period=100003,umask=1,offcore_rsp=0x20003C010000offcore_response.pf_l3_rfo.l3_miss.snoop_hit_no_fwdmemoryCounts all prefetch (that bring data to LLC only) RFOsevent=0xb7,period=100003,umask=1,offcore_rsp=0x43C00010000offcore_response.pf_l3_rfo.l3_miss.snoop_missmemoryCounts all prefetch (that bring data to LLC only) RFOsevent=0xb7,period=100003,umask=1,offcore_rsp=0x23C00010000offcore_response.pf_l3_rfo.l3_miss.snoop_nonememoryCounts all prefetch (that bring data to LLC only) RFOsevent=0xb7,period=100003,umask=1,offcore_rsp=0xBC00010000offcore_response.pf_l3_rfo.l3_miss.snoop_not_neededmemoryCounts all prefetch (that bring data to LLC only) RFOsevent=0xb7,period=100003,umask=1,offcore_rsp=0x13C00010000offcore_response.pf_l3_rfo.l3_miss_local_dram.any_snoopmemoryCounts all prefetch (that bring data to LLC only) RFOsevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F8400010000offcore_response.pf_l3_rfo.l3_miss_local_dram.snoop_hitmmemoryCounts all prefetch (that bring data to LLC only) RFOsevent=0xb7,period=100003,umask=1,offcore_rsp=0x100400010000offcore_response.pf_l3_rfo.l3_miss_local_dram.snoop_hit_no_fwdmemoryCounts all prefetch (that bring data to LLC only) 
RFOsevent=0xb7,period=100003,umask=1,offcore_rsp=0x40400010000offcore_response.pf_l3_rfo.l3_miss_local_dram.snoop_missmemoryCounts all prefetch (that bring data to LLC only) RFOsevent=0xb7,period=100003,umask=1,offcore_rsp=0x20400010000offcore_response.pf_l3_rfo.l3_miss_local_dram.snoop_nonememoryCounts all prefetch (that bring data to LLC only) RFOsevent=0xb7,period=100003,umask=1,offcore_rsp=0x8400010000offcore_response.pf_l3_rfo.l3_miss_local_dram.snoop_non_drammemoryCounts all prefetch (that bring data to LLC only) RFOsevent=0xb7,period=100003,umask=1,offcore_rsp=0x200400010000offcore_response.pf_l3_rfo.l3_miss_local_dram.snoop_not_neededmemoryCounts all prefetch (that bring data to LLC only) RFOsevent=0xb7,period=100003,umask=1,offcore_rsp=0x10400010000offcore_response.pf_l3_rfo.supplier_none.snoop_non_drammemoryCounts all prefetch (that bring data to LLC only) RFOsevent=0xb7,period=100003,umask=1,offcore_rsp=0x200002010000rtm_retired.abortedmemoryNumber of times RTM abort was triggered (Must be precise)event=0xc9,period=2000003,umask=400Number of times RTM abort was triggered  (Must be precise)rtm_retired.aborted_misc1memoryNumber of times an RTM execution aborted due to various memory events (e.g. read/write capacity and conflicts)event=0xc9,period=2000003,umask=800Number of times an RTM abort was attributed to a Memory condition (See TSX_Memory event for additional details)rtm_retired.aborted_misc2memoryNumber of times an RTM execution aborted due to various memory events (e.g., read/write capacity and conflicts)event=0xc9,period=2000003,umask=0x1000Number of times the TSX watchdog signaled an RTM abortrtm_retired.aborted_misc3memoryNumber of times an RTM execution aborted due to HLE-unfriendly instructionsevent=0xc9,period=2000003,umask=0x2000Number of times a disallowed operation caused an RTM abortrtm_retired.aborted_misc4memoryNumber of times an RTM execution aborted due to incompatible memory typeevent=0xc9,period=2000003,umask=0x4000Number of times a RTM caused a faultrtm_retired.aborted_misc5memoryNumber of times an RTM execution aborted due to none of the previous 4 categories (e.g. interrupt)event=0xc9,period=2000003,umask=0x8000Number of times RTM aborted and was not due to the abort conditions in subevents 3-6rtm_retired.commitmemoryNumber of times RTM commit succeededevent=0xc9,period=2000003,umask=200Number of times RTM commit succeededrtm_retired.startmemoryNumber of times we entered an RTM region; does not count nested transactionsevent=0xc9,period=2000003,umask=100Number of times we entered an RTM region
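All of the entries above share one flattened encoding syntax. Below is a minimal Python sketch of how such an encoding string can be split into integer fields for further processing; parse_encoding is a hypothetical helper name, not an API of this module:

# Minimal sketch: split a perf-style event encoding string such as
#   "event=0xb7,period=100003,umask=1,offcore_rsp=0x3F8400012000"
# into a dict of integer fields. parse_encoding is a hypothetical
# helper, not part of the python-perf module.
def parse_encoding(encoding: str) -> dict:
    fields = {}
    for term in encoding.split(","):
        key, _, value = term.partition("=")
        # Values may be decimal or hex; int(x, 0) accepts both.
        fields[key.strip()] = int(value, 0)
    return fields

if __name__ == "__main__":
    enc = "event=0xb7,period=100003,umask=1,offcore_rsp=0x3F8400012000"
    print(parse_encoding(enc))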
tx_exec.misc1 (memory): Counts the number of times a class of instructions that may cause a transactional abort was executed. Since this is a count of execution, it may not always cause a transactional abort. event=0x5d,period=2000003,umask=1
tx_exec.misc2 (memory): Counts the number of times a class of instructions (e.g. vzeroupper) that may cause a transactional abort was executed inside a transactional region. event=0x5d,period=2000003,umask=2
    Note: Unfriendly TSX abort triggered by a vzeroupper instruction.
tx_exec.misc3 (memory): Counts the number of times an instruction execution caused the supported transactional nest count to be exceeded. event=0x5d,period=2000003,umask=4
    Note: Unfriendly TSX abort triggered by a nest count that is too deep.
tx_exec.misc4 (memory): Counts the number of times an XBEGIN instruction was executed inside an HLE transactional region. event=0x5d,period=2000003,umask=8
    Note: RTM region detected inside HLE.
tx_exec.misc5 (memory): Counts the number of times an HLE XACQUIRE instruction was executed inside an RTM transactional region. event=0x5d,period=2000003,umask=0x10
tx_mem.abort_capacity_write (memory): Number of times a TSX abort was triggered due to an evicted line caused by a transaction overflow. event=0x54,period=2000003,umask=2
tx_mem.abort_conflict (memory): Number of times a TSX line had a cache conflict. event=0x54,period=2000003,umask=1
tx_mem.abort_hle_elision_buffer_mismatch (memory): Number of times a TSX abort was triggered due to release/commit but with a data and address mismatch. event=0x54,period=2000003,umask=0x10
tx_mem.abort_hle_elision_buffer_not_empty (memory): Number of times a TSX abort was triggered due to commit but with the Lock Buffer not empty. event=0x54,period=2000003,umask=8
tx_mem.abort_hle_elision_buffer_unsupported_alignment (memory): Number of times a TSX abort was triggered due to attempting an unsupported alignment from the Lock Buffer. event=0x54,period=2000003,umask=0x20
tx_mem.abort_hle_store_to_elided_lock (memory): Number of times a TSX abort was triggered due to a non-release/commit store to a lock. event=0x54,period=2000003,umask=4
tx_mem.hle_elision_buffer_full (memory): Number of times we could not allocate the Lock Buffer. event=0x54,period=2000003,umask=0x40
cpl_cycles.ring0 (other): Unhalted core cycles when the thread is in ring 0. event=0x5c,period=2000003,umask=1
    Note: Counts the unhalted core cycles during which the thread is in the ring 0 privileged mode.
cpl_cycles.ring0_trans (other): Number of intervals between processor halts while the thread is in ring 0. event=0x5c,cmask=1,edge=1,period=100007,umask=1
    Note: Counts when there is a transition from ring 1, 2, or 3 to ring 0.
cpl_cycles.ring123 (other): Unhalted core cycles when the thread is in ring 1, 2, or 3. event=0x5c,period=2000003,umask=2
lock_cycles.split_lock_uc_lock_duration (other): Cycles when L1 and L2 are locked due to a UC or split lock. event=0x63,period=2000003,umask=1
    Note: Counts cycles in which the L1 and L2 are locked due to a UC lock or split lock. A lock is asserted in case of a locked memory access, due to noncacheable memory, a locked operation spanning two cache lines, or a page walk from a noncacheable page table. L1D and L2 locks have a very high performance penalty and it is highly recommended to avoid such accesses.
arith.fpu_div_active (pipeline): Cycles when the divider is busy executing divide operations. event=0x14,period=2000003,umask=1
    Note: Counts the number of divide operations executed; use edge-detect and a cmask value of 1 on ARITH.FPU_DIV_ACTIVE to get the number of divide operations executed.
br_inst_exec.all_branches (pipeline): Speculative and retired branches, both taken and not taken. event=0x88,period=200003,umask=0xff
br_inst_exec.all_conditional (pipeline): Speculative and retired macro-conditional branches, taken and not taken. event=0x88,period=200003,umask=0xc1
br_inst_exec.all_direct_jmp (pipeline): Speculative and retired macro-unconditional branches excluding calls and indirects, taken and not taken. event=0x88,period=200003,umask=0xc2
br_inst_exec.all_direct_near_call (pipeline): Speculative and retired direct near calls, taken and not taken. event=0x88,period=200003,umask=0xd0
br_inst_exec.all_indirect_jump_non_call_ret (pipeline): Speculative and retired indirect branches excluding calls and returns, taken and not taken. event=0x88,period=200003,umask=0xc4
br_inst_exec.all_indirect_near_return (pipeline): Speculative and retired indirect branches that have a return mnemonic, taken and not taken. event=0x88,period=200003,umask=0xc8
br_inst_exec.nontaken_conditional (pipeline): Not taken macro-conditional branches. event=0x88,period=200003,umask=0x41
br_inst_exec.taken_conditional (pipeline): Taken speculative and retired macro-conditional branches. event=0x88,period=200003,umask=0x81
br_inst_exec.taken_direct_jump (pipeline): Taken speculative and retired macro-conditional branch instructions excluding calls and indirects. event=0x88,period=200003,umask=0x82
br_inst_exec.taken_direct_near_call (pipeline): Taken speculative and retired direct near calls. event=0x88,period=200003,umask=0x90
br_inst_exec.taken_indirect_jump_non_call_ret (pipeline): Taken speculative and retired indirect branches excluding calls and returns. event=0x88,period=200003,umask=0x84
br_inst_exec.taken_indirect_near_call (pipeline): Taken speculative and retired indirect calls, including both register and memory indirect. event=0x88,period=200003,umask=0xa0
br_inst_exec.taken_indirect_near_return (pipeline): Taken speculative and retired indirect branches with a return mnemonic. event=0x88,period=200003,umask=0x88
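The event=/umask=/cmask=/edge=/inv=/any= fields above map onto Intel's PERFEVTSEL register layout when building a raw config value (event select in bits 0-7, umask in bits 8-15, edge in bit 18, any-thread in bit 21, invert in bit 23, cmask in bits 24-31). A small sketch under that assumption; raw_config is my helper name, not an API from this module:

# Sketch: pack the fields used in the entries above into the raw config
# value accepted by perf_event_open() with PERF_TYPE_RAW (or by
# "perf stat -e cpu/config=0x.../"). Bit positions follow the Intel
# PERFEVTSEL MSR layout described in the lead-in.
def raw_config(event: int, umask: int = 0, cmask: int = 0,
               edge: int = 0, inv: int = 0, any_thread: int = 0) -> int:
    return (event | (umask << 8) | (edge << 18) | (any_thread << 21)
            | (inv << 23) | (cmask << 24))

# br_inst_exec.all_branches (event=0x88, umask=0xff) -> 0xff88
assert raw_config(0x88, 0xff) == 0xff88
# cpl_cycles.ring0_trans (event=0x5c, umask=1, cmask=1, edge=1)
print(hex(raw_config(0x5c, 0x01, cmask=1, edge=1)))  # 0x104015c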
br_inst_retired.all_branches (pipeline): All (macro) branch instructions retired. event=0xc4,period=400009
br_inst_retired.all_branches_pebs (pipeline): All (macro) branch instructions retired (Precise Event - PEBS). Spec update: BDW98 (Must be precise). event=0xc4,period=400009,umask=4
    Note: This is a precise version of BR_INST_RETIRED.ALL_BRANCHES that counts all (macro) branch instructions retired.
br_inst_retired.conditional (pipeline): Conditional branch instructions retired (Precise event). event=0xc4,period=400009,umask=1
br_inst_retired.far_branch (pipeline): Far branch instructions retired. Spec update: BDW98. event=0xc4,period=100007,umask=0x40
br_inst_retired.near_call (pipeline): Direct and indirect near call instructions retired (Precise event). event=0xc4,period=100007,umask=2
br_inst_retired.near_call_r3 (pipeline): Direct and indirect macro near call instructions retired, captured in ring 3 (Precise event). event=0xc4,period=100007,umask=2
br_inst_retired.near_return (pipeline): Return instructions retired (Precise event). event=0xc4,period=100007,umask=8
br_inst_retired.near_taken (pipeline): Taken branch instructions retired (Precise event). event=0xc4,period=400009,umask=0x20
br_inst_retired.not_taken (pipeline): Not taken branch instructions retired. event=0xc4,period=400009,umask=0x10
br_misp_exec.all_branches (pipeline): Speculative and retired mispredicted branches, both taken and not taken. event=0x89,period=200003,umask=0xff
br_misp_exec.all_conditional (pipeline): Speculative and retired mispredicted macro-conditional branches, taken and not taken. event=0x89,period=200003,umask=0xc1
br_misp_exec.all_indirect_jump_non_call_ret (pipeline): Mispredicted indirect branches excluding calls and returns, taken and not taken. event=0x89,period=200003,umask=0xc4
br_misp_exec.indirect (pipeline): Speculative mispredicted indirect branches. event=0x89,period=200003,umask=0xe4
    Note: Counts speculatively mispredicted indirect branches at execution time; counts indirect near CALL or JMP instructions (RET excluded).
br_misp_exec.nontaken_conditional (pipeline): Not taken speculative and retired mispredicted macro-conditional branches. event=0x89,period=200003,umask=0x41
br_misp_exec.taken_conditional (pipeline): Taken speculative and retired mispredicted macro-conditional branches. event=0x89,period=200003,umask=0x81
br_misp_exec.taken_indirect_jump_non_call_ret (pipeline): Taken speculative and retired mispredicted indirect branches excluding calls and returns. event=0x89,period=200003,umask=0x84
br_misp_exec.taken_indirect_near_call (pipeline): Taken speculative and retired mispredicted indirect calls. event=0x89,period=200003,umask=0xa0
br_misp_exec.taken_return_near (pipeline): Taken speculative and retired mispredicted indirect branches with a return mnemonic. event=0x89,period=200003,umask=0x88
br_misp_retired.all_branches (pipeline): All mispredicted macro branch instructions retired. event=0xc5,period=400009
br_misp_retired.all_branches_pebs (pipeline): Mispredicted macro branch instructions retired (Precise Event - PEBS) (Must be precise). event=0xc5,period=400009,umask=4
    Note: This is a precise version of BR_MISP_RETIRED.ALL_BRANCHES that counts all mispredicted macro branch instructions retired.
br_misp_retired.conditional (pipeline): Mispredicted conditional branch instructions retired (Precise event). event=0xc5,period=400009,umask=1
br_misp_retired.near_taken (pipeline): Number of near branch instructions retired that were mispredicted and taken (Precise event). event=0xc5,period=400009,umask=0x20
br_misp_retired.ret (pipeline): Number of mispredicted return instructions retired. Non-PEBS (Precise event). event=0xc5,period=100007,umask=8
cpu_clk_thread_unhalted.one_thread_active (pipeline): Count of XClk pulses when this thread is unhalted and the other thread is halted. event=0x3c,period=100003,umask=2
cpu_clk_thread_unhalted.ref_xclk (pipeline): Reference cycles when the thread is unhalted (counts at 100 MHz rate). event=0x3c,period=100003,umask=1
    Note: This is a fixed-frequency event programmed to general counters; it counts when the core is unhalted, at 100 MHz.
cpu_clk_thread_unhalted.ref_xclk_any (pipeline): Reference cycles when at least one thread on the physical core is unhalted (counts at 100 MHz rate). event=0x3c,any=1,period=100003,umask=1
cpu_clk_unhalted.one_thread_active (pipeline): Count of XClk pulses when this thread is unhalted and the other thread is halted. event=0x3c,period=100003,umask=2
cpu_clk_unhalted.ref_tsc (pipeline): Reference cycles when the core is not in halt state. event=0,period=2000003,umask=3
    Note: Counts the number of reference cycles when the core is not in a halt state; the core enters the halt state when it is running the HLT or MWAIT instruction. The count is not affected by core frequency changes (for example, P-states or TM2 transitions) and increments at the same frequency as the time stamp counter, so it can approximate elapsed time while the core was not halted. It has a constant ratio with the CPU_CLK_UNHALTED.REF_XCLK event and is counted on a dedicated fixed counter, leaving the four (eight when Hyper-Threading is disabled) programmable counters available for other events. On all current platforms this event stops counting during 'throttling (TM)' duty-off periods, when the processor is halted. The event is clocked by the base clock (100 MHz) on Sandy Bridge; because the counter updates at a lower rate than the core clock, the overflow status bit for this counter may appear 'sticky': after the counter overflows and software clears the overflow status bit and resets the counter to less than MAX, the reset value is not clocked in immediately, so the overflow status bit flips high (1) again and generates another PMI (if enabled), after which the reset value gets clocked into the counter. Software will therefore take the interrupt and read overflow status bit 34 as '1' while the counter value is less than MAX; software should ignore this case.
cpu_clk_unhalted.ref_xclk (pipeline): Reference cycles when the thread is unhalted (counts at 100 MHz rate). event=0x3c,period=100003,umask=1
cpu_clk_unhalted.ref_xclk_any (pipeline): Reference cycles when at least one thread on the physical core is unhalted (counts at 100 MHz rate). event=0x3c,any=1,period=100003,umask=1
cpu_clk_unhalted.thread (pipeline): Core cycles when the thread is not in halt state. event=0x3c,period=2000003
    Note: Counts the number of core cycles while the thread is not in a halt state; the thread enters the halt state when it is running the HLT instruction. This event is a component in many key event ratios. The core frequency may change from time to time due to transitions associated with Enhanced Intel SpeedStep Technology or TM2, so this event may have a changing ratio with regard to time; when the core frequency is constant, it can approximate elapsed time while the core was not in the halt state. It is counted on a dedicated fixed counter, leaving the four (eight when Hyper-Threading is disabled) programmable counters available for other events.
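Because CPU_CLK_UNHALTED.REF_TSC increments at the TSC rate while the core is unhalted, a count delta divided by the TSC frequency approximates unhalted wall time. A tiny sketch; the TSC_HZ value here is a hypothetical example, and the real frequency must be obtained from the platform:

# Sketch: convert a CPU_CLK_UNHALTED.REF_TSC delta into seconds of
# unhalted time. TSC_HZ is an assumed example value (2.4 GHz), not
# something this module provides.
TSC_HZ = 2_400_000_000

def unhalted_seconds(ref_tsc_delta: int, tsc_hz: int = TSC_HZ) -> float:
    return ref_tsc_delta / tsc_hz

print(unhalted_seconds(4_800_000_000))  # 2.0 seconds unhalted at 2.4 GHz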
cpu_clk_unhalted.thread_any (pipeline): Core cycles when at least one thread on the physical core is not in halt state. event=0x3c,any=1,period=2000003
cpu_clk_unhalted.thread_p_any (pipeline): Core cycles when at least one thread on the physical core is not in halt state. event=0x3c,any=1,period=2000003
cycle_activity.cycles_l1d_miss (pipeline): Cycles while an L1 cache miss demand load is outstanding. event=0xa3,cmask=8,period=2000003,umask=8
cycle_activity.cycles_l1d_pending (pipeline): Cycles while an L1 cache miss demand load is outstanding. event=0xa3,cmask=8,period=2000003,umask=8
    Note: Counts the number of cycles the CPU has at least one pending demand load request missing the L1 data cache.
cycle_activity.cycles_l2_miss (pipeline): Cycles while an L2 cache miss demand load is outstanding. event=0xa3,cmask=1,period=2000003,umask=1
cycle_activity.cycles_l2_pending (pipeline): Cycles while an L2 cache miss demand load is outstanding. event=0xa3,cmask=1,period=2000003,umask=1
    Note: Counts the number of cycles the CPU has at least one pending demand* load request missing the L2 cache.
cycle_activity.cycles_ldm_pending (pipeline): Cycles while the memory subsystem has an outstanding load. event=0xa3,cmask=2,period=2000003,umask=2
    Note: Counts the number of cycles the CPU has at least one pending demand load request (that is, cycles with a non-completed load waiting for its data from the memory subsystem).
cycle_activity.cycles_mem_any (pipeline): Cycles while the memory subsystem has an outstanding load. event=0xa3,cmask=2,period=2000003,umask=2
cycle_activity.cycles_no_execute (pipeline): Increments by 1 for every cycle in which there was no execute for this thread. event=0xa3,cmask=4,period=2000003,umask=4
    Note: Counts the number of cycles in which nothing is executed on any execution port.
cycle_activity.stalls_l1d_miss (pipeline): Execution stalls while an L1 cache miss demand load is outstanding. event=0xa3,cmask=12,period=2000003,umask=0xc
cycle_activity.stalls_l1d_pending (pipeline): Execution stalls while an L1 cache miss demand load is outstanding. event=0xa3,cmask=12,period=2000003,umask=0xc
    Note: Counts cycles in which nothing is executed on any execution port while at least one pending demand load request is missing the L1 data cache.
cycle_activity.stalls_l2_miss (pipeline): Execution stalls while an L2 cache miss demand load is outstanding. event=0xa3,cmask=5,period=2000003,umask=5
cycle_activity.stalls_l2_pending (pipeline): Execution stalls while an L2 cache miss demand load is outstanding. event=0xa3,cmask=5,period=2000003,umask=5
    Note: Counts cycles in which nothing is executed on any execution port while at least one pending demand* load request is missing the L2 cache (as a footprint). *Includes L1 HW prefetch requests that may or may not be required by demands.
cycle_activity.stalls_ldm_pending (pipeline): Execution stalls while the memory subsystem has an outstanding load. event=0xa3,cmask=6,period=2000003,umask=6
    Note: Counts cycles in which nothing is executed on any execution port while at least one demand load request is pending.
cycle_activity.stalls_mem_any (pipeline): Execution stalls while the memory subsystem has an outstanding load. event=0xa3,cmask=6,period=2000003,umask=6
cycle_activity.stalls_total (pipeline): Total execution stalls. event=0xa3,cmask=4,period=2000003,umask=4
ild_stall.lcp (pipeline): Stalls caused by a length-changing prefix of the instruction. event=0x87,period=2000003,umask=1
    Note: Counts stalls that occurred due to a length-changing prefix (66, 67, or REX.W when they change the length of the decoded instruction). Occurrence counting is proportional to the number of prefixes in a 16B line, and this may result in a three-cycle penalty for each LCP in a 16-byte chunk.
inst_retired.any (pipeline): Instructions retired from execution. event=0xc0,period=2000003
    Note: Counts the number of instructions retired from execution. For instructions that consist of multiple micro-ops, this event counts the retirement of the last micro-op of the instruction. Counting continues during hardware interrupts, traps, and inside interrupt handlers. INST_RETIRED.ANY is counted by a designated fixed counter, leaving the four (eight when Hyper-Threading is disabled) programmable counters available for other events; INST_RETIRED.ANY_P is counted by a programmable counter and is an architectural performance event. Faulting executions of GETSEC/VM entry/VM exit/MWAIT do not count as retired instructions.
inst_retired.any_p (pipeline): Number of instructions retired; general counter, architectural event. Spec update: BDM61. event=0xc0,period=2000003
    Note: Counts the number of instructions (EOMs) retired. Counting covers macro-fused instructions individually (that is, increments by two).
inst_retired.prec_dist (pipeline): Precise instruction-retired event with HW to reduce the effect of the PEBS shadow in the IP distribution. Spec update: BDM11, BDM55 (Must be precise). event=0xc0,period=2000003,umask=1
    Note: This is a precise version (that is, it uses PEBS) of the event that counts instructions retired.
inst_retired.x87 (pipeline): FP operations retired; X87 FP operations that have no exceptions. event=0xc0,period=2000003,umask=2
    Note: Counts FP operations retired. For X87 FP operations that have no exceptions, counting also includes flows that have several X87 uops or that use X87 uops in exception handling.
int_misc.rat_stall_cycles (pipeline): Cycles when a Resource Allocation Table (RAT) external stall is sent to the Instruction Decode Queue (IDQ) for the thread. event=0xd,period=2000003,umask=8
    Note: Counts the number of cycles during which a RAT external stall is sent to the IDQ for the current thread; this also includes the cycles during which the Allocator is serving another thread.
int_misc.recovery_cycles (pipeline): Core cycles the allocator was stalled due to recovery from an earlier clear event for this thread (e.g. misprediction or memory nuke). event=0xd,cmask=1,period=2000003,umask=3
    Note: Cycles that checkpoints in the Resource Allocation Table (RAT) are recovering from a JEClear or machine clear.
int_misc.recovery_cycles_any (pipeline): Core cycles the allocator was stalled due to recovery from an earlier clear event for any thread running on the physical core (e.g. misprediction or memory nuke). event=0xd,any=1,cmask=1,period=2000003,umask=3
ld_blocks.no_sr (pipeline): Number of times that split load operations are temporarily blocked because all resources for handling the split accesses are in use. event=3,period=100003,umask=8
ld_blocks.store_forward (pipeline): Cases when loads get a true Block-on-Store blocking code preventing store forwarding. event=3,period=100003,umask=2
    Note: Counts how many times a load operation got the true Block-on-Store blocking code preventing store forwarding. This includes cases when:
     - a preceding store conflicts with the load (incomplete overlap);
     - store forwarding is impossible due to u-arch limitations;
     - a preceding lock RMW operation is not forwarded;
     - the store has the no-forward bit set (uncacheable/page-split/masked stores);
     - all-blocking stores are used (mostly fences and port I/O);
    and others. The most common case is a load blocked due to its address range overlapping with a preceding smaller uncompleted store. This event does not take into account cases of out-of-SW-control blocking (for example, SbTailHit), unknown physical STA, or blocking of loads on a store due to a non-WB memory type or a lock; those cases are covered by other events. See the table of not-supported store forwards in the Optimization Guide.
ld_blocks_partial.address_alias (pipeline): False dependencies in the MOB due to partial compare. event=7,period=100003,umask=1
    Note: Counts false dependencies in the MOB when the partial comparison upon loose net check failed and the dependency was resolved by the Enhanced Loose net mechanism. This may not result in high performance penalties. Loose net checks can fail when loads and stores are 4k aliased.
load_hit_pre.hw_pf (pipeline): Non-software-prefetch load dispatches that hit a fill buffer (FB) allocated for a hardware prefetch. event=0x4c,period=100003,umask=2
load_hit_pre.sw_pf (pipeline): Non-software-prefetch load dispatches that hit a fill buffer (FB) allocated for a software prefetch. event=0x4c,period=100003,umask=1
    Note: Can also be incremented by some lock instructions, so it should only be used with profiling, so that the locks can be excluded by asm inspection of the nearby instructions.
lsd.cycles_4_uops (pipeline): Cycles when 4 uops are delivered by the LSD but didn't come from the decoder. event=0xa8,cmask=4,period=2000003,umask=1
lsd.cycles_active (pipeline): Cycles when uops are delivered by the LSD but didn't come from the decoder. event=0xa8,cmask=1,period=2000003,umask=1
lsd.uops (pipeline): Number of uops delivered by the LSD. event=0xa8,period=2000003,umask=1
machine_clears.count (pipeline): Number of machine clears (nukes) of any type. event=0xc3,cmask=1,edge=1,period=100003,umask=1
machine_clears.cycles (pipeline): Cycles in which there was a nuke; accounts for both thread-specific (TS) and all-thread (AT) nukes. event=0xc3,period=2000003,umask=1
machine_clears.maskmov (pipeline): Number of executed Intel AVX masked load operations that refer to an illegal address range with the mask bits set to 0. event=0xc3,period=100003,umask=0x20
    Note: Maskmov false fault; counts the number of times ucode passes through the Maskmov flow due to the instruction's mask being 0 while the flow completes without raising a fault.
machine_clears.smc (pipeline): Self-modifying code (SMC) detected. event=0xc3,period=100003,umask=4
    Note: Counts detections of self-modifying code (SMC), which causes a machine clear.
move_elimination.int_eliminated (pipeline): Number of integer move elimination candidate uops that were eliminated. event=0x58,period=1000003,umask=1
move_elimination.int_not_eliminated (pipeline): Number of integer move elimination candidate uops that were not eliminated. event=0x58,period=1000003,umask=4
other_assists.any_wb_assist (pipeline): Number of times any microcode assist is invoked by HW upon uop writeback. event=0xc1,period=100003,umask=0x40
resource_stalls.any (pipeline): Resource-related stall cycles. event=0xa2,period=2000003,umask=1
resource_stalls.rob (pipeline): Cycles stalled due to the re-order buffer being full. event=0xa2,period=2000003,umask=0x10
    Note: Counts ROB-full stall cycles; these are cycles in which the pipeline backend blocked uop delivery from the front end.
resource_stalls.rs (pipeline): Cycles stalled due to no eligible RS entry being available. event=0xa2,period=2000003,umask=4
    Note: Counts stall cycles caused by the absence of eligible entries in the reservation station (RS). This may result from RS overflow, or from RS deallocation because of the RS array write-port allocation scheme (each RS entry has two write ports instead of four, so empty entries can be unusable even though the RS is not really full). This counts cycles in which the pipeline backend blocked uop delivery from the front end.
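The resource_stalls and cpu_clk_unhalted events above combine naturally into a stall-fraction ratio. A sketch using "perf stat -x," CSV output; the CSV column layout can vary across perf versions, so treat this as a template rather than a definitive parser, and the event names assume the Broadwell tables shown here:

# Sketch: estimate the fraction of core cycles stalled on resources as
# resource_stalls.any / cpu_clk_unhalted.thread, parsed from perf's
# machine-readable output. stat_counts is my helper name.
import subprocess

def stat_counts(events, cmd):
    res = subprocess.run(
        ["perf", "stat", "-x", ",", "-e", ",".join(events), "--"] + cmd,
        capture_output=True, text=True)
    counts = {}
    for line in res.stderr.splitlines():
        parts = line.split(",")
        # First field is the counter value; skip "<not counted>" etc.
        if len(parts) >= 3 and parts[0].replace(".", "").isdigit():
            counts[parts[2]] = int(float(parts[0]))
    return counts

c = stat_counts(["resource_stalls.any", "cpu_clk_unhalted.thread"],
                ["sleep", "1"])
if len(c) == 2:
    stalls, cycles = c["resource_stalls.any"], c["cpu_clk_unhalted.thread"]
    print(f"resource-stall fraction: {stalls / cycles:.2%}")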
resource_stalls.sb (pipeline): Cycles stalled due to no store buffers being available (not including draining from sync). event=0xa2,period=2000003,umask=8
    Note: Counts stall cycles caused by store buffer (SB) overflow, excluding draining from synch; these are cycles in which the pipeline backend blocked uop delivery from the front end.
rob_misc_events.lbr_inserts (pipeline): Count of cases of saving a new LBR. event=0xcc,period=2000003,umask=0x20
    Note: Counts cases of saving new LBR records by hardware. This assumes proper enabling of LBRs and takes into account the LBR filtering done by the LBR_SELECT register.
rs_events.empty_cycles (pipeline): Cycles when the Reservation Station (RS) is empty for the thread. event=0x5e,period=2000003,umask=1
    Note: Counts cycles during which the reservation station (RS) is empty for the thread; in ST-mode the inactive thread should drive 0. RS-empty cycles are usually caused by severely costly branch mispredictions or by allocator/FE issues.
rs_events.empty_end (pipeline): Counts ends of periods in which the Reservation Station (RS) was empty; can be useful to precisely locate front-end latency-bound issues. event=0x5e,cmask=1,edge=1,inv=1,period=200003,umask=1
uops_dispatched_port.port_0 through .port_7 (pipeline): Cycles per thread when uops are executed in port N; counts, on a per-thread basis, cycles during which uops are dispatched from the Reservation Station (RS) to that port. All use event=0xa1,period=2000003 with umask 1, 2, 4, 8, 0x10, 0x20, 0x40, 0x80 for ports 0 through 7 respectively.
uops_executed.core (pipeline): Number of uops executed on the core, from any thread. event=0xb1,period=2000003,umask=2
uops_executed.core_cycles_ge_1 / _ge_2 / _ge_3 / _ge_4 (pipeline): Cycles in which at least 1/2/3/4 micro-ops are executed from any thread on the physical core. event=0xb1,period=2000003,umask=2 with cmask equal to the threshold (1, 2, 3, or 4).
uops_executed.core_cycles_none (pipeline): Cycles with no micro-ops executed from any thread on the physical core. event=0xb1,inv=1,period=2000003,umask=2
uops_executed.cycles_ge_1_uop_exec / _ge_2_uops_exec / _ge_3_uops_exec / _ge_4_uops_exec (pipeline): Cycles in which at least 1/2/3/4 uops were executed per thread. event=0xb1,period=2000003,umask=1 with cmask equal to the threshold (1, 2, 3, or 4).
uops_executed.stall_cycles (pipeline): Cycles in which no uops were dispatched to be executed on this thread. event=0xb1,cmask=1,inv=1,period=2000003,umask=1
    Note: Counts cycles during which no uops were dispatched from the Reservation Station (RS) for this thread.
uops_executed.thread (pipeline): Number of uops to be executed per thread each cycle. event=0xb1,period=2000003,umask=1
uops_executed_port.port_0 through .port_7 (pipeline): Cycles per thread when uops are executed in port N; same semantics and encodings as uops_dispatched_port above (event=0xa1,period=2000003, umask 1 through 0x80).
uops_executed_port.port_0_core through .port_7_core (pipeline): Cycles per core when uops are executed in (or dispatched to) port N; same encodings with any=1 added (event=0xa1,any=1,period=2000003, umask 1 through 0x80).
uops_issued.any (pipeline): Uops that the Resource Allocation Table (RAT) issues to the Reservation Station (RS). event=0xe,period=2000003,umask=1
uops_issued.flags_merge (pipeline): Number of flags-merge uops allocated; such uops are considered performance-sensitive (added by GSR u-arch). event=0xe,period=2000003,umask=0x10
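The cmask ladder on uops_executed above can be turned into an execution-width histogram: the number of cycles executing exactly N uops is the ge_N count minus the ge_(N+1) count. A small sketch; the counts below are hypothetical placeholders:

# Sketch: derive per-cycle execution width from the cumulative
# uops_executed.cycles_ge_N counters. Sample numbers are hypothetical.
ge = {1: 9_000_000, 2: 6_500_000, 3: 3_200_000, 4: 1_100_000}
exactly = {n: ge[n] - ge.get(n + 1, 0) for n in ge}
print(exactly)            # cycles executing exactly 1, 2, 3, and >=4 uops
thread_uops = 21_000_000  # uops_executed.thread (hypothetical)
print(f"avg uops per active cycle: {thread_uops / ge[1]:.2f}")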
uops_issued.single_mul (pipeline): Number of multiply packed/scalar single-precision uops allocated. event=0xe,period=2000003,umask=0x40
uops_issued.slow_lea (pipeline): Number of slow LEA uops allocated. A uop is generally considered SlowLea if it has 3 sources (e.g. 2 sources + immediate), regardless of whether it results from an LEA instruction or not. event=0xe,period=2000003,umask=0x20
uops_issued.stall_cycles (pipeline): Cycles when the Resource Allocation Table (RAT) does not issue uops to the Reservation Station (RS) for the thread. event=0xe,cmask=1,inv=1,period=2000003,umask=1
uops_retired.all (pipeline): Actually retired uops (Precise event). event=0xc2,period=2000003,umask=1
    Note: Counts all actually retired uops. Counting increments by two for micro-fused uops and by one for macro-fused and other uops; the maximal increment value for one cycle is eight.
uops_retired.retire_slots (pipeline): Retirement slots used (Precise event). event=0xc2,period=2000003,umask=2
uops_retired.stall_cycles (pipeline): Cycles without actually retired uops. event=0xc2,cmask=1,inv=1,period=2000003,umask=1
uops_retired.total_cycles (pipeline): Cycles with fewer than 16 actually retired uops. event=0xc2,cmask=16,inv=1,period=2000003,umask=1
    Note: Number of cycles using an always-true condition (uops_ret < 16) applied to the non-PEBS uops-retired event.
unc_cbo_cache_lookup.* (uncore cache): L3 lookup requests that access the cache and find the line in the given MESI state(s). All use event=0x34; the umask selects the request type and line state:
    .any_es      umask=0x86   any request, line in E or S state
    .any_i       umask=0x88   any request, line in I state
    .any_m       umask=0x81   any request, line in M state
    .any_mesi    umask=0x8f   any request, line in any MESI state
    .read_es     umask=0x16   read request, line in E or S state
    .read_i      umask=0x18   read request, line in I state
    .read_m      umask=0x11   read request, line in M state
    .read_mesi   umask=0x1f   read request, line in any MESI state
    .write_es    umask=0x26   write request, line in E or S state
    .write_m     umask=0x21   write request, line in M state
    .write_mesi  umask=0x2f   write request, line in any MESI state
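The unc_cbo_cache_lookup umasks above decompose into a request-type field (0x80 any, 0x10 read, 0x20 write) OR'd with MESI state bits (M=0x01, E/S=0x06, I=0x08). A sketch that re-derives the table; the bit assignments are inferred from the values listed here, not taken from any documented API:

# Sketch: recompose the unc_cbo_cache_lookup umask values from the
# request-type and state bit fields inferred above.
REQUEST = {"any": 0x80, "read": 0x10, "write": 0x20}
STATE = {"m": 0x01, "es": 0x06, "i": 0x08, "mesi": 0x0f}

def lookup_umask(request: str, state: str) -> int:
    return REQUEST[request] | STATE[state]

assert lookup_umask("read", "mesi") == 0x1f   # unc_cbo_cache_lookup.read_mesi
assert lookup_umask("any", "es") == 0x86      # unc_cbo_cache_lookup.any_es
assert lookup_umask("write", "m") == 0x21     # unc_cbo_cache_lookup.write_m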
Client uncore cache (CBo) events:

  unc_cbo_cache_lookup.*  [uncore cache]  (event=0x34)
      L3 lookup requests that access the cache and find the line in the given state:
        .any_es      umask=0x86   any request, line in E or S state
        .any_i       umask=0x88   any request, line in I state
        .any_m       umask=0x81   any request, line in M state
        .any_mesi    umask=0x8f   any request, line in any MESI state
        .read_es     umask=0x16   read request, line in E or S state
        .read_i      umask=0x18   read request, line in I state
        .read_m      umask=0x11   read request, line in M state
        .read_mesi   umask=0x1f   read request, line in any MESI state
        .write_es    umask=0x26   write request, line in E or S state
        .write_m     umask=0x21   write request, line in M state
        .write_mesi  umask=0x2f   write request, line in MESI state
  unc_cbo_xsnp_response.*  [uncore cache]  (event=0x22)
      Cross-core snoops initiated by this Cbox:
        .hitm_xcore     umask=0x48   snoop for a processor core memory request that hits a modified line in some processor core
        .hit_xcore      umask=0x44   snoop for a processor core memory request that hits a non-modified line in some processor core
        .miss_eviction  umask=0x81   snoop resulting from an L3 eviction that misses in some processor core
        .miss_xcore     umask=0x41   snoop for a processor core memory request that misses in some processor core
  unc_clock.socket  [uncore cache]  (unit: uncore_cbox_0, event=0xff)
      This 48-bit fixed counter counts UCLK cycles.

Client uncore interconnect (ARB) events:

  unc_arb_coh_trk_requests.all  (event=0x84,umask=0x1)
      Number of entries allocated; accounts for any type, e.g. snoop, core aperture, etc.
  unc_arb_trk_occupancy.all  (event=0x80,umask=0x1)
      Each cycle, counts all valid Core outgoing entries. An entry is valid from allocation until the first of the IDI0 or DRS0 messages is sent out. Accounts for coherent and non-coherent traffic.
  unc_arb_trk_occupancy.cycles_with_any_request  (event=0x80,cmask=1,umask=0x1)
      Cycles with at least one outstanding request waiting for data return from the memory controller. Accounts for coherent and non-coherent requests initiated by IA cores, the Processor Graphics unit, or the LLC.
  unc_arb_trk_occupancy.drd_direct  (event=0x80,umask=0x2)
      Each cycle, counts valid coherent Data Read entries in DirectData mode. An entry is valid from allocation until data is sent to the core (first chunk, IDI0). Applies to IA core requests in the normal case.
  unc_arb_trk_requests.all  (event=0x81,umask=0x1)
      Total number of Core outgoing entries allocated. Accounts for coherent and non-coherent traffic.
  unc_arb_trk_requests.drd_direct  (event=0x81,umask=0x2)
      Number of Core coherent Data Read entries allocated in DirectData mode.
  unc_arb_trk_requests.writes  (event=0x81,umask=0x20)
      Number of writes allocated: any write transaction, including full/partial writes and evictions.
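The paired trk_occupancy/trk_requests events support the usual Little's-law derivation: summed occupancy divided by allocations gives the average lifetime of a tracker entry in uncore cycles. A sketch with placeholder counts standing in for values read from the two events:

    # Average tracker residency via Little's law:
    # mean latency (uncore cycles) = summed occupancy / entries allocated.
    occupancy_sum = 1_200_000_000   # unc_arb_trk_occupancy.all over the interval
    inserts = 15_000_000            # unc_arb_trk_requests.all over the same interval

    avg_latency = occupancy_sum / inserts if inserts else 0.0
    print(f"average request-tracker latency: {avg_latency:.1f} uncore cycles")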
Virtual memory events (DTLB, load side; Spec update: BDM69):

  dtlb_load_misses.miss_causes_a_walk  (event=0x8,umask=0x1,period=100003)
      Load misses in all DTLB levels that cause page walks of any page size (4K/2M/4M/1G).
  dtlb_load_misses.stlb_hit  (event=0x8,umask=0x60,period=2000003)
      Load operations that miss the first DTLB level but hit the second (STLB) and do not cause page walks.
  dtlb_load_misses.stlb_hit_2m  (event=0x8,umask=0x40,period=2000003)
      Load misses that miss the DTLB and hit the STLB (2M pages).
  dtlb_load_misses.stlb_hit_4k  (event=0x8,umask=0x20,period=2000003)
      Load misses that miss the DTLB and hit the STLB (4K pages).
  dtlb_load_misses.walk_completed  (event=0x8,umask=0xe,period=100003)
      Demand-load misses in all TLB levels that cause a completed page walk of any page size.
  dtlb_load_misses.walk_completed_1g  (event=0x8,umask=0x8,period=2000003)
      Load misses in all DTLB levels that cause a completed page walk (1G page size). The page walk can end with or without a fault.
  dtlb_load_misses.walk_completed_2m_4m  (event=0x8,umask=0x4,period=2000003)
      Same, for the 2M and 4M page sizes.
  dtlb_load_misses.walk_completed_4k  (event=0x8,umask=0x2,period=2000003)
      Same, for the 4K page size.
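Because the stlb_hit and walk_completed subevents partition first-level DTLB misses, a simple breakdown is possible. A sketch with hypothetical counter values in place of perf readings:

    # Partition first-level DTLB load misses into STLB hits (no walk)
    # and completed walks by page size, per the subevents above.
    counts = {
        "dtlb_load_misses.stlb_hit": 900_000,           # missed L1 DTLB, hit STLB
        "dtlb_load_misses.walk_completed_4k": 70_000,   # full walk, 4K page
        "dtlb_load_misses.walk_completed_2m_4m": 4_000,
        "dtlb_load_misses.walk_completed_1g": 100,
    }
    total = sum(counts.values())
    for name, value in counts.items():
        print(f"{name}: {value / total:.1%} of first-level misses")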
  dtlb_load_misses.walk_duration  (event=0x8,umask=0x10,period=2000003)
      Cycles while the PMH is busy with page walks.

Virtual memory events (DTLB, store side; event=0x49, period=100003; Spec update: BDM69) -- same subevent layout as the load side:

  .miss_causes_a_walk   umask=0x1    store misses in all DTLB levels that cause page walks of any page size (4K/2M/4M/1G)
  .stlb_hit             umask=0x60   store operations that miss the first TLB level but hit the second and do not cause page walks
  .stlb_hit_2m          umask=0x40   store misses that miss the DTLB and hit the STLB (2M)
  .stlb_hit_4k          umask=0x20   store misses that miss the DTLB and hit the STLB (4K)
  .walk_completed       umask=0xe    store misses in all DTLB levels that cause completed page walks
  .walk_completed_1g    umask=0x8    completed walks, 1G pages; the walk can end with or without a fault
  .walk_completed_2m_4m umask=0x4    completed walks, 2M/4M pages; with or without a fault
  .walk_completed_4k    umask=0x2    completed walks, 4K pages; with or without a fault
  .walk_duration        umask=0x10   cycles while the PMH is busy with page walks

Other virtual memory events:

  ept.walk_cycles  (event=0x4f,umask=0x10,period=2000003)
      Cycles spent on Extended Page Table walks. The extended page directory cache differs from standard TLB caches in which operating system uses it: virtual-machine operating systems use the extended page directory cache, while guest operating systems use the standard TLB caches.
  itlb.itlb_flush  (event=0xae,umask=0x1,period=100007)
      Flushes of the big or small ITLB pages (4K/2M/4M). Counting includes both TLB Flush (covering all sets) and TLB Set Clear (set-specific).

Virtual memory events (ITLB; event=0x85, period=100003; Spec update: BDM69):

  .miss_causes_a_walk   umask=0x1    misses at all ITLB levels that cause page walks of any page size (4K/2M/4M/1G)
  .stlb_hit             umask=0x60   operations that miss the first ITLB level but hit the second and do not cause any page walks
  .stlb_hit_2m          umask=0x40   code misses that miss the ITLB and hit the STLB (2M)
  .stlb_hit_4k          umask=0x20   code misses that miss the ITLB and hit the STLB (4K)
  .walk_completed       umask=0xe    misses in all ITLB levels that cause completed page walks
  .walk_completed_1g    umask=0x8    completed walks, 1G pages; the walk can end with or without a fault
  .walk_completed_2m_4m umask=0x4    completed walks, 2M/4M pages; with or without a fault
  .walk_completed_4k    umask=0x2    completed walks, 4K pages; with or without a fault
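Dividing the PMH-busy cycles by the number of completed walks gives an average walk latency, and the page_walker_loads events below show where the walker's references were serviced. A sketch under the assumption that the counters were read elsewhere (all counts are placeholders):

    # Average page-walk latency: cycles the PMH was busy divided by the
    # number of completed walks.
    walk_duration = 5_000_000   # dtlb_load_misses.walk_duration
    walk_completed = 74_100     # dtlb_load_misses.walk_completed
    print(f"average walk latency: {walk_duration / walk_completed:.1f} cycles")

    # Where the DTLB page walker's loads were serviced (page_walker_loads.*):
    walker_hits = {"L1+FB": 60_000, "L2": 30_000, "L3+XSNP": 8_000, "memory": 2_000}
    total_hits = sum(walker_hits.values())
    for level, hits in walker_hits.items():
        print(f"walker hits in {level}: {hits / total_hits:.1%}")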
  itlb_misses.walk_duration  (event=0x85,umask=0x10,period=100003)
      Cycles while the PMH is busy with page walks.
  page_walker_loads.*  (event=0xbc,period=2000003; Spec updates: BDM69, BDM98)
      Where page-walker loads hit:
        .dtlb_l1      umask=0x11   DTLB walker hits in the L1 + FB
        .dtlb_l2      umask=0x12   DTLB walker hits in the L2
        .dtlb_l3      umask=0x14   DTLB walker hits in the L3 + XSNP
        .dtlb_memory  umask=0x18   DTLB walker hits in memory
        .itlb_l1      umask=0x21   ITLB walker hits in the L1 + FB
        .itlb_l2      umask=0x22   ITLB walker hits in the L2
        .itlb_l3      umask=0x24   ITLB walker hits in the L3 + XSNP
  tlb_flush.dtlb_thread  (event=0xbd,umask=0x1,period=100007)
      DTLB flush attempts of the thread-specific entries.
  tlb_flush.stlb_any  (event=0xbd,umask=0x20,period=100007)
      Any STLB flush attempt (such as entire, VPID, PCID, InvPage, CR3 write, and so on).

Memory events:

  mem_load_uops_l3_miss_retired.*  [cache]  (event=0xd3,period=100007; supports address when precise; Spec update: BDE70; Precise events)
      Retired load uops by data source:
        .remote_dram  umask=0x4    remote DRAM, either snoop not needed or snoop miss (RspI)
        .remote_fwd   umask=0x20   forwarded from a remote cache
        .remote_hitm  umask=0x10   remote cache HITM
  rtm_retired.aborted  [memory]  (event=0xc9,umask=0x4,period=2000003)  (Precise event)
      Number of times an RTM abort was triggered.

Server uncore cache (Cbox) events:

  unc_c_bounce_control  (event=0xa)
      Bounce Control.
  unc_c_clockticks  (event=0x0)
      Uncore clocks.
  unc_c_counter0_occupancy  (event=0x1f)
      Occupancy counts can only be captured in the Cbo's counter 0; this event lets a user capture occupancy-related information by filtering the counter-0 occupancy count through the control register's threshold, invert, and edge-detect fields. For example, setting the threshold to 1 effectively monitors how many cycles the monitored queue holds at least one entry.
  unc_c_fast_asserted  (event=0x9)
      Cycles in which either the local distress or an incoming distress signal is asserted. Incoming distress includes both up and dn.
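The counter-0 occupancy trick is usually driven through the threshold field of the event string. A hypothetical builder, assuming the kernel exposes the usual event/umask/thresh format attributes for the Cbox PMUs under /sys/bus/event_source/devices/uncore_cbox_*/format (verify on the target machine):

    # Illustrative event-string builder for the Cbox PMU.
    def cbox_event(event: int, umask: int = 0, thresh: int = 0) -> str:
        fields = [f"event={event:#x}"]
        if umask:
            fields.append(f"umask={umask:#x}")
        if thresh:
            fields.append(f"thresh={thresh}")
        return "uncore_cbox_0/" + ",".join(fields) + "/"

    # Cycles in which the monitored queue held at least one entry:
    # counter-0 occupancy compared against a threshold of 1.
    print(cbox_event(0x1f, thresh=1))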
  unc_c_llc_lookup.*  (event=0x34)
      Counts the number of times the LLC was accessed, including code, data, prefetches, and hints coming from L2. Numerous filters are available; note the non-standard filtering equation. A request that looks up the cache multiple times counts multiple increments. Umask bit 0 must ALWAYS be set together with a state selection, or the event counts nothing; CBoGlCtrl[22:18] bits correspond to the [FMESI] states.
        .any           umask=0x11   any transaction originating from the IPQ or IRQ (does not include lookups originating from the ISMQ)
        .data_read     umask=0x3    data read transactions
        .nid           umask=0x41   qualifies one of the other subevents by the target NID, programmed in Cn_MSR_PMON_BOX_FILTER.nid; in conjunction with STATE = I it is possible to monitor misses to specific NIDs in the system
        .read          umask=0x21   any read transaction
        .remote_snoop  umask=0x9    only snoop requests coming from the remote socket(s) through the IPQ
        .write         umask=0x5    writeback transactions from L2 to the LLC; includes all write transactions, both cacheable and UC
  unc_c_llc_victims.*  (event=0x37)
      Counts lines victimized on a fill, filterable by the state the line was in:
        .m_state  umask=0x1    lines in M state
        .e_state  umask=0x2    lines in E state
        .i_state  umask=0x4    lines in S state
        .f_state  umask=0x8
        .miss     umask=0x10
        .nid      umask=0x40   qualifies one of the other subevents by the target NID (Cn_MSR_PMON_BOX_FILTER.nid); with STATE = I it is possible to monitor misses to specific NIDs in the system
  unc_c_misc.*  (event=0x39) -- miscellaneous Cbo events:
        .rspi_was_fse           umask=0x1    silent snoop eviction: a snoop hit in the FSE states and triggered a silent eviction; useful because this information is lost in the PRE encodings
        .wc_aliasing            umask=0x2    write-combining aliasing: a USWC write (WCIL(F)) hit the LLC in M state, triggering a WBMtoI followed by the USWC write; occurs when there is WC aliasing
        .started                umask=0x4
        .rfo_hit_s              umask=0x8    an RFO hit in S state; useful for determining whether a workload might do better with RspIWB instead of RspSWB
        .cvzero_prefetch_victim umask=0x10   clean victim with raw CV=0
        .cvzero_prefetch_miss   umask=0x20   DRd hitting non-M with raw CV=0
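The per-state victim subevents make the writeback pressure of an eviction stream easy to summarize. A sketch with placeholder counts in place of perf readings:

    # Mix of coherence states among victimized LLC lines, from the
    # unc_c_llc_victims.* subevents above.
    victims = {"M": 120_000, "E": 300_000, "S": 450_000, "F": 30_000}
    total = sum(victims.values())
    for state, count in victims.items():
        print(f"victims in {state} state: {count / total:.1%}")
    # A high M share means evictions frequently require writebacks to memory.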
  unc_c_qlru.*  (event=0x3c) -- LRU queue:
        .age0            umask=0x1    how often the age was set to 0
        .age1            umask=0x2    how often the age was set to 1
        .age2            umask=0x4    how often the age was set to 2
        .age3            umask=0x8    how often the age was set to 3
        .lru_decrement   umask=0x10   how often all LRU bits were decremented by 1
        .victim_non_zero umask=0x20   how often the chosen victim had a non-zero age

  Ring-in-use events (AD ring: event=0x1b; AK ring: event=0x1c; BL ring: event=0x1d). Each counts the cycles the given ring is in use at this ring stop: this includes packets passing by and packets being sunk, but not packets being sent from the ring stop. BDX really has two rings, a clockwise ring and a counter-clockwise ring. On the left side of the ring the UP direction travels the clockwise ring and DN the counter-clockwise ring; on the right side this is reversed. The first half of the CBos sit on the left side of the ring and the second half on the right, so (for example) in a 4c part Cbo 0 UP AD is NOT the same ring as Cbo 2 UP AD, because they are on opposite sides of the ring. Shared umask layout for all three rings:
        .up_even    umask=0x1   Up direction, Even ring polarity
        .up_odd     umask=0x2   Up direction, Odd ring polarity
        .cw         umask=0x3   Up (clockwise), any polarity
        .down_even  umask=0x4   Down direction, Even ring polarity
        .down_odd   umask=0x8   Down direction, Odd ring polarity
        .ccw        umask=0xc   Down (counter-clockwise), any polarity
        .all        umask=0xf   all directions and polarities
  unc_c_ring_bounces.*  (event=0x5) -- LLC responses that bounced on the ring:
        .ad umask=0x1; .ak umask=0x2; .bl umask=0x4; .iv umask=0x10 (snoops of the processor's caches)
  unc_c_ring_iv_used.*  (event=0x1e) -- IV ring in use
      Counts the cycles the IV ring is in use at this ring stop, under the same passing-by/sunk rule as above. There is only one IV ring in BDX, so to monitor the Even ring select both UP_EVEN and DN_EVEN, and to monitor the Odd ring select both UP_ODD and DN_ODD.
        .up   umask=0x3    Up, any polarity
        .dn   umask=0xc    Down, any polarity
        .any  umask=0xf    any direction and polarity
        .down umask=0xcc   filters for Down polarity
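Per the note above, a physical ring's usage is recovered by summing the UP and DN subevents of one polarity; dividing by uncore clocks gives utilization. A sketch using the AD-ring polarity subevents, with placeholder counts:

    # Ring-utilization sketch from the AD-ring polarity subevents above.
    # Utilization = cycles the ring was in use at this stop / uncore cycles.
    ad = {"up_even": 10_000, "up_odd": 9_000, "down_even": 12_000, "down_odd": 11_000}
    uclk = 100_000   # unc_c_clockticks over the same interval

    even_util = (ad["up_even"] + ad["down_even"]) / uclk
    odd_util = (ad["up_odd"] + ad["down_odd"]) / uclk
    print(f"even-polarity AD utilization: {even_util:.1%}")
    print(f"odd-polarity AD utilization:  {odd_util:.1%}")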
  unc_c_ring_sink_starved.*  (event=0x6): .ad umask=0x1; .ak umask=0x2; .bl umask=0x4; .iv umask=0x8
  unc_c_ring_src_thrtl  (event=0x7)
      Cycles the Cbo is actively throttling traffic onto the ring in order to limit bounce traffic.
  unc_c_rxr_ext_starved.*  (event=0x12) -- ingress arbiter blocking cycles
      Counts cycles in external starvation, i.e. one of the ingress queues is being starved by the other queues:
        .irq       umask=0x1   the IRQ is externally starved and is therefore blocking the IPQ
        .ipq       umask=0x2   the IPQ is externally starved and is therefore blocking the IRQ
        .prq       umask=0x4
        .ismq_bids umask=0x8   number of times the ISMQ bid
  unc_c_rxr_inserts.*  (event=0x13) -- allocations per cycle into the specified ingress queue:
        .irq umask=0x1; .irq_rej umask=0x2; .ipq umask=0x4; .prq umask=0x10; .prq_rej umask=0x20
  unc_c_rxr_int_starved.*  (event=0x14) -- ingress internal starvation cycles
      Counts cycles in which one or more entries in a queue are being starved out by other entries in that queue:
        .irq umask=0x1; .ipq umask=0x4; .ismq umask=0x8; .prq umask=0x10
  unc_c_rxr_ipq_retry.*  (event=0x31) -- probe queue retries
      Number of times a snoop (probe) request had to retry; filters cover some of the common retry cases:
        .any           umask=0x1    any TOR reject; IPQ TOR rejects can be caused by the Egress being full or by address conflicts
        .full          umask=0x2    TOR reject because the Egress was full; IPQ requests use the AD egress for regular responses, the BL egress to forward data, and the AK egress to return credits
        .addr_conflict umask=0x4    TOR reject from an address conflict; these should be rare out of the IPQ and generally occur only when two different sockets send requests to the same address at the same time -- a true conflict, unlike the IRQ address conflict, which is commonly caused by prefetching behavior
        .qpi_credits   umask=0x10   no QPI credits
  unc_c_rxr_ipq_retry2.*  (event=0x28):
        .ad_sbo umask=0x1   IPQ request retried because it lacked credits to send an AD packet to the Sbo
        .target umask=0x40  IPQ retries filtered by the target NodeID specified in the Cbox's filter register
  unc_c_rxr_irq_retry.*  (event=0x32) -- ingress request queue rejects
        .any           umask=0x1    IRQ retries; requests are retried when rejected from the TOR pipeline, most commonly because the Egress is full, there are no RTIDs, or a physical address matches another outstanding request
        .full          umask=0x2    failed to acquire an Egress entry (the buffer that queues up for allocating onto the ring); IRQ requests can use all four rings and all four Egresses, and retry if any queue they need is full
        .addr_conflict umask=0x4    address match in the TOR; to maintain coherency, requests to the same address may not pass each other, so one cannot issue a request to an address with an outstanding request until the latter completes. This comes up most commonly with prefetches: an outstanding prefetch that does not complete its memory fetch leaves a demand request to the same address sitting in the IRQ, retrying until the prefetch fills the LLC, so this case is not uncommon in high-bandwidth streaming workloads with the core's LLC prefetcher enabled
        .rtid          umask=0x8    no RTIDs available; RTIDs are required after an LLC miss in order to send snoops and/or requests to memory, so requests queue in the IRQ and retry until one frees. Note there are multiple RTID pools for the different sockets: the local RTIDs may all be in use while requests destined for remote memory can still acquire a remote RTID; this event does not filter for that case
        .qpi_credits   umask=0x10   lack of QPI Ingress credits, which are required to send transactions to the QPI agent; see the QPI_IGR_CREDITS events for more information
        .iio_credits   umask=0x20   attempts to acquire the NCS/NCB credit for sending messages on BL to the IIO; there is a single credit per CBo shared between the NCS and NCB message classes for sending transactions (such as read data) on the BL ring to the IIO
        .nid           umask=0x40   qualifies one of the other subevents by a given RTID destination NID, programmed in Cn_MSR_PMON_BOX_FILTER1.nid
  unc_c_rxr_irq_retry2.*  (event=0x29):
        .ad_sbo umask=0x1   retried because it lacked credits to send an AD packet to the Sbo
        .bl_sbo umask=0x2   retried because it lacked credits to send a BL packet to the Sbo
        .target umask=0x40  retries filtered by the target NodeID specified in the Cbox's filter register
  unc_c_rxr_ismq_retry.*  (event=0x33) -- ISMQ retries
      Number of times a transaction flowing through the ISMQ had to retry. Transactions pass through the ISMQ as responses for requests that already exist in the Cbo, for example when data is returned or when snoop responses come back from the cores:
        .any         umask=0x1    any TOR reject. ISMQ requests generally do not need to retry (ISMQ retries are less common than IRQ retries); they retry when they cannot acquire a needed Egress credit to get onto the ring, or for cache evictions that must acquire an RTID. Most ISMQ requests already have an RTID, so eviction retries are less common here
        .full        umask=0x2    TOR reject caused by a lack of Egress credits; if any of the Egress queues a request needs is full, the request is retried
        .rtid        umask=0x8    TOR reject caused by no RTIDs; M-state cache evictions are serviced through the ISMQ and must acquire an RTID in order to write back to memory, retrying if none is available
        .qpi_credits umask=0x10   no QPI credits
        .iio_credits umask=0x20   the single NCS/NCB credit per CBo, shared between the two message classes for sending transactions (such as read data) on the BL ring to the IIO
        .nid         umask=0x40   qualifies one of the other subevents by a given RTID destination NID (Cn_MSR_PMON_BOX_FILTER1.nid)
        .wb_credits  umask=0x80   qualifies one of the other subevents by a given RTID destination NID (Cn_MSR_PMON_BOX_FILTER1.nid)
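Comparing the per-reason reject subevents against the .any count shows what is actually throttling the ingress pipeline. A sketch with placeholder counts in place of perf readings:

    # Share of each IRQ reject reason, from the unc_c_rxr_irq_retry.*
    # subevents above.
    rejects = {
        "full (no Egress credit)": 40_000,
        "rtid (no RTIDs)": 25_000,
        "addr_conflict": 15_000,
        "qpi_credits": 5_000,
        "iio_credits": 1_000,
    }
    any_reject = 86_000   # unc_c_rxr_irq_retry.any over the same interval
    for reason, count in rejects.items():
        print(f"{reason}: {count / any_reject:.1%} of rejects")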
  unc_c_rxr_ismq_retry2.*  (event=0x2a) -- ISMQ request queue rejects:
        .ad_sbo umask=0x1   ISMQ request retried because it lacked credits to send an AD packet to the Sbo
        .bl_sbo umask=0x2   ISMQ request retried because it lacked credits to send a BL packet to the Sbo
        .target umask=0x40  ISMQ retries filtered by the target NodeID specified in the Cbox's filter register
  unc_c_rxr_occupancy.*  (event=0x11) -- entries in the specified ingress queue in each cycle:
        .irq umask=0x1; .irq_rej umask=0x2; .ipq umask=0x4; .prq_rej umask=0x20
  unc_c_sbo_credits_acquired.*  (event=0x3d) -- Sbo credits acquired in a given cycle, per ring; each Cbo is assigned an Sbo it can communicate with:
        .ad umask=0x1; .bl umask=0x2
  unc_c_sbo_credit_occupancy.*  (event=0x3e) -- Sbo credits in use in a given cycle, per ring:
        .ad umask=0x1; .bl umask=0x2
  unc_c_tor_inserts.*  (event=0x35) -- TOR inserts
      Counts entries successfully inserted into the TOR that match the qualifications specified by the subevent. There are a number of subevent 'filters', but only a subset of the combinations are valid; subevents that require an opcode or NID match need the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD local misses, one should select MISS_OPC_MATCH and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182).
        .opcode             umask=0x1    opcode-matched transactions
        .miss_opcode        umask=0x3    miss transactions that match an opcode
        .eviction           umask=0x4    evictions; these can be quick (line in the F, S, or E state with no core-valid bits set) or longer if CV bits are set (the cores must be snooped) and/or there is a HitM (the request must be written out to memory)
        .all                umask=0x8    all transactions inserted into the TOR, including requests that reside in the TOR only briefly, such as LLC hits that need no core snoops, and rejected requests retried through one of the ingress queues. The TOR is more commonly a bottleneck in SKUs with smaller core counts, where the ratio of RTIDs to TOR entries is larger. Note that there are reserved TOR entries for various request types, so a given request type can be blocked at an occupancy below 20; also note that, generally, requests cannot arbitrate into the TOR pipeline when no TOR slots are available
        .local              umask=0x28   transactions satisfied by locally HOMed memory
        .local_opcode       umask=0x21   opcode-matched transactions satisfied by locally HOMed memory
        .miss_local         umask=0x2a   miss transactions satisfied by locally HOMed memory
        .miss_local_opcode  umask=0x23   opcode-matched miss transactions satisfied by locally HOMed memory
        .miss_remote        umask=0x8a   miss transactions satisfied by remote caches or remote memory
        .miss_remote_opcode umask=0x83   opcode-matched miss transactions satisfied by remote caches or remote memory
        .nid_all            umask=0x48   NID-matched (RTID destination) transactions; the NID is programmed in Cn_MSR_PMON_BOX_FILTER.nid, and in conjunction with STATE = I it is possible to monitor misses to specific NIDs in the system
        .nid_eviction       umask=0x44   NID-matched eviction transactions
        .nid_miss_all       umask=0x4a   NID-matched miss requests
        .nid_miss_opcode    umask=0x43   miss transactions matching both a NID and an opcode
        .nid_opcode         umask=0x41   transactions matching both a NID and an opcode
        .nid_wb             umask=0x50   NID-matched writeback transactions
If, for example, one wanted to count DRD Local Misses, one should select MISS_OPC_MATCH and set Cn_MSR_PMON_BOX_FILTER.opc  to DRD (0x182).; Transactions inserted into the TOR that match an opcode (matched by Cn_MSR_PMON_BOX_FILTER.opc)unc_c_tor_inserts.remoteuncore cacheTOR Inserts; Remote Memoryevent=0x35,umask=0x8801Counts the number of entries successfully inserted into the TOR that match  qualifications specified by the subevent.  There are a number of subevent 'filters' but only a subset of the subevent combinations are valid.  Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set.  If, for example, one wanted to count DRD Local Misses, one should select MISS_OPC_MATCH and set Cn_MSR_PMON_BOX_FILTER.opc  to DRD (0x182).; All transactions inserted into the TOR that are satisfied by remote caches or remote memoryunc_c_tor_inserts.remote_opcodeuncore cacheTOR Inserts; Remote Memory - Opcode Matchedevent=0x35,umask=0x8101Counts the number of entries successfully inserted into the TOR that match  qualifications specified by the subevent.  There are a number of subevent 'filters' but only a subset of the subevent combinations are valid.  Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set.  If, for example, one wanted to count DRD Local Misses, one should select MISS_OPC_MATCH and set Cn_MSR_PMON_BOX_FILTER.opc  to DRD (0x182).; All transactions, satisfied by an opcode,  inserted into the TOR that are satisfied by remote caches or remote memoryunc_c_tor_inserts.wbuncore cacheTOR Inserts; Writebacksevent=0x35,umask=0x1001Counts the number of entries successfully inserted into the TOR that match  qualifications specified by the subevent.  There are a number of subevent 'filters' but only a subset of the subevent combinations are valid.  Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set.  If, for example, one wanted to count DRD Local Misses, one should select MISS_OPC_MATCH and set Cn_MSR_PMON_BOX_FILTER.opc  to DRD (0x182).; Write transactions inserted into the TOR.   This does not include RFO, but actual operations that contain data being sent from the coreunc_c_tor_occupancy.alluncore cacheTOR Occupancy; Anyevent=0x36,umask=801For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent.   There are a number of subevent 'filters' but only a subset of the subevent combinations are valid.  Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set.  If, for example, one wanted to count DRD Local Misses, one should select MISS_OPC_MATCH and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182); All valid TOR entries.  This includes requests that reside in the TOR for a short time, such as LLC Hits that do not need to snoop cores or requests that get rejected and have to be retried through one of the ingress queues.  The TOR is more commonly a bottleneck in skews with smaller core counts, where the ratio of RTIDs to TOR entries is larger.  Note that there are reserved TOR entries for various request types, so it is possible that a given request type be blocked with an occupancy that is less than 20.  
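The filtered subevents above only count once the Cbox filter register is programmed. A minimal Python sketch of the DRD-local-miss example from the description, driven through the perf CLI; the PMU name "uncore_cbox_0" and the "filter_opc" format attribute are assumptions about the running kernel's uncore driver (check /sys/bus/event_source/devices/ on the target system):

  # Count DRD (opcode 0x182) misses via TOR_INSERTS.MISS_OPC_MATCH (umask=0x3),
  # system-wide for one second. Assumes perf and the hswep/bdx uncore driver.
  import subprocess

  cmd = [
      "perf", "stat", "-a",
      "-e", "uncore_cbox_0/event=0x35,umask=0x3,filter_opc=0x182/",
      "sleep", "1",
  ]
  subprocess.run(cmd, check=True)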
unc_c_tor_occupancy (uncore cache), event=0x36: TOR Occupancy
  For each cycle, this event accumulates the number of valid entries in the TOR that match the qualifications specified by the subevent. There are a number of subevent 'filters', but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD local misses, one would select MISS_OPC_MATCH and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182).
  .all, umask=0x8: Any. All valid TOR entries. This includes requests that reside in the TOR for a short time, such as LLC hits that do not need to snoop cores, or requests that get rejected and have to be retried through one of the ingress queues. The TOR is more commonly a bottleneck in SKUs with smaller core counts, where the ratio of RTIDs to TOR entries is larger. Note that there are reserved TOR entries for various request types, so it is possible for a given request type to be blocked with an occupancy of less than 20. Also note that requests generally cannot arbitrate into the TOR pipeline if there are no available TOR slots.
  .eviction, umask=0x4: Evictions. Number of outstanding eviction transactions in the TOR. Evictions can be quick, such as when the line is in the F, S, or E state and no core valid bits are set. They can also take longer if CV bits are set (so the cores need to be snooped) and/or there is a HitM (in which case the request must be written out to memory).
  .local, umask=0x28: Local Memory.
  .local_opcode, umask=0x21: Local Memory - Opcode Matched. Number of outstanding transactions, matched by an opcode, in the TOR that are satisfied by locally HOMed memory.
  .miss_all, umask=0xa: Miss All. Number of outstanding miss requests in the TOR. 'Miss' means the allocation requires an RTID. This generally means that the request was sent to memory or MMIO.
  .miss_local, umask=0x2a: Misses to Local Memory.
  .miss_local_opcode, umask=0x23: Misses to Local Memory - Opcode Matched. Number of outstanding miss transactions, matched by an opcode, in the TOR that are satisfied by locally HOMed memory.
  .miss_opcode, umask=0x3: Miss Opcode Match. TOR entries for miss transactions that match an opcode. This generally means that the request was sent to memory or MMIO.
  .miss_remote, umask=0x8a: Misses to Remote Memory.
  .miss_remote_opcode, umask=0x83: Misses to Remote Memory - Opcode Matched. Number of outstanding miss transactions, matched by an opcode, in the TOR that are satisfied by remote caches or remote memory.
  .nid_all, umask=0x48: NID Matched. Number of NID-matched outstanding requests in the TOR. The NID is programmed in Cn_MSR_PMON_BOX_FILTER.nid. In conjunction with STATE = I, it is possible to monitor misses to specific NIDs in the system.
  .nid_eviction, umask=0x44: NID Matched Evictions. Number of outstanding NID-matched eviction transactions in the TOR.
  .nid_miss_all, umask=0x4a: NID Matched Miss All. Number of outstanding miss requests in the TOR that match a NID.
  .nid_miss_opcode, umask=0x43: NID and Opcode Matched Miss. Number of outstanding miss requests in the TOR that match a NID and an opcode.
  .nid_opcode, umask=0x41: NID and Opcode Matched. TOR entries that match a NID and an opcode.
  .nid_wb, umask=0x50: NID Matched Writebacks. NID-matched write transactions in the TOR.
  .opcode, umask=0x1: Opcode Match. TOR entries that match an opcode (matched by Cn_MSR_PMON_BOX_FILTER.opc).
  .remote, umask=0x88: Remote Memory.
  .remote_opcode, umask=0x81: Remote Memory - Opcode Matched. Number of outstanding transactions, matched by an opcode, in the TOR that are satisfied by remote caches or remote memory.
  .wb, umask=0x10: Writebacks. Write transactions in the TOR. This does not include RFOs, but actual operations that contain data being sent from the core.
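Because TOR_OCCUPANCY accumulates valid entries per cycle while TOR_INSERTS counts allocations, their ratio gives the average lifetime of a TOR entry (Little's law). A minimal Python sketch; the counter values are assumed to come from the same perf sampling interval:

  # Average TOR residency in uncore cycles = accumulated occupancy / inserts.
  def avg_tor_latency_cycles(tor_occupancy: int, tor_inserts: int) -> float:
      return tor_occupancy / tor_inserts if tor_inserts else 0.0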
unc_c_txr_ads_used (uncore cache), event=0x4
  .ad, umask=0x1: Onto AD Ring
  .ak, umask=0x2: Onto AK Ring
  .bl, umask=0x4: Onto BL Ring

unc_c_txr_inserts (uncore cache), event=0x2: Egress Allocations
  Number of allocations into the Cbo Egress. The Egress is used to queue up requests destined for the ring.
  .ad_cache, umask=0x1: AD - Cachebo. Ring transactions from the Cachebo destined for the AD ring. Some examples include outbound requests, snoop requests, and snoop responses.
  .ad_core, umask=0x10: AD - Corebo. Ring transactions from the Corebo destined for the AD ring. This is commonly used for outbound requests.
  .ak_cache, umask=0x2: AK - Cachebo. Ring transactions from the Cachebo destined for the AK ring. This is commonly used for credit returns and GO responses.
  .ak_core, umask=0x20: AK - Corebo. Ring transactions from the Corebo destined for the AK ring. This is commonly used for snoop responses coming from the core and destined for a Cachebo.
  .bl_cache, umask=0x4: BL - Cachebo. Ring transactions from the Cachebo destined for the BL ring. This is commonly used to send data from the cache to various destinations.
  .bl_core, umask=0x40: BL - Corebo. Ring transactions from the Corebo destined for the BL ring. This is commonly used for transferring writeback data to the cache.
  .iv_cache, umask=0x8: IV - Cachebo. Ring transactions from the Cachebo destined for the IV ring. This is commonly used for snoops to the cores.

unc_c_txr_starved (uncore cache), event=0x3: Injection Starvation
  Counts injection starvation. This starvation is triggered when the Egress cannot send a transaction onto the ring for a long period of time.
  .ad_core, umask=0x10: Onto AD Ring (to core). Cycles that the core AD egress spent in starvation.
  .ak_both, umask=0x2: Onto AK Ring. Cycles that both AK egresses spent in starvation.
  .bl_both, umask=0x4: Onto BL Ring. Cycles that both BL egresses spent in starvation.
  .iv, umask=0x8: Onto IV Ring. Cycles that the cachebo IV egress spent in starvation.
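Since TXR_STARVED counts cycles the Egress could not inject, dividing it by a cycle count sampled over the same interval yields a starvation fraction. A minimal Python sketch; using a Cbox clock count as the denominator is an assumption (the Cbox clockticks event is not part of this excerpt):

  # Fraction of sampled cycles an egress queue spent starved.
  def starvation_fraction(starved_cycles: int, total_cycles: int) -> float:
      return starved_cycles / total_cycles if total_cycles else 0.0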
uncore_ha

unc_h_bt_cycles_ne (uncore cache), event=0x42: BT Cycles Not Empty
  Cycles the Backup Tracker (BT) is not empty. The BT is the actual HOM tracker in IVT.

unc_h_bt_to_ht_not_issued (uncore cache), event=0x51: BT to HT Not Issued
  Counts the number of cycles when the HA does not issue transactions from BT to HT.
  .incoming_bl_hazard, umask=0x4: Incoming Data Hazard. Cycles unable to issue from BT due to an incoming BL data hazard.
  .incoming_snp_hazard, umask=0x2: Incoming Snoop Hazard. Cycles unable to issue from BT due to an incoming snoop hazard.
  .rspackcflt_hazard, umask=0x8: Incoming Data Hazard. Cycles unable to issue from BT due to an incoming BL data hazard.
  .wbmdata_hazard, umask=0x10: Incoming Data Hazard. Cycles unable to issue from BT due to an incoming BL data hazard.

unc_h_bypass_imc (uncore cache), event=0x14: HA to iMC Bypass
  Counts the number of times that an HA bypass to the iMC was attempted. This is a latency optimization for situations with light loading on the memory subsystem. This can be filtered by whether the bypass was taken or not.
  .not_taken, umask=0x2: Not Taken. Filter for transactions that could not take the bypass.
  .taken, umask=0x1: Taken. Filter for transactions that succeeded in taking the bypass.

unc_h_clockticks (uncore cache), event=0x0: uclks
  Counts the number of uclks in the HA. This will be slightly different than the count in the Ubox because of enable/freeze delays. The HA is on the other side of the die from the fixed Ubox uclk counter, so the drift could be somewhat larger than in units that are closer, like the QPI agent.

unc_h_direct2core_count (uncore cache), event=0x11: Direct2Core Messages Sent
  Number of Direct2Core messages sent.

unc_h_direct2core_cycles_disabled (uncore cache), event=0x12: Cycles when Direct2Core was Disabled
  Number of cycles in which Direct2Core was disabled.

unc_h_direct2core_txn_override (uncore cache), event=0x13: Number of Reads that had Direct2Core Overridden
  Number of reads where Direct2Core was overridden.

unc_h_directory_lat_opt (uncore cache), event=0x41: Directory Lat Opt Return
  Directory Latency Optimization Data Return Path Taken. When directory mode is enabled and the directory returned for a read is Dir=I, data can be returned using a faster path if certain conditions are met (credits, free pipeline, etc.).

unc_h_directory_lookup (uncore cache), event=0xc: Directory Lookups
  Counts the number of transactions that looked up the directory. Can be filtered by requests that had to snoop and those that did not.
  .no_snp, umask=0x2: Snoop Not Needed. Filters for transactions that did not have to send any snoops because the directory bit was clear.
  .snp, umask=0x1: Snoop Needed. Filters for transactions that had to send one or more snoops because the directory bit was set.
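Two derived ratios follow directly from the descriptions above: the fraction of directory lookups that needed a snoop, and the iMC bypass success rate. A minimal Python sketch; the function names are hypothetical and inputs are raw counts from one sampling interval:

  # Share of directory lookups that required one or more snoops.
  def snoop_ratio(snp: int, no_snp: int) -> float:
      total = snp + no_snp
      return snp / total if total else 0.0

  # Share of attempted HA-to-iMC bypasses that were actually taken.
  def bypass_rate(taken: int, not_taken: int) -> float:
      attempts = taken + not_taken
      return taken / attempts if attempts else 0.0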
unc_h_directory_update (uncore cache), event=0xd: Directory Updates
  Counts the number of directory updates that were required. These result in writes to the memory controller. This can be filtered by directory sets and directory clears.
  .any, umask=0x3: Any Directory Update.
  .clear, umask=0x2: Directory Clear. Filter for directory clears. This occurs when snoops were sent and all returned with RspI.
  .set, umask=0x1: Directory Set. Filter for directory sets. This occurs when a remote read transaction requests memory, bringing it to a remote cache.

unc_h_hitme_hit (uncore cache), event=0x71: Counts Number of Hits in HitMe Cache
  .ackcnfltwbi, umask=0x4: op is AckCnfltWbI
  .all, umask=0xff: All Requests
  .allocs, umask=0x70: Allocations
  .evicts, umask=0x42: Evictions
  .hom, umask=0xf: HOM Requests
  .invals, umask=0x26: Invalidations
  .read_or_invitoe, umask=0x1: op is RdCode, RdData, RdDataMigratory, RdInvOwn, RdCur or InvItoE
  .rsp, umask=0x80: op is RspI, RspIWb, RspS, RspSWb, RspCnflt or RspCnfltWbI
  .rspfwdi_local, umask=0x20: op is RspIFwd or RspIFwdWb for a local request
  .rspfwdi_remote, umask=0x10: op is RspIFwd or RspIFwdWb for a remote request
  .rspfwds, umask=0x40: op is RspSFwd or RspSFwdWb
  .wbmtoe_or_s, umask=0x8: op is WbMtoE or WbMtoS
  .wbmtoi, umask=0x2: op is WbMtoI

unc_h_hitme_hit_pv_bits_set (uncore cache), event=0x72: Accumulates Number of PV bits set on HitMe Cache Hits
  .ackcnfltwbi, umask=0x4: op is AckCnfltWbI
  .all, umask=0xff: All Requests
  .hom, umask=0xf: HOM Requests
  .read_or_invitoe, umask=0x1: op is RdCode, RdData, RdDataMigratory, RdInvOwn, RdCur or InvItoE
  .rsp, umask=0x80: op is RspI, RspIWb, RspS, RspSWb, RspCnflt or RspCnfltWbI
  .rspfwdi_local, umask=0x20: op is RspIFwd or RspIFwdWb for a local request
  .rspfwdi_remote, umask=0x10: op is RspIFwd or RspIFwdWb for a remote request
  .rspfwds, umask=0x40: op is RspSFwd or RspSFwdWb
  .wbmtoe_or_s, umask=0x8: op is WbMtoE or WbMtoS
  .wbmtoi, umask=0x2: op is WbMtoI

unc_h_hitme_lookup (uncore cache), event=0x70: Counts Number of times HitMe Cache is accessed
  .ackcnfltwbi, umask=0x4: op is AckCnfltWbI
  .all, umask=0xff: All Requests
  .allocs, umask=0x70: Allocations
  .hom, umask=0xf: HOM Requests
  .invals, umask=0x26: Invalidations
  .read_or_invitoe, umask=0x1: op is RdCode, RdData, RdDataMigratory, RdInvOwn, RdCur or InvItoE
  .rsp, umask=0x80: op is RspI, RspIWb, RspS, RspSWb, RspCnflt or RspCnfltWbI
  .rspfwdi_local, umask=0x20: op is RspIFwd or RspIFwdWb for a local request
  .rspfwdi_remote, umask=0x10: op is RspIFwd or RspIFwdWb for a remote request
  .rspfwds, umask=0x40: op is RspSFwd or RspSFwdWb
  .wbmtoe_or_s, umask=0x8: op is WbMtoE or WbMtoS
  .wbmtoi, umask=0x2: op is WbMtoI
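The three HitMe groups pair naturally: lookups, hits, and an accumulator of PV bits set on those hits. A minimal Python sketch of the derived statistics; names are hypothetical and the counts are assumed to come from the same run:

  # HitMe hit rate and average PV bits per hit. PV_BITS_SET is an accumulator,
  # so dividing by the hit count yields an average per hit.
  def hitme_stats(lookups: int, hits: int, pv_bits: int):
      hit_rate = hits / lookups if lookups else 0.0
      avg_pv_bits = pv_bits / hits if hits else 0.0
      return hit_rate, avg_pv_bits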
unc_h_igr_no_credit_cycles (uncore cache), event=0x22: Cycles without QPI Ingress Credits
  Counts the number of cycles when the HA does not have credits to send messages to the QPI agent. This can be filtered by the different credit pools and the different links.
  .ad_qpi0, umask=0x1: AD to QPI Link 0
  .ad_qpi1, umask=0x2: AD to QPI Link 1
  .ad_qpi2, umask=0x10: AD to QPI Link 2
  .bl_qpi0, umask=0x4: BL to QPI Link 0
  .bl_qpi1, umask=0x8: BL to QPI Link 1
  .bl_qpi2, umask=0x20: BL to QPI Link 2

unc_h_imc_reads (uncore cache), event=0x17: HA to iMC Normal Priority Reads Issued
  .normal, umask=0x1: Normal Priority. Count of the number of reads issued to any of the memory controller channels. This can be filtered by the priority of the reads.

unc_h_imc_retry (uncore cache), event=0x1e: Retry Events

unc_h_imc_writes (uncore cache), event=0x1a: HA to iMC Full Line Writes Issued
  Counts the total number of full line writes issued from the HA into the memory controller. This counts for all four channels. It can be filtered by full/partial and ISOCH/non-ISOCH.
  .all, umask=0xf: All Writes
  .full, umask=0x1: Full Line Non-ISOCH
  .full_isoch, umask=0x4: ISOCH Full Line
  .partial, umask=0x2: Partial Non-ISOCH
  .partial_isoch, umask=0x8: ISOCH Partial
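The iMC read and write counts can be turned into a rough HA-side memory bandwidth figure. A minimal Python sketch; the 64-byte line size is an assumption, and the counts are assumed to cover the same sampling interval:

  # Approximate bytes/second across all four channels from full-line transfers.
  def ha_bandwidth_bytes_per_sec(reads: int, full_writes: int,
                                 seconds: float, line_bytes: int = 64) -> float:
      return (reads + full_writes) * line_bytes / seconds if seconds else 0.0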
unc_h_iot_backpressure (uncore cache), event=0x61: IOT Backpressure
  .hub, umask=0x2
  .sat, umask=0x1

unc_h_iot_cts_east_lo (uncore cache), event=0x64: IOT Common Trigger Sequencer - Lo
  Debug Mask/Match Tie-Ins.
  .cts0, umask=0x1
  .cts1, umask=0x2

unc_h_iot_cts_hi (uncore cache), event=0x65: IOT Common Trigger Sequencer - Hi
  Debug Mask/Match Tie-Ins.
  .cts2, umask=0x1
  .cts3, umask=0x2

unc_h_iot_cts_west_lo (uncore cache), event=0x62: IOT Common Trigger Sequencer - Lo
  Debug Mask/Match Tie-Ins.
  .cts0, umask=0x1
  .cts1, umask=0x2

unc_h_osb (uncore cache), event=0x53: OSB Snoop Broadcast
  Count of OSB snoop broadcasts. Counts by 1 per request causing OSB snoops to be broadcast; does not count all the snoops generated by OSB.
  .cancelled, umask=0x10: Cancelled. OSB snoop broadcast cancelled due to D2C or other reasons. An OSB cancel is counted when an OSB local read is not allowed even when the transaction is a local InvItoE. It also counts D2C OSB cancels, including cases where D2C was not set in the first place for the transaction coming from the ring.
  .invitoe_local, umask=0x4: Local InvItoE
  .reads_local, umask=0x2: Local Reads
  .reads_local_useful, umask=0x20: Reads Local - Useful
  .remote, umask=0x8: Remote
  .remote_useful, umask=0x40: Remote - Useful
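The "useful" subevents suggest a simple effectiveness ratio for OSB broadcasts. A minimal Python sketch under the assumption that USEFUL counts are a subset of the corresponding broadcast counts:

  # Fraction of OSB broadcasts that turned out to be useful, e.g.
  # READS_LOCAL_USEFUL / READS_LOCAL or REMOTE_USEFUL / REMOTE.
  def osb_useful_fraction(useful: int, broadcasts: int) -> float:
      return useful / broadcasts if broadcasts else 0.0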
unc_h_osb_edr (uncore cache), event=0x54: OSB Early Data Return
  Counts the number of transactions that broadcast a snoop due to OSB, but found clean data in memory and were able to do an early data return.
  .all, umask=0x1: All
  .reads_local_i, umask=0x2: Reads to Local I
  .reads_local_s, umask=0x8: Reads to Local S
  .reads_remote_i, umask=0x4: Reads to Remote I
  .reads_remote_s, umask=0x10: Reads to Remote S

unc_h_requests (uncore cache), event=0x1: Read and Write Requests
  Counts the total number of read requests made into the Home Agent. Reads include all read opcodes (including RFO). Writes include all writes (streaming, evictions, HitM, etc.).
  .invitoe_local, umask=0x10: Local InvItoEs. This filter includes only InvItoEs coming from the local socket.
  .invitoe_remote, umask=0x20: Remote InvItoEs. This filter includes only InvItoEs coming from remote sockets.
  .reads, umask=0x3: Reads. Incoming read requests. This is a good proxy for LLC read misses (including RFOs).
  .reads_local, umask=0x1: Local Reads. This filter includes only read requests coming from the local socket. This is a good proxy for LLC read misses (including RFOs) from the local socket.
  .reads_remote, umask=0x2: Remote Reads. This filter includes only read requests coming from the remote socket. This is a good proxy for LLC read misses (including RFOs) from the remote socket.
  .writes, umask=0xc: Writes. Incoming write requests.
  .writes_local, umask=0x4: Local Writes. This filter includes only writes coming from the local socket.
  .writes_remote, umask=0x8: Remote Writes. This filter includes only writes coming from remote sockets.
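Since the description calls READS_LOCAL and READS_REMOTE proxies for LLC read misses, their ratio approximates how NUMA-local a workload's misses are. A minimal Python sketch:

  # Read locality at the Home Agent: 1.0 means all misses were served by the
  # local socket, 0.0 means all came from the remote socket.
  def read_locality(reads_local: int, reads_remote: int) -> float:
      total = reads_local + reads_remote
      return reads_local / total if total else 0.0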
unc_h_ring_ad_used (uncore cache), event=0x3e: HA AD Ring in Use
  Counts the number of cycles that the AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.
  .ccw, umask=0xc: Counterclockwise
  .ccw_even, umask=0x4: Counterclockwise and Even. Filters for the Counterclockwise and Even ring polarity.
  .ccw_odd, umask=0x8: Counterclockwise and Odd. Filters for the Counterclockwise and Odd ring polarity.
  .cw, umask=0x3: Clockwise
  .cw_even, umask=0x1: Clockwise and Even. Filters for the Clockwise and Even ring polarity.
  .cw_odd, umask=0x2: Clockwise and Odd. Filters for the Clockwise and Odd ring polarity.

unc_h_ring_ak_used (uncore cache), event=0x3f: HA AK Ring in Use
  Counts the number of cycles that the AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.
  .all, umask=0xf: All
  .ccw, umask=0xc: Counterclockwise
  .ccw_even, umask=0x4: Counterclockwise and Even. Filters for the Counterclockwise and Even ring polarity.
  .ccw_odd, umask=0x8: Counterclockwise and Odd. Filters for the Counterclockwise and Odd ring polarity.
  .cw, umask=0x3: Clockwise
  .cw_even, umask=0x1: Clockwise and Even. Filters for the Clockwise and Even ring polarity.
  .cw_odd, umask=0x2: Clockwise and Odd. Filters for the Clockwise and Odd ring polarity.

unc_h_ring_bl_used (uncore cache), event=0x40: HA BL Ring in Use
  Counts the number of cycles that the BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.
  .all, umask=0xf: All
  .ccw, umask=0xc: Counterclockwise
  .ccw_even, umask=0x4: Counterclockwise and Even. Filters for the Counterclockwise and Even ring polarity.
  .ccw_odd, umask=0x8: Counterclockwise and Odd. Filters for the Counterclockwise and Odd ring polarity.
  .cw, umask=0x3: Clockwise
  .cw_even, umask=0x1: Clockwise and Even. Filters for the Clockwise and Even ring polarity.
  .cw_odd, umask=0x2: Clockwise and Odd. Filters for the Clockwise and Odd ring polarity.
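Because the RING_*_USED events count cycles in use at this ring stop, dividing by uclks from unc_h_clockticks (event=0x0, listed above) over the same interval gives a per-direction utilization. A minimal Python sketch:

  # Ring utilization at the HA ring stop, 0.0..1.0 per direction/polarity.
  def ring_utilization(used_cycles: int, uclks: int) -> float:
      return used_cycles / uclks if uclks else 0.0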
unc_h_rpq_cycles_no_reg_credits (uncore cache), event=0x15: iMC RPQ Credits Empty - Regular
  Counts the number of cycles when there are no regular credits available for posting reads from the HA into the iMC. In order to send reads into the memory controller, the HA must first acquire a credit for the iMC's RPQ (read pending queue). This queue is broken into regular credits/buffers that are used by general reads, and special requests such as ISOCH reads. This count only tracks the regular credits. Common high bandwidth workloads should be able to make use of all of the regular buffers, but it will be difficult (and uncommon) to make use of both the regular and special buffers at the same time. One can filter based on the memory controller channel; one or more channels can be tracked at a given time.
  .chn0, umask=0x1: Channel 0. Filter for memory controller channel 0 only.
  .chn1, umask=0x2: Channel 1. Filter for memory controller channel 1 only.
  .chn2, umask=0x4: Channel 2. Filter for memory controller channel 2 only.
  .chn3, umask=0x8: Channel 3. Filter for memory controller channel 3 only.

unc_h_rpq_cycles_no_spec_credits (uncore cache), event=0x16: iMC RPQ Credits Empty - Special
  Counts the number of cycles when there are no special credits available for posting reads from the HA into the iMC. In order to send reads into the memory controller, the HA must first acquire a credit for the iMC's RPQ (read pending queue). This queue is broken into regular credits/buffers that are used by general reads, and special requests such as ISOCH reads. This count only tracks the special credits. This statistic is generally not interesting for general IA workloads, but may be of interest for understanding the characteristics of systems using ISOCH. One can filter based on the memory controller channel; one or more channels can be tracked at a given time.
  .chn0, umask=0x1: Channel 0. Filter for memory controller channel 0 only.
  .chn1, umask=0x2: Channel 1. Filter for memory controller channel 1 only.
  .chn2, umask=0x4: Channel 2. Filter for memory controller channel 2 only.
  .chn3, umask=0x8: Channel 3. Filter for memory controller channel 3 only.
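A per-channel credit starvation fraction falls out of pairing each CHNn count with the uclk count from the same run. A minimal Python sketch; names are hypothetical:

  # Fraction of HA cycles each memory channel had no regular RPQ read credits.
  def rpq_starvation(no_credit_cycles: dict, uclks: int) -> dict:
      return {chan: (cycles / uclks if uclks else 0.0)
              for chan, cycles in no_credit_cycles.items()}

  # Example: rpq_starvation({0: 1.2e6, 1: 0.9e6, 2: 1.1e6, 3: 1.0e6}, 2.4e9)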
unc_h_sbo0_credits_acquired.ad | uncore cache | SBo0 Credits Acquired; For AD Ring | event=0x68,umask=1 | 01 | Number of Sbo 0 credits acquired in a given cycle, per ring
unc_h_sbo0_credits_acquired.bl | uncore cache | SBo0 Credits Acquired; For BL Ring | event=0x68,umask=2 | 01 | Number of Sbo 0 credits acquired in a given cycle, per ring
unc_h_sbo0_credit_occupancy.ad | uncore cache | SBo0 Credits Occupancy; For AD Ring | event=0x6a,umask=1 | 01 | Number of Sbo 0 credits in use in a given cycle, per ring
unc_h_sbo0_credit_occupancy.bl | uncore cache | SBo0 Credits Occupancy; For BL Ring | event=0x6a,umask=2 | 01 | Number of Sbo 0 credits in use in a given cycle, per ring
unc_h_sbo1_credits_acquired.ad | uncore cache | SBo1 Credits Acquired; For AD Ring | event=0x69,umask=1 | 01 | Number of Sbo 1 credits acquired in a given cycle, per ring
unc_h_sbo1_credits_acquired.bl | uncore cache | SBo1 Credits Acquired; For BL Ring | event=0x69,umask=2 | 01 | Number of Sbo 1 credits acquired in a given cycle, per ring
unc_h_sbo1_credit_occupancy.ad | uncore cache | SBo1 Credits Occupancy; For AD Ring | event=0x6b,umask=1 | 01 | Number of Sbo 1 credits in use in a given cycle, per ring
unc_h_sbo1_credit_occupancy.bl | uncore cache | SBo1 Credits Occupancy; For BL Ring | event=0x6b,umask=2 | 01 | Number of Sbo 1 credits in use in a given cycle, per ring
unc_h_snoops_rsp_after_data.local | uncore cache | Data beat the Snoop Responses; Local Requests | event=0xa,umask=1 | 01 | Counts the number of reads when the snoop was on the critical path to the data return.; This filter includes only requests coming from the local socket
unc_h_snoops_rsp_after_data.remote | uncore cache | Data beat the Snoop Responses; Remote Requests | event=0xa,umask=2 | 01 | Counts the number of reads when the snoop was on the critical path to the data return.; This filter includes only requests coming from remote sockets
unc_h_snoop_cycles_ne.all | uncore cache | Cycles with Snoops Outstanding; All Requests | event=8,umask=3 | 01 | Counts cycles when one or more snoops are outstanding.; Tracked for snoops from both local and remote sockets
unc_h_snoop_cycles_ne.local | uncore cache | Cycles with Snoops Outstanding; Local Requests | event=8,umask=1 | 01 | Counts cycles when one or more snoops are outstanding.; This filter includes only requests coming from the local socket
unc_h_snoop_cycles_ne.remote | uncore cache | Cycles with Snoops Outstanding; Remote Requests | event=8,umask=2 | 01 | Counts cycles when one or more snoops are outstanding.; This filter includes only requests coming from remote sockets
unc_h_snoop_occupancy.local | uncore cache | Tracker Snoops Outstanding Accumulator; Local Requests | event=9,umask=1 | 01 | Accumulates the occupancy of the local HA tracker pool entries that have snoops pending in every cycle.  This can be used in conjunction with the not empty stat to calculate average queue occupancy or the allocations stat in order to calculate average queue latency.  HA trackers are allocated as soon as a request enters the HA if an HT (Home Tracker) entry is available and this occupancy is decremented when all the snoop responses have returned.; This filter includes only requests coming from the local socket
unc_h_snoop_occupancy.remote | uncore cache | Tracker Snoops Outstanding Accumulator; Remote Requests | event=9,umask=2 | 01 | Accumulates the occupancy of the local HA tracker pool entries that have snoops pending in every cycle.  This can be used in conjunction with the not empty stat to calculate average queue occupancy or the allocations stat in order to calculate average queue latency.  HA trackers are allocated as soon as a request enters the HA if an HT (Home Tracker) entry is available and this occupancy is decremented when all the snoop responses have returned.; This filter includes only requests coming from remote sockets
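The snoop occupancy description spells out the derived metric it supports: divide the per-cycle accumulator by the not-empty cycle count to get an average depth. A short helper, as pure arithmetic on counts already collected (variable names are illustrative):

def avg_snoops_pending(occupancy_sum: int, cycles_not_empty: int) -> float:
    """Average number of snoops pending while any snoop was outstanding.

    occupancy_sum    -- UNC_H_SNOOP_OCCUPANCY.* (per-cycle accumulator)
    cycles_not_empty -- UNC_H_SNOOP_CYCLES_NE.* (cycles with >= 1 snoop pending)
    """
    return occupancy_sum / cycles_not_empty if cycles_not_empty else 0.0

# e.g. avg_snoops_pending(1_200_000, 300_000) -> 4.0 snoops pending on average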
unc_h_snoop_resp.rspcnflct | uncore cache | Snoop Responses Received; RSPCNFLCT* | event=0x21,umask=0x40 | 01 | Counts the total number of RspI snoop responses received.  Whenever snoops are issued, one or more snoop responses will be returned depending on the topology of the system.  In systems larger than 2s, when multiple snoops are returned this will count all the snoops that are received.  For example, if 3 snoops were issued and returned RspI, RspS, and RspSFwd; then each of these sub-events would increment by 1.; Filters for snoop responses of RspConflict.  This is returned when a snoop finds an existing outstanding transaction in a remote caching agent when it CAMs that caching agent.  This triggers conflict resolution hardware.  This covers both RspCnflct and RspCnflctWbI
unc_h_snoop_resp.rspi | uncore cache | Snoop Responses Received; RspI | event=0x21,umask=1 | 01 | Counts the total number of RspI snoop responses received.  Whenever snoops are issued, one or more snoop responses will be returned depending on the topology of the system.  In systems larger than 2s, when multiple snoops are returned this will count all the snoops that are received.  For example, if 3 snoops were issued and returned RspI, RspS, and RspSFwd; then each of these sub-events would increment by 1.; Filters for snoop responses of RspI.  RspI is returned when the remote cache does not have the data, or when the remote cache silently evicts data (such as when an RFO hits non-modified data)
unc_h_snoop_resp.rspifwd | uncore cache | Snoop Responses Received; RspIFwd | event=0x21,umask=4 | 01 | Counts the total number of RspI snoop responses received.  Whenever snoops are issued, one or more snoop responses will be returned depending on the topology of the system.  In systems larger than 2s, when multiple snoops are returned this will count all the snoops that are received.  For example, if 3 snoops were issued and returned RspI, RspS, and RspSFwd; then each of these sub-events would increment by 1.; Filters for snoop responses of RspIFwd.  This is returned when a remote caching agent forwards data and the requesting agent is able to acquire the data in E or M states.  This is commonly returned with RFO transactions.  It can be either a HitM or a HitFE
unc_h_snoop_resp.rsps | uncore cache | Snoop Responses Received; RspS | event=0x21,umask=2 | 01 | Counts the total number of RspI snoop responses received.  Whenever snoops are issued, one or more snoop responses will be returned depending on the topology of the system.  In systems larger than 2s, when multiple snoops are returned this will count all the snoops that are received.  For example, if 3 snoops were issued and returned RspI, RspS, and RspSFwd; then each of these sub-events would increment by 1.; Filters for snoop responses of RspS.  RspS is returned when a remote cache has data but is not forwarding it.  It is a way to let the requesting socket know that it cannot allocate the data in E state.  No data is sent with S RspS
unc_h_snoop_resp.rspsfwd | uncore cache | Snoop Responses Received; RspSFwd | event=0x21,umask=8 | 01 | Counts the total number of RspI snoop responses received.  Whenever snoops are issued, one or more snoop responses will be returned depending on the topology of the system.  In systems larger than 2s, when multiple snoops are returned this will count all the snoops that are received.  For example, if 3 snoops were issued and returned RspI, RspS, and RspSFwd; then each of these sub-events would increment by 1.; Filters for a snoop response of RspSFwd.  This is returned when a remote caching agent forwards data but holds on to its current copy.  This is common for data and code reads that hit in a remote socket in E or F state
unc_h_snoop_resp.rsp_fwd_wb | uncore cache | Snoop Responses Received; Rsp*Fwd*WB | event=0x21,umask=0x20 | 01 | Counts the total number of RspI snoop responses received.  Whenever snoops are issued, one or more snoop responses will be returned depending on the topology of the system.  In systems larger than 2s, when multiple snoops are returned this will count all the snoops that are received.  For example, if 3 snoops were issued and returned RspI, RspS, and RspSFwd; then each of these sub-events would increment by 1.; Filters for a snoop response of Rsp*Fwd*WB.  This snoop response is only used in 4s systems.  It is used when a snoop HITM's in a remote caching agent and it directly forwards data to a requestor, and simultaneously returns data to the home to be written back to memory
unc_h_snoop_resp.rsp_wb | uncore cache | Snoop Responses Received; Rsp*WB | event=0x21,umask=0x10 | 01 | Counts the total number of RspI snoop responses received.  Whenever snoops are issued, one or more snoop responses will be returned depending on the topology of the system.  In systems larger than 2s, when multiple snoops are returned this will count all the snoops that are received.  For example, if 3 snoops were issued and returned RspI, RspS, and RspSFwd; then each of these sub-events would increment by 1.; Filters for a snoop response of RspIWB or RspSWB.  This is returned when a non-RFO request hits in M state.  Data and Code Reads can return either RspIWB or RspSWB depending on how the system has been configured.  InvItoE transactions will also return RspIWB because they must acquire ownership
unc_h_snp_resp_recv_local.other | uncore cache | Snoop Responses Received Local; Other | event=0x60,umask=0x80 | 01 | Number of snoop responses received for a Local request; Filters for all other snoop responses
unc_h_snp_resp_recv_local.rspcnflct | uncore cache | Snoop Responses Received Local; RspCnflct | event=0x60,umask=0x40 | 01 | Number of snoop responses received for a Local request; Filters for snoop responses of RspConflict.  This is returned when a snoop finds an existing outstanding transaction in a remote caching agent when it CAMs that caching agent.  This triggers conflict resolution hardware.  This covers both RspCnflct and RspCnflctWbI
unc_h_snp_resp_recv_local.rspi | uncore cache | Snoop Responses Received Local; RspI | event=0x60,umask=1 | 01 | Number of snoop responses received for a Local request; Filters for snoop responses of RspI.  RspI is returned when the remote cache does not have the data, or when the remote cache silently evicts data (such as when an RFO hits non-modified data)
unc_h_snp_resp_recv_local.rspifwd | uncore cache | Snoop Responses Received Local; RspIFwd | event=0x60,umask=4 | 01 | Number of snoop responses received for a Local request; Filters for snoop responses of RspIFwd.  This is returned when a remote caching agent forwards data and the requesting agent is able to acquire the data in E or M states.  This is commonly returned with RFO transactions.  It can be either a HitM or a HitFE
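Because every sub-event above shares event=0x21 and differs only in umask, the full response mix can be collected in one pass and normalized. A sketch that reduces raw per-umask counts to a distribution (the counts passed in are illustrative inputs, not values from this table):

# umask encodings for unc_h_snoop_resp, from the table above
RESPONSES = {"rspi": 0x1, "rsps": 0x2, "rspifwd": 0x4, "rspsfwd": 0x8,
             "rsp_wb": 0x10, "rsp_fwd_wb": 0x20, "rspcnflct": 0x40}

def response_mix(counts: dict) -> dict:
    """Normalize per-response counts into fractions of all snoop responses."""
    total = sum(counts.values())
    return {name: n / total for name, n in counts.items()} if total else {}

# RspIFwd covers the HitM/HitFE forwarding case, so its share approximates
# how often snooped lines were served out of a remote cache:
mix = response_mix({"rspi": 700, "rsps": 150, "rspifwd": 100, "rspsfwd": 50})
print(f"remote-forwarded share: {mix.get('rspifwd', 0.0):.1%}")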
unc_h_snp_resp_recv_local.rsps | uncore cache | Snoop Responses Received Local; RspS | event=0x60,umask=2 | 01 | Number of snoop responses received for a Local request; Filters for snoop responses of RspS.  RspS is returned when a remote cache has data but is not forwarding it.  It is a way to let the requesting socket know that it cannot allocate the data in E state.  No data is sent with S RspS
unc_h_snp_resp_recv_local.rspsfwd | uncore cache | Snoop Responses Received Local; RspSFwd | event=0x60,umask=8 | 01 | Number of snoop responses received for a Local request; Filters for a snoop response of RspSFwd.  This is returned when a remote caching agent forwards data but holds on to its current copy.  This is common for data and code reads that hit in a remote socket in E or F state
unc_h_snp_resp_recv_local.rspxfwdxwb | uncore cache | Snoop Responses Received Local; Rsp*FWD*WB | event=0x60,umask=0x20 | 01 | Number of snoop responses received for a Local request; Filters for a snoop response of Rsp*Fwd*WB.  This snoop response is only used in 4s systems.  It is used when a snoop HITM's in a remote caching agent and it directly forwards data to a requestor, and simultaneously returns data to the home to be written back to memory
unc_h_snp_resp_recv_local.rspxwb | uncore cache | Snoop Responses Received Local; Rsp*WB | event=0x60,umask=0x10 | 01 | Number of snoop responses received for a Local request; Filters for a snoop response of RspIWB or RspSWB.  This is returned when a non-RFO request hits in M state.  Data and Code Reads can return either RspIWB or RspSWB depending on how the system has been configured.  InvItoE transactions will also return RspIWB because they must acquire ownership
unc_h_stall_no_sbo_credit.sbo0_ad | uncore cache | Stall on No Sbo Credits; For SBo0, AD Ring | event=0x6c,umask=1 | 01 | Number of cycles Egress is stalled waiting for an Sbo credit to become available.  Per Sbo, per Ring
unc_h_stall_no_sbo_credit.sbo0_bl | uncore cache | Stall on No Sbo Credits; For SBo0, BL Ring | event=0x6c,umask=4 | 01 | Number of cycles Egress is stalled waiting for an Sbo credit to become available.  Per Sbo, per Ring
unc_h_stall_no_sbo_credit.sbo1_ad | uncore cache | Stall on No Sbo Credits; For SBo1, AD Ring | event=0x6c,umask=2 | 01 | Number of cycles Egress is stalled waiting for an Sbo credit to become available.  Per Sbo, per Ring
unc_h_stall_no_sbo_credit.sbo1_bl | uncore cache | Stall on No Sbo Credits; For SBo1, BL Ring | event=0x6c,umask=8 | 01 | Number of cycles Egress is stalled waiting for an Sbo credit to become available.  Per Sbo, per Ring
unc_h_tad_requests_g0.region0 | uncore cache | HA Requests to a TAD Region - Group 0; TAD Region 0 | event=0x1b,umask=1 | 01 | Counts the number of HA requests to a given TAD region.  There are up to 11 TAD (target address decode) regions in each home agent.  All requests destined for the memory controller must first be decoded to determine which TAD region they are in.  This event is filtered based on the TAD region ID, and covers regions 0 to 7.  This event is useful for understanding how applications are using the memory that is spread across the different memory regions.  It is particularly useful for Monroe systems that use the TAD to enable individual channels to enter self-refresh to save power.; Filters requests made to TAD Region 0
unc_h_tad_requests_g0.region1 | uncore cache | HA Requests to a TAD Region - Group 0; TAD Region 1 | event=0x1b,umask=2 | 01 | Counts the number of HA requests to a given TAD region.  There are up to 11 TAD (target address decode) regions in each home agent.  All requests destined for the memory controller must first be decoded to determine which TAD region they are in.  This event is filtered based on the TAD region ID, and covers regions 0 to 7.  This event is useful for understanding how applications are using the memory that is spread across the different memory regions.  It is particularly useful for Monroe systems that use the TAD to enable individual channels to enter self-refresh to save power.; Filters requests made to TAD Region 1
unc_h_tad_requests_g0.region2 | uncore cache | HA Requests to a TAD Region - Group 0; TAD Region 2 | event=0x1b,umask=4 | 01 | Counts the number of HA requests to a given TAD region.  There are up to 11 TAD (target address decode) regions in each home agent.  All requests destined for the memory controller must first be decoded to determine which TAD region they are in.  This event is filtered based on the TAD region ID, and covers regions 0 to 7.  This event is useful for understanding how applications are using the memory that is spread across the different memory regions.  It is particularly useful for Monroe systems that use the TAD to enable individual channels to enter self-refresh to save power.; Filters requests made to TAD Region 2
unc_h_tad_requests_g0.region3 | uncore cache | HA Requests to a TAD Region - Group 0; TAD Region 3 | event=0x1b,umask=8 | 01 | Counts the number of HA requests to a given TAD region.  There are up to 11 TAD (target address decode) regions in each home agent.  All requests destined for the memory controller must first be decoded to determine which TAD region they are in.  This event is filtered based on the TAD region ID, and covers regions 0 to 7.  This event is useful for understanding how applications are using the memory that is spread across the different memory regions.  It is particularly useful for Monroe systems that use the TAD to enable individual channels to enter self-refresh to save power.; Filters requests made to TAD Region 3
unc_h_tad_requests_g0.region4 | uncore cache | HA Requests to a TAD Region - Group 0; TAD Region 4 | event=0x1b,umask=0x10 | 01 | Counts the number of HA requests to a given TAD region.  There are up to 11 TAD (target address decode) regions in each home agent.  All requests destined for the memory controller must first be decoded to determine which TAD region they are in.  This event is filtered based on the TAD region ID, and covers regions 0 to 7.  This event is useful for understanding how applications are using the memory that is spread across the different memory regions.  It is particularly useful for Monroe systems that use the TAD to enable individual channels to enter self-refresh to save power.; Filters requests made to TAD Region 4
unc_h_tad_requests_g0.region5 | uncore cache | HA Requests to a TAD Region - Group 0; TAD Region 5 | event=0x1b,umask=0x20 | 01 | Counts the number of HA requests to a given TAD region.  There are up to 11 TAD (target address decode) regions in each home agent.  All requests destined for the memory controller must first be decoded to determine which TAD region they are in.  This event is filtered based on the TAD region ID, and covers regions 0 to 7.  This event is useful for understanding how applications are using the memory that is spread across the different memory regions.  It is particularly useful for Monroe systems that use the TAD to enable individual channels to enter self-refresh to save power.; Filters requests made to TAD Region 5
unc_h_tad_requests_g0.region6 | uncore cache | HA Requests to a TAD Region - Group 0; TAD Region 6 | event=0x1b,umask=0x40 | 01 | Counts the number of HA requests to a given TAD region.  There are up to 11 TAD (target address decode) regions in each home agent.  All requests destined for the memory controller must first be decoded to determine which TAD region they are in.  This event is filtered based on the TAD region ID, and covers regions 0 to 7.  This event is useful for understanding how applications are using the memory that is spread across the different memory regions.  It is particularly useful for Monroe systems that use the TAD to enable individual channels to enter self-refresh to save power.; Filters requests made to TAD Region 6
unc_h_tad_requests_g0.region7 | uncore cache | HA Requests to a TAD Region - Group 0; TAD Region 7 | event=0x1b,umask=0x80 | 01 | Counts the number of HA requests to a given TAD region.  There are up to 11 TAD (target address decode) regions in each home agent.  All requests destined for the memory controller must first be decoded to determine which TAD region they are in.  This event is filtered based on the TAD region ID, and covers regions 0 to 7.  This event is useful for understanding how applications are using the memory that is spread across the different memory regions.  It is particularly useful for Monroe systems that use the TAD to enable individual channels to enter self-refresh to save power.; Filters requests made to TAD Region 7
unc_h_tad_requests_g1.region10 | uncore cache | HA Requests to a TAD Region - Group 1; TAD Region 10 | event=0x1c,umask=4 | 01 | Counts the number of HA requests to a given TAD region.  There are up to 11 TAD (target address decode) regions in each home agent.  All requests destined for the memory controller must first be decoded to determine which TAD region they are in.  This event is filtered based on the TAD region ID, and covers regions 8 to 10.  This event is useful for understanding how applications are using the memory that is spread across the different memory regions.  It is particularly useful for Monroe systems that use the TAD to enable individual channels to enter self-refresh to save power.; Filters requests made to TAD Region 10
unc_h_tad_requests_g1.region11 | uncore cache | HA Requests to a TAD Region - Group 1; TAD Region 11 | event=0x1c,umask=8 | 01 | Counts the number of HA requests to a given TAD region.  There are up to 11 TAD (target address decode) regions in each home agent.  All requests destined for the memory controller must first be decoded to determine which TAD region they are in.  This event is filtered based on the TAD region ID, and covers regions 8 to 10.  This event is useful for understanding how applications are using the memory that is spread across the different memory regions.  It is particularly useful for Monroe systems that use the TAD to enable individual channels to enter self-refresh to save power.; Filters requests made to TAD Region 11
unc_h_tad_requests_g1.region8 | uncore cache | HA Requests to a TAD Region - Group 1; TAD Region 8 | event=0x1c,umask=1 | 01 | Counts the number of HA requests to a given TAD region.  There are up to 11 TAD (target address decode) regions in each home agent.  All requests destined for the memory controller must first be decoded to determine which TAD region they are in.  This event is filtered based on the TAD region ID, and covers regions 8 to 10.  This event is useful for understanding how applications are using the memory that is spread across the different memory regions.  It is particularly useful for Monroe systems that use the TAD to enable individual channels to enter self-refresh to save power.; Filters requests made to TAD Region 8
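Since the Group 0 sub-events encode the region in the umask (umask = 1 << region), a per-region request histogram can be gathered in a single perf run. A sketch under the same assumptions as earlier (perf on PATH, an HA PMU exposed as "uncore_ha_0"; verify the PMU name on your system):

import subprocess

def tad_histogram(seconds=1):
    """Requests per TAD region 0..7 (event=0x1b, umask=1<<region, per the table)."""
    events = {r: f"uncore_ha_0/event=0x1b,umask={1 << r:#x}/" for r in range(8)}
    cmd = ["perf", "stat", "-a", "-x", ";"]
    for spec in events.values():
        cmd += ["-e", spec]
    cmd += ["sleep", str(seconds)]
    err = subprocess.run(cmd, capture_output=True, text=True).stderr
    lines = [l for l in err.splitlines() if l.count(";") >= 2]
    hist = {}
    for region, line in zip(events, lines):   # perf prints counts in -e order
        count = line.split(";")[0].strip()
        hist[region] = int(count) if count.isdigit() else 0
    return hist

print(tad_histogram())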
unc_h_tad_requests_g1.region9 | uncore cache | HA Requests to a TAD Region - Group 1; TAD Region 9 | event=0x1c,umask=2 | 01 | Counts the number of HA requests to a given TAD region.  There are up to 11 TAD (target address decode) regions in each home agent.  All requests destined for the memory controller must first be decoded to determine which TAD region they are in.  This event is filtered based on the TAD region ID, and covers regions 8 to 10.  This event is useful for understanding how applications are using the memory that is spread across the different memory regions.  It is particularly useful for Monroe systems that use the TAD to enable individual channels to enter self-refresh to save power.; Filters requests made to TAD Region 9
unc_h_tracker_cycles_full.all | uncore cache | Tracker Cycles Full; Cycles Completely Used | event=2,umask=2 | 01 | Counts the number of cycles when the local HA tracker pool is completely used.  This can be used with edge detect to identify the number of situations when the pool became fully utilized.  This should not be confused with RTID credit usage -- which must be tracked inside each cbo individually -- but represents the actual tracker buffer structure.  In other words, the system could be starved for RTIDs but not fill up the HA trackers.  HA trackers are allocated as soon as a request enters the HA and is released after the snoop response and data return (or post in the case of a write) and the response is returned on the ring.; Counts the number of cycles when the HA tracker pool (HT) is completely used including reserved HT entries.  It will not return a valid count when BT is disabled
unc_h_tracker_cycles_full.gp | uncore cache | Tracker Cycles Full; Cycles GP Completely Used | event=2,umask=1 | 01 | Counts the number of cycles when the local HA tracker pool is completely used.  This can be used with edge detect to identify the number of situations when the pool became fully utilized.  This should not be confused with RTID credit usage -- which must be tracked inside each cbo individually -- but represents the actual tracker buffer structure.  In other words, the system could be starved for RTIDs but not fill up the HA trackers.  HA trackers are allocated as soon as a request enters the HA and is released after the snoop response and data return (or post in the case of a write) and the response is returned on the ring.; Counts the number of cycles when the general purpose (GP) HA tracker pool (HT) is completely used.  It will not return a valid count when BT is disabled
unc_h_tracker_cycles_ne.all | uncore cache | Tracker Cycles Not Empty; All Requests | event=3,umask=3 | 01 | Counts the number of cycles when the local HA tracker pool is not empty.  This can be used with edge detect to identify the number of situations when the pool became empty.  This should not be confused with RTID credit usage -- which must be tracked inside each cbo individually -- but represents the actual tracker buffer structure.  In other words, this buffer could be completely empty, but there may still be credits in use by the CBos.  This stat can be used in conjunction with the occupancy accumulation stat in order to calculate average queue occupancy.  HA trackers are allocated as soon as a request enters the HA if an HT (Home Tracker) entry is available and is released after the snoop response and data return (or post in the case of a write) and the response is returned on the ring.; Requests coming from both local and remote sockets
unc_h_tracker_cycles_ne.local | uncore cache | Tracker Cycles Not Empty; Local Requests | event=3,umask=1 | 01 | Counts the number of cycles when the local HA tracker pool is not empty.  This can be used with edge detect to identify the number of situations when the pool became empty.  This should not be confused with RTID credit usage -- which must be tracked inside each cbo individually -- but represents the actual tracker buffer structure.  In other words, this buffer could be completely empty, but there may still be credits in use by the CBos.  This stat can be used in conjunction with the occupancy accumulation stat in order to calculate average queue occupancy.  HA trackers are allocated as soon as a request enters the HA if an HT (Home Tracker) entry is available and is released after the snoop response and data return (or post in the case of a write) and the response is returned on the ring.; This filter includes only requests coming from the local socket
unc_h_tracker_cycles_ne.remote | uncore cache | Tracker Cycles Not Empty; Remote Requests | event=3,umask=2 | 01 | Counts the number of cycles when the local HA tracker pool is not empty.  This can be used with edge detect to identify the number of situations when the pool became empty.  This should not be confused with RTID credit usage -- which must be tracked inside each cbo individually -- but represents the actual tracker buffer structure.  In other words, this buffer could be completely empty, but there may still be credits in use by the CBos.  This stat can be used in conjunction with the occupancy accumulation stat in order to calculate average queue occupancy.  HA trackers are allocated as soon as a request enters the HA if an HT (Home Tracker) entry is available and is released after the snoop response and data return (or post in the case of a write) and the response is returned on the ring.; This filter includes only requests coming from remote sockets
unc_h_tracker_occupancy.invitoe_local | uncore cache | Tracker Occupancy Accumulator; Local InvItoE Requests | event=4,umask=0x40 | 01 | Accumulates the occupancy of the local HA tracker pool in every cycle.  This can be used in conjunction with the not empty stat to calculate average queue occupancy or the allocations stat in order to calculate average queue latency.  HA trackers are allocated as soon as a request enters the HA if an HT (Home Tracker) entry is available and is released after the snoop response and data return (or post in the case of a write) and the response is returned on the ring
unc_h_tracker_occupancy.invitoe_remote | uncore cache | Tracker Occupancy Accumulator; Remote InvItoE Requests | event=4,umask=0x80 | 01 | Accumulates the occupancy of the local HA tracker pool in every cycle.  This can be used in conjunction with the not empty stat to calculate average queue occupancy or the allocations stat in order to calculate average queue latency.  HA trackers are allocated as soon as a request enters the HA if an HT (Home Tracker) entry is available and is released after the snoop response and data return (or post in the case of a write) and the response is returned on the ring
unc_h_tracker_occupancy.reads_local | uncore cache | Tracker Occupancy Accumulator; Local Read Requests | event=4,umask=4 | 01 | Accumulates the occupancy of the local HA tracker pool in every cycle.  This can be used in conjunction with the not empty stat to calculate average queue occupancy or the allocations stat in order to calculate average queue latency.  HA trackers are allocated as soon as a request enters the HA if an HT (Home Tracker) entry is available and is released after the snoop response and data return (or post in the case of a write) and the response is returned on the ring
unc_h_tracker_occupancy.reads_remote | uncore cache | Tracker Occupancy Accumulator; Remote Read Requests | event=4,umask=8 | 01 | Accumulates the occupancy of the local HA tracker pool in every cycle.  This can be used in conjunction with the not empty stat to calculate average queue occupancy or the allocations stat in order to calculate average queue latency.  HA trackers are allocated as soon as a request enters the HA if an HT (Home Tracker) entry is available and is released after the snoop response and data return (or post in the case of a write) and the response is returned on the ring
unc_h_tracker_occupancy.writes_local | uncore cache | Tracker Occupancy Accumulator; Local Write Requests | event=4,umask=0x10 | 01 | Accumulates the occupancy of the local HA tracker pool in every cycle.  This can be used in conjunction with the not empty stat to calculate average queue occupancy or the allocations stat in order to calculate average queue latency.  HA trackers are allocated as soon as a request enters the HA if an HT (Home Tracker) entry is available and is released after the snoop response and data return (or post in the case of a write) and the response is returned on the ring
unc_h_tracker_occupancy.writes_remote | uncore cache | Tracker Occupancy Accumulator; Remote Write Requests | event=4,umask=0x20 | 01 | Accumulates the occupancy of the local HA tracker pool in every cycle.  This can be used in conjunction with the not empty stat to calculate average queue occupancy or the allocations stat in order to calculate average queue latency.  HA trackers are allocated as soon as a request enters the HA if an HT (Home Tracker) entry is available and is released after the snoop response and data return (or post in the case of a write) and the response is returned on the ring
unc_h_tracker_pending_occupancy.local | uncore cache | Data Pending Occupancy Accumulator; Local Requests | event=5,umask=1 | 01 | Accumulates the number of transactions that have data from the memory controller until they get scheduled to the Egress.  This can be used to calculate the queuing latency for two things.  (1) If the system is waiting for snoops, this will increase.  (2) If the system can't schedule to the Egress because of either (a) Egress Credits or (b) QPI BL IGR credits for remote requests.; This filter includes only requests coming from the local socket
unc_h_tracker_pending_occupancy.remote | uncore cache | Data Pending Occupancy Accumulator; Remote Requests | event=5,umask=2 | 01 | Accumulates the number of transactions that have data from the memory controller until they get scheduled to the Egress.  This can be used to calculate the queuing latency for two things.  (1) If the system is waiting for snoops, this will increase.  (2) If the system can't schedule to the Egress because of either (a) Egress Credits or (b) QPI BL IGR credits for remote requests.; This filter includes only requests coming from remote sockets
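The tracker descriptions repeatedly point at the same two derivations: occupancy divided by not-empty cycles gives average depth, and occupancy divided by allocations gives average latency (Little's law). A pure-arithmetic sketch; the allocations count is whatever "allocations stat" the text refers to, whose exact event name is not shown in this table, and the 0.4 ns cycle period is an assumption for a 2.5 GHz uclk:

def tracker_stats(occupancy_sum, cycles_not_empty, allocations, cycle_ns=0.4):
    """Derived HA tracker metrics, per the event descriptions above.

    occupancy_sum    -- UNC_H_TRACKER_OCCUPANCY.* accumulator
    cycles_not_empty -- UNC_H_TRACKER_CYCLES_NE.* count
    allocations      -- tracker allocations over the same window (assumed input)
    cycle_ns         -- assumed uncore clock period in nanoseconds
    """
    avg_depth = occupancy_sum / cycles_not_empty if cycles_not_empty else 0.0
    avg_latency_cycles = occupancy_sum / allocations if allocations else 0.0
    return avg_depth, avg_latency_cycles, avg_latency_cycles * cycle_ns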
unc_h_txr_ad.hom | uncore cache | Outbound NDR Ring Transactions; Non-data Responses | event=0xf,umask=4 | 01 | Counts the number of outbound transactions on the AD ring.  This can be filtered by the NDR and SNP message classes.  See the filter descriptions for more details.; Filter for outbound NDR transactions sent on the AD ring.  NDR stands for non-data response and is generally used for completions that do not include data.  AD NDR is used for transactions to remote sockets
unc_h_txr_ad_cycles_full.all | uncore cache | AD Egress Full; All | event=0x2a,umask=3 | 01 | AD Egress Full; Cycles full from both schedulers
unc_h_txr_ad_cycles_full.sched0 | uncore cache | AD Egress Full; Scheduler 0 | event=0x2a,umask=1 | 01 | AD Egress Full; Filter for cycles full from scheduler bank 0
unc_h_txr_ad_cycles_full.sched1 | uncore cache | AD Egress Full; Scheduler 1 | event=0x2a,umask=2 | 01 | AD Egress Full; Filter for cycles full from scheduler bank 1
unc_h_txr_ad_cycles_ne.all | uncore cache | AD Egress Not Empty; All | event=0x29,umask=3 | 01 | AD Egress Not Empty; Cycles not empty from both schedulers
unc_h_txr_ad_cycles_ne.sched0 | uncore cache | AD Egress Not Empty; Scheduler 0 | event=0x29,umask=1 | 01 | AD Egress Not Empty; Filter for cycles not empty from scheduler bank 0
unc_h_txr_ad_cycles_ne.sched1 | uncore cache | AD Egress Not Empty; Scheduler 1 | event=0x29,umask=2 | 01 | AD Egress Not Empty; Filter for cycles not empty from scheduler bank 1
unc_h_txr_ad_inserts.all | uncore cache | AD Egress Allocations; All | event=0x27,umask=3 | 01 | AD Egress Allocations; Allocations from both schedulers
unc_h_txr_ad_inserts.sched0 | uncore cache | AD Egress Allocations; Scheduler 0 | event=0x27,umask=1 | 01 | AD Egress Allocations; Filter for allocations from scheduler bank 0
unc_h_txr_ad_inserts.sched1 | uncore cache | AD Egress Allocations; Scheduler 1 | event=0x27,umask=2 | 01 | AD Egress Allocations; Filter for allocations from scheduler bank 1
unc_h_txr_ak_cycles_full.all | uncore cache | AK Egress Full; All | event=0x32,umask=3 | 01 | AK Egress Full; Cycles full from both schedulers
unc_h_txr_ak_cycles_full.sched0 | uncore cache | AK Egress Full; Scheduler 0 | event=0x32,umask=1 | 01 | AK Egress Full; Filter for cycles full from scheduler bank 0
unc_h_txr_ak_cycles_full.sched1 | uncore cache | AK Egress Full; Scheduler 1 | event=0x32,umask=2 | 01 | AK Egress Full; Filter for cycles full from scheduler bank 1
unc_h_txr_ak_cycles_ne.all | uncore cache | AK Egress Not Empty; All | event=0x31,umask=3 | 01 | AK Egress Not Empty; Cycles not empty from both schedulers
unc_h_txr_ak_cycles_ne.sched0 | uncore cache | AK Egress Not Empty; Scheduler 0 | event=0x31,umask=1 | 01 | AK Egress Not Empty; Filter for cycles not empty from scheduler bank 0
unc_h_txr_ak_cycles_ne.sched1 | uncore cache | AK Egress Not Empty; Scheduler 1 | event=0x31,umask=2 | 01 | AK Egress Not Empty; Filter for cycles not empty from scheduler bank 1
unc_h_txr_ak_inserts.all | uncore cache | AK Egress Allocations; All | event=0x2f,umask=3 | 01 | AK Egress Allocations; Allocations from both schedulers
unc_h_txr_ak_inserts.sched0 | uncore cache | AK Egress Allocations; Scheduler 0 | event=0x2f,umask=1 | 01 | AK Egress Allocations; Filter for allocations from scheduler bank 0
unc_h_txr_ak_inserts.sched1 | uncore cache | AK Egress Allocations; Scheduler 1 | event=0x2f,umask=2 | 01 | AK Egress Allocations; Filter for allocations from scheduler bank 1
unc_h_txr_bl.drs_cache | uncore cache | Outbound DRS Ring Transactions to Cache; Data to Cache | event=0x10,umask=1 | 01 | Counts the number of DRS messages sent out on the BL ring.  This can be filtered by the destination.; Filter for data being sent to the cache
unc_h_txr_bl.drs_core | uncore cache | Outbound DRS Ring Transactions to Cache; Data to Core | event=0x10,umask=2 | 01 | Counts the number of DRS messages sent out on the BL ring.  This can be filtered by the destination.; Filter for data being sent directly to the requesting core
unc_h_txr_bl.drs_qpi | uncore cache | Outbound DRS Ring Transactions to Cache; Data to QPI | event=0x10,umask=4 | 01 | Counts the number of DRS messages sent out on the BL ring.  This can be filtered by the destination.; Filter for data being sent to a remote socket over QPI
unc_h_txr_bl_cycles_full.all | uncore cache | BL Egress Full; All | event=0x36,umask=3 | 01 | BL Egress Full; Cycles full from both schedulers
unc_h_txr_bl_cycles_full.sched0 | uncore cache | BL Egress Full; Scheduler 0 | event=0x36,umask=1 | 01 | BL Egress Full; Filter for cycles full from scheduler bank 0
unc_h_txr_bl_cycles_full.sched1 | uncore cache | BL Egress Full; Scheduler 1 | event=0x36,umask=2 | 01 | BL Egress Full; Filter for cycles full from scheduler bank 1
unc_h_txr_bl_cycles_ne.all | uncore cache | BL Egress Not Empty; All | event=0x35,umask=3 | 01 | BL Egress Not Empty; Cycles not empty from both schedulers
unc_h_txr_bl_cycles_ne.sched0 | uncore cache | BL Egress Not Empty; Scheduler 0 | event=0x35,umask=1 | 01 | BL Egress Not Empty; Filter for cycles not empty from scheduler bank 0
unc_h_txr_bl_cycles_ne.sched1 | uncore cache | BL Egress Not Empty; Scheduler 1 | event=0x35,umask=2 | 01 | BL Egress Not Empty; Filter for cycles not empty from scheduler bank 1
unc_h_txr_bl_inserts.all | uncore cache | BL Egress Allocations; All | event=0x33,umask=3 | 01 | BL Egress Allocations; Allocations from both schedulers
unc_h_txr_bl_inserts.sched0 | uncore cache | BL Egress Allocations; Scheduler 0 | event=0x33,umask=1 | 01 | BL Egress Allocations; Filter for allocations from scheduler bank 0
unc_h_txr_bl_inserts.sched1 | uncore cache | BL Egress Allocations; Scheduler 1 | event=0x33,umask=2 | 01 | BL Egress Allocations; Filter for allocations from scheduler bank 1
unc_h_txr_starved.ak | uncore cache | Injection Starvation; For AK Ring | event=0x6d,umask=1 | 01 | Counts injection starvation.  This starvation is triggered when the Egress cannot send a transaction onto the ring for a long period of time
unc_h_txr_starved.bl | uncore cache | Injection Starvation; For BL Ring | event=0x6d,umask=2 | 01 | Counts injection starvation.  This starvation is triggered when the Egress cannot send a transaction onto the ring for a long period of time
unc_h_wpq_cycles_no_reg_credits.chn0 | uncore cache | HA iMC CHN0 WPQ Credits Empty - Regular; Channel 0 | event=0x18,umask=1 | 01 | Counts the number of cycles when there are no regular credits available for posting writes from the HA into the iMC.  In order to send writes into the memory controller, the HA must first acquire a credit for the iMC's WPQ (write pending queue).  This queue is broken into regular credits/buffers that are used by general writes, and special requests such as ISOCH writes.  This count only tracks the regular credits.  Common high bandwidth workloads should be able to make use of all of the regular buffers, but it will be difficult (and uncommon) to make use of both the regular and special buffers at the same time.  One can filter based on the memory controller channel.  One or more channels can be tracked at a given time.; Filter for memory controller channel 0 only
unc_h_wpq_cycles_no_reg_credits.chn1 | uncore cache | HA iMC CHN0 WPQ Credits Empty - Regular; Channel 1 | event=0x18,umask=2 | 01 | Counts the number of cycles when there are no regular credits available for posting writes from the HA into the iMC.  In order to send writes into the memory controller, the HA must first acquire a credit for the iMC's WPQ (write pending queue).  This queue is broken into regular credits/buffers that are used by general writes, and special requests such as ISOCH writes.  This count only tracks the regular credits.  Common high bandwidth workloads should be able to make use of all of the regular buffers, but it will be difficult (and uncommon) to make use of both the regular and special buffers at the same time.  One can filter based on the memory controller channel.  One or more channels can be tracked at a given time.; Filter for memory controller channel 1 only
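The egress full/not-empty pairs above are natural backpressure indicators when normalized by a clock count. A small sketch; the HA clockticks count is an assumed input, since its event encoding is not shown in this table:

def egress_pressure(cycles_full, cycles_not_empty, clockticks):
    """Backpressure indicators for one HA egress queue (AD, AK, or BL).

    cycles_full      -- UNC_H_TXR_*_CYCLES_FULL.ALL
    cycles_not_empty -- UNC_H_TXR_*_CYCLES_NE.ALL
    clockticks       -- HA uclk count over the same window (assumed input)
    """
    if not clockticks:
        return 0.0, 0.0
    return cycles_full / clockticks, cycles_not_empty / clockticks

full_frac, busy_frac = egress_pressure(50_000, 800_000, 2_500_000)
print(f"egress full {full_frac:.1%} of cycles, non-empty {busy_frac:.1%}")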
unc_h_wpq_cycles_no_reg_credits.chn2 | uncore cache | HA iMC CHN0 WPQ Credits Empty - Regular; Channel 2 | event=0x18,umask=4 | 01 | Counts the number of cycles when there are no regular credits available for posting writes from the HA into the iMC.  In order to send writes into the memory controller, the HA must first acquire a credit for the iMC's WPQ (write pending queue).  This queue is broken into regular credits/buffers that are used by general writes, and special requests such as ISOCH writes.  This count only tracks the regular credits.  Common high bandwidth workloads should be able to make use of all of the regular buffers, but it will be difficult (and uncommon) to make use of both the regular and special buffers at the same time.  One can filter based on the memory controller channel.  One or more channels can be tracked at a given time.; Filter for memory controller channel 2 only
unc_h_wpq_cycles_no_reg_credits.chn3 | uncore cache | HA iMC CHN0 WPQ Credits Empty - Regular; Channel 3 | event=0x18,umask=8 | 01 | Counts the number of cycles when there are no regular credits available for posting writes from the HA into the iMC.  In order to send writes into the memory controller, the HA must first acquire a credit for the iMC's WPQ (write pending queue).  This queue is broken into regular credits/buffers that are used by general writes, and special requests such as ISOCH writes.  This count only tracks the regular credits.  Common high bandwidth workloads should be able to make use of all of the regular buffers, but it will be difficult (and uncommon) to make use of both the regular and special buffers at the same time.  One can filter based on the memory controller channel.  One or more channels can be tracked at a given time.; Filter for memory controller channel 3 only
unc_h_wpq_cycles_no_spec_credits.chn0 | uncore cache | HA iMC CHN0 WPQ Credits Empty - Special; Channel 0 | event=0x19,umask=1 | 01 | Counts the number of cycles when there are no special credits available for posting writes from the HA into the iMC.  In order to send writes into the memory controller, the HA must first acquire a credit for the iMC's WPQ (write pending queue).  This queue is broken into regular credits/buffers that are used by general writes, and special requests such as ISOCH writes.  This count only tracks the special credits.  This statistic is generally not interesting for general IA workloads, but may be of interest for understanding the characteristics of systems using ISOCH.  One can filter based on the memory controller channel.  One or more channels can be tracked at a given time.; Filter for memory controller channel 0 only
unc_h_wpq_cycles_no_spec_credits.chn1 | uncore cache | HA iMC CHN0 WPQ Credits Empty - Special; Channel 1 | event=0x19,umask=2 | 01 | Counts the number of cycles when there are no special credits available for posting writes from the HA into the iMC.  In order to send writes into the memory controller, the HA must first acquire a credit for the iMC's WPQ (write pending queue).  This queue is broken into regular credits/buffers that are used by general writes, and special requests such as ISOCH writes.  This count only tracks the special credits.  This statistic is generally not interesting for general IA workloads, but may be of interest for understanding the characteristics of systems using ISOCH.  One can filter based on the memory controller channel.  One or more channels can be tracked at a given time.; Filter for memory controller channel 1 only
unc_h_wpq_cycles_no_spec_credits.chn2 | uncore cache | HA iMC CHN0 WPQ Credits Empty - Special; Channel 2 | event=0x19,umask=4 | 01 | Counts the number of cycles when there are no special credits available for posting writes from the HA into the iMC.  In order to send writes into the memory controller, the HA must first acquire a credit for the iMC's WPQ (write pending queue).  This queue is broken into regular credits/buffers that are used by general writes, and special requests such as ISOCH writes.  This count only tracks the special credits.  This statistic is generally not interesting for general IA workloads, but may be of interest for understanding the characteristics of systems using ISOCH.  One can filter based on the memory controller channel.  One or more channels can be tracked at a given time.; Filter for memory controller channel 2 only
unc_h_wpq_cycles_no_spec_credits.chn3 | uncore cache | HA iMC CHN0 WPQ Credits Empty - Special; Channel 3 | event=0x19,umask=8 | 01 | Counts the number of cycles when there are no special credits available for posting writes from the HA into the iMC.  In order to send writes into the memory controller, the HA must first acquire a credit for the iMC's WPQ (write pending queue).  This queue is broken into regular credits/buffers that are used by general writes, and special requests such as ISOCH writes.  This count only tracks the special credits.  This statistic is generally not interesting for general IA workloads, but may be of interest for understanding the characteristics of systems using ISOCH.  One can filter based on the memory controller channel.  One or more channels can be tracked at a given time.; Filter for memory controller channel 3 only
uncore_irp
unc_i_cache_total_occupancy.any | uncore interconnect | Total Write Cache Occupancy; Any Source | event=0x12,umask=1 | 01 | Accumulates the number of reads and writes that are outstanding in the uncore in each cycle.  This is effectively the sum of the READ_OCCUPANCY and WRITE_OCCUPANCY events.; Tracks all requests from any source port
unc_i_cache_total_occupancy.source | uncore interconnect | Total Write Cache Occupancy; Select Source | event=0x12,umask=2 | 01 | Accumulates the number of reads and writes that are outstanding in the uncore in each cycle.  This is effectively the sum of the READ_OCCUPANCY and WRITE_OCCUPANCY events.; Tracks only those requests that come from the port specified in the IRP_PmonFilter.OrderingQ register.  This register allows one to select one specific queue.  It is not possible to monitor multiple queues at a time
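Since the occupancy accumulator counts outstanding requests per cycle, dividing it by the IRP clock count (unc_i_clockticks, listed just below) yields the average number of reads and writes outstanding. A minimal helper:

def avg_outstanding(cache_occupancy_sum, irp_clockticks):
    """Average reads+writes outstanding in the IRP write cache.

    cache_occupancy_sum -- UNC_I_CACHE_TOTAL_OCCUPANCY.ANY accumulator
    irp_clockticks      -- UNC_I_CLOCKTICKS over the same window
    """
    return cache_occupancy_sum / irp_clockticks if irp_clockticks else 0.0

# e.g. 12_000_000 occupancy-cycles over 2_000_000 IRP clocks -> 6.0 outstanding
print(avg_outstanding(12_000_000, 2_000_000))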
unc_i_clockticks | uncore interconnect | Clocks in the IRP | event=0 | 01 | Number of clocks in the IRP
unc_i_coherent_ops.clflush | uncore interconnect | Coherent Ops; CLFlush | event=0x13,umask=0x80 | 01 | Counts the number of coherency related operations serviced by the IRP
unc_i_coherent_ops.crd | uncore interconnect | Coherent Ops; CRd | event=0x13,umask=2 | 01 | Counts the number of coherency related operations serviced by the IRP
unc_i_coherent_ops.drd | uncore interconnect | Coherent Ops; DRd | event=0x13,umask=4 | 01 | Counts the number of coherency related operations serviced by the IRP
unc_i_coherent_ops.pcidcahint | uncore interconnect | Coherent Ops; PCIDCAHint | event=0x13,umask=0x20 | 01 | Counts the number of coherency related operations serviced by the IRP
unc_i_coherent_ops.pcirdcur | uncore interconnect | Coherent Ops; PCIRdCur | event=0x13,umask=1 | 01 | Counts the number of coherency related operations serviced by the IRP
unc_i_coherent_ops.pcitom | uncore interconnect | Coherent Ops; PCIItoM | event=0x13,umask=0x10 | 01 | Counts the number of coherency related operations serviced by the IRP
unc_i_coherent_ops.rfo | uncore interconnect | Coherent Ops; RFO | event=0x13,umask=8 | 01 | Counts the number of coherency related operations serviced by the IRP
unc_i_coherent_ops.wbmtoi | uncore interconnect | Coherent Ops; WbMtoI | event=0x13,umask=0x40 | 01 | Counts the number of coherency related operations serviced by the IRP
unc_i_misc0.2nd_atomic_insert | uncore interconnect | Misc Events - Set 0; Cache Inserts of Atomic Transactions as Secondary | event=0x14,umask=0x10 | 01 | Counts Timeouts - Set 0 : Cache Inserts of Atomic Transactions as Secondary
unc_i_misc0.2nd_rd_insert | uncore interconnect | Misc Events - Set 0; Cache Inserts of Read Transactions as Secondary | event=0x14,umask=4 | 01 | Counts Timeouts - Set 0 : Cache Inserts of Read Transactions as Secondary
unc_i_misc0.2nd_wr_insert | uncore interconnect | Misc Events - Set 0; Cache Inserts of Write Transactions as Secondary | event=0x14,umask=8 | 01 | Counts Timeouts - Set 0 : Cache Inserts of Write Transactions as Secondary
unc_i_misc0.fast_rej | uncore interconnect | Misc Events - Set 0; Fastpath Rejects | event=0x14,umask=2 | 01 | Counts Timeouts - Set 0 : Fastpath Rejects
unc_i_misc0.fast_req | uncore interconnect | Misc Events - Set 0; Fastpath Requests | event=0x14,umask=1 | 01 | Counts Timeouts - Set 0 : Fastpath Requests
unc_i_misc0.fast_xfer | uncore interconnect | Misc Events - Set 0; Fastpath Transfers From Primary to Secondary | event=0x14,umask=0x20 | 01 | Counts Timeouts - Set 0 : Fastpath Transfers From Primary to Secondary
unc_i_misc0.pf_ack_hint | uncore interconnect | Misc Events - Set 0; Prefetch Ack Hints From Primary to Secondary | event=0x14,umask=0x40 | 01 | Counts Timeouts - Set 0 : Prefetch Ack Hints From Primary to Secondary
unc_i_misc0.pf_timeout | uncore interconnect | Misc Events - Set 0; Prefetch TimeOut | event=0x14,umask=0x80 | 01 | Indicates the fetch for a previous prefetch wasn't accepted by the prefetcher.  This happens in the case of a prefetch TimeOut
unc_i_misc1.data_throttle | uncore interconnect | Misc Events - Set 1; Data Throttled | event=0x15,umask=0x80 | 01 | IRP throttled switch data
unc_i_misc1.lost_fwd | uncore interconnect | Misc Events - Set 1 | event=0x15,umask=0x10 | 01 | Misc Events - Set 1 : Lost Forward : Snoop pulled away ownership before a write was committed
unc_i_misc1.sec_rcvd_invld | uncore interconnect | Misc Events - Set 1; Received Invalid | event=0x15,umask=0x20 | 01 | Secondary received a transfer that did not have sufficient MESI state
unc_i_misc1.sec_rcvd_vld | uncore interconnect | Misc Events - Set 1; Received Valid | event=0x15,umask=0x40 | 01 | Secondary received a transfer that did have sufficient MESI state
unc_i_misc1.slow_e | uncore interconnect | Misc Events - Set 1; Slow Transfer of E Line | event=0x15,umask=4 | 01 | Secondary received a transfer that did have sufficient MESI state
unc_i_misc1.slow_i | uncore interconnect | Misc Events - Set 1; Slow Transfer of I Line | event=0x15,umask=1 | 01 | Snoop took cacheline ownership before write from data was committed
unc_i_misc1.slow_m | uncore interconnect | Misc Events - Set 1; Slow Transfer of M Line | event=0x15,umask=8 | 01 | Snoop took cacheline ownership before write from data was committed
unc_i_misc1.slow_s | uncore interconnect | Misc Events - Set 1; Slow Transfer of S Line | event=0x15,umask=2 | 01 | Secondary received a transfer that did not have sufficient MESI state
unc_i_rxr_ak_inserts | uncore interconnect | AK Ingress Occupancy | event=0xa | 01 | Counts the number of allocations into the AK Ingress.  This queue is where the IRP receives responses from R2PCIe (the ring)
unc_i_rxr_bl_drs_cycles_full | uncore interconnect | UNC_I_RxR_BL_DRS_CYCLES_FULL | event=4 | 01 | Counts the number of cycles when the BL Ingress is full.  This queue is where the IRP receives data from R2PCIe (the ring).  It is used for data returns from read requests as well as outbound MMIO writes
unc_i_rxr_bl_drs_inserts | uncore interconnect | BL Ingress Occupancy - DRS | event=1 | 01 | Counts the number of allocations into the BL Ingress.  This queue is where the IRP receives data from R2PCIe (the ring).  It is used for data returns from read requests as well as outbound MMIO writes
unc_i_rxr_bl_drs_occupancy | uncore interconnect | UNC_I_RxR_BL_DRS_OCCUPANCY | event=7 | 01 | Accumulates the occupancy of the BL Ingress in each cycle.  This queue is where the IRP receives data from R2PCIe (the ring).  It is used for data returns from read requests as well as outbound MMIO writes
unc_i_rxr_bl_ncb_cycles_full | uncore interconnect | UNC_I_RxR_BL_NCB_CYCLES_FULL | event=5 | 01 | Counts the number of cycles when the BL Ingress is full.  This queue is where the IRP receives data from R2PCIe (the ring).  It is used for data returns from read requests as well as outbound MMIO writes
unc_i_rxr_bl_ncb_inserts | uncore interconnect | BL Ingress Occupancy - NCB | event=2 | 01 | Counts the number of allocations into the BL Ingress.  This queue is where the IRP receives data from R2PCIe (the ring).  It is used for data returns from read requests as well as outbound MMIO writes
unc_i_rxr_bl_ncb_occupancy | uncore interconnect | UNC_I_RxR_BL_NCB_OCCUPANCY | event=8 | 01 | Accumulates the occupancy of the BL Ingress in each cycle.  This queue is where the IRP receives data from R2PCIe (the ring).  It is used for data returns from read requests as well as outbound MMIO writes
unc_i_rxr_bl_ncs_cycles_full | uncore interconnect | UNC_I_RxR_BL_NCS_CYCLES_FULL | event=6 | 01 | Counts the number of cycles when the BL Ingress is full.  This queue is where the IRP receives data from R2PCIe (the ring).  It is used for data returns from read requests as well as outbound MMIO writes
unc_i_rxr_bl_ncs_inserts | uncore interconnect | BL Ingress Occupancy - NCS | event=3 | 01 | Counts the number of allocations into the BL Ingress.  This queue is where the IRP receives data from R2PCIe (the ring).  It is used for data returns from read requests as well as outbound MMIO writes
unc_i_rxr_bl_ncs_occupancy | uncore interconnect | UNC_I_RxR_BL_NCS_OCCUPANCY | event=9 | 01 | Accumulates the occupancy of the BL Ingress in each cycle.  This queue is where the IRP receives data from R2PCIe (the ring).  It is used for data returns from read requests as well as outbound MMIO writes
unc_i_snoop_resp.hit_es | uncore interconnect | Snoop Responses; Hit E or S | event=0x17,umask=4 | 01 | Snoop Responses : Hit E or S
unc_i_snoop_resp.hit_i | uncore interconnect | Snoop Responses; Hit I | event=0x17,umask=2 | 01 | Snoop Responses : Hit I
unc_i_snoop_resp.hit_m | uncore interconnect | Snoop Responses; Hit M | event=0x17,umask=8 | 01 | Snoop Responses : Hit M
unc_i_snoop_resp.miss | uncore interconnect | Snoop Responses; Miss | event=0x17,umask=1 | 01 | Snoop Responses : Miss
unc_i_snoop_resp.snpcode | uncore interconnect | Snoop Responses; SnpCode | event=0x17,umask=0x10 | 01 | Snoop Responses : SnpCode
unc_i_snoop_resp.snpdata | uncore interconnect | Snoop Responses; SnpData | event=0x17,umask=0x20 | 01 | Snoop Responses : SnpData
unc_i_snoop_resp.snpinv | uncore interconnect | Snoop Responses; SnpInv | event=0x17,umask=0x40 | 01 | Snoop Responses : SnpInv
unc_i_transactions.atomic | uncore interconnect | Inbound Transaction Count; Atomic | event=0x16,umask=0x10 | 01 | Counts the number of Inbound transactions from the IRP to the Uncore.  This can be filtered based on request type in addition to the source queue.  Note the special filtering equation.  We do OR-reduction on the request type.  If the SOURCE bit is set, then we also do AND qualification based on the source portID.; Tracks the number of atomic transactions
unc_i_transactions.other | uncore interconnect | Inbound Transaction Count; Other | event=0x16,umask=0x20 | 01 | Counts the number of Inbound transactions from the IRP to the Uncore.  This can be filtered based on request type in addition to the source queue.  Note the special filtering equation.  We do OR-reduction on the request type.  If the SOURCE bit is set, then we also do AND qualification based on the source portID.; Tracks the number of 'other' kinds of transactions
unc_i_transactions.rd_pref | uncore interconnect | Inbound Transaction Count; Read Prefetches | event=0x16,umask=4 | 01 | Counts the number of Inbound transactions from the IRP to the Uncore.  This can be filtered based on request type in addition to the source queue.  Note the special filtering equation.  We do OR-reduction on the request type.  If the SOURCE bit is set, then we also do AND qualification based on the source portID.; Tracks the number of read prefetches
unc_i_transactions.reads | uncore interconnect | Inbound Transaction Count; Reads | event=0x16,umask=1 | 01 | Counts the number of Inbound transactions from the IRP to the Uncore.  This can be filtered based on request type in addition to the source queue.  Note the special filtering equation.  We do OR-reduction on the request type.  If the SOURCE bit is set, then we also do AND qualification based on the source portID.; Tracks only read requests (not including read prefetches)
unc_i_transactions.writes | uncore interconnect | Inbound Transaction Count; Writes | event=0x16,umask=2 | 01 | Counts the number of Inbound transactions from the IRP to the Uncore.  This can be filtered based on request type in addition to the source queue.  Note the special filtering equation.  We do OR-reduction on the request type.  If the SOURCE bit is set, then we also do AND qualification based on the source portID.; Tracks only write requests.  Each write request should have a prefetch, so there is no need to explicitly track these requests
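The "special filtering equation" called out above (OR-reduction across request-type bits, with an optional AND against the source port) can be restated as a small predicate. This is a behavioral model of the description, not hardware documentation, and the names are illustrative:

def transaction_matches(req_type_bit: int, umask: int,
                        source_qualify: bool, port_id: int,
                        filter_port: int) -> bool:
    """Model of the inbound-transaction filter described above.

    OR-reduction: the transaction counts if its request-type bit is set in
    the programmed umask.  If the SOURCE qualifier is enabled, the
    transaction must additionally come from the configured port.
    """
    type_match = bool(req_type_bit & umask)
    port_match = (not source_qualify) or (port_id == filter_port)
    return type_match and port_match

# reads (umask bit 0x01) from any port:
assert transaction_matches(0x01, 0x01, False, 3, 0)
# writes (0x02) only from port 2, seen from port 3 -> no match:
assert not transaction_matches(0x02, 0x02, True, 3, 2)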
unc_i_transactions.wr_pref | uncore interconnect | Inbound Transaction Count; Write Prefetches | event=0x16,umask=8 | 01 | Counts the number of Inbound transactions from the IRP to the Uncore.  This can be filtered based on request type in addition to the source queue.  Note the special filtering equation.  We do OR-reduction on the request type.  If the SOURCE bit is set, then we also do AND qualification based on the source portID.; Tracks the number of write prefetches
unc_i_txr_ad_stall_credit_cycles | uncore interconnect | No AD Egress Credit Stalls | event=0x18 | 01 | Counts the number of times when it is not possible to issue a request to the R2PCIe because there are no AD Egress Credits available
unc_i_txr_bl_stall_credit_cycles | uncore interconnect | No BL Egress Credit Stalls | event=0x19 | 01 | Counts the number of times when it is not possible to issue data to the R2PCIe because there are no BL Egress Credits available
unc_i_txr_data_inserts_ncb | uncore interconnect | Outbound Read Requests | event=0xe | 01 | Counts the number of requests issued to the switch (towards the devices)
unc_i_txr_data_inserts_ncs | uncore interconnect | Outbound Read Requests | event=0xf | 01 | Counts the number of requests issued to the switch (towards the devices)
unc_i_txr_request_occupancy | uncore interconnect | Outbound Request Queue Occupancy | event=0xd | 01 | Accumulates the number of outstanding outbound requests from the IRP to the switch (towards the devices).  This can be used in conjunction with the allocations event in order to calculate average latency of outbound requests
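The occupancy accumulator above pairs with an allocations count to give average outbound latency. A sketch that treats the NCB/NCS insert counters as the "allocations event" the description refers to; that pairing is an assumption, since the table does not name the allocations event explicitly:

def outbound_latency_cycles(request_occupancy_sum, ncb_inserts, ncs_inserts):
    """Average IRP-to-switch outbound request latency, in IRP clocks.

    request_occupancy_sum -- UNC_I_TxR_REQUEST_OCCUPANCY accumulator
    ncb_inserts/ncs_inserts -- UNC_I_TxR_DATA_INSERTS_NCB/_NCS, taken here
                               as the allocations count (an assumption)
    """
    allocs = ncb_inserts + ncs_inserts
    return request_occupancy_sum / allocs if allocs else 0.0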
--- uncore_ubox ---

unc_u_event_msg.doorbell_rcvd  [uncore interconnect]  event=0x42,umask=8
  VLW Received.  Virtual Logical Wire (legacy) messages were received from Uncore.  Specify the thread to filter on using NCUPMONCTRLGLCTR.ThreadID.

unc_u_filter_match.*  [uncore interconnect]  event=0x41 -- Filter Match.
  Filter match per thread (w/ or w/o Filter Enable).  Specify the thread to filter on using NCUPMONCTRLGLCTR.ThreadID.
  .disable      umask=2
  .enable       umask=1
  .u2c_disable  umask=8
  .u2c_enable   umask=4

unc_u_phold_cycles.assert_to_ack  [uncore interconnect]  event=0x45,umask=1
  Cycles PHOLD Assert to Ack.  PHOLD cycles.  Filter from source CoreID.

unc_u_racu_requests  [uncore interconnect]  event=0x46
  RACU Request.  Number of outstanding register requests within the message channel tracker.

unc_u_u2c_events.*  [uncore interconnect]  event=0x43 -- Monitor Sent to T0.
  Events coming from Uncore can be sent to one or all cores.
  .cmc         umask=0x10  Correctable Machine Check
  .livelock    umask=4     Livelock; Filter by core
  .lterror     umask=8     LTError; Filter by core
  .monitor_t0  umask=1     Monitor T0; Filter by core
  .monitor_t1  umask=2     Monitor T1; Filter by core
  .other       umask=0x80  Other: PREQ, PSMI, P2U, Thermal, PCUSMI, PMI
  .trap        umask=0x40  Trap
  .umc         umask=0x20  Uncorrectable Machine Check

--- uncore_r2pcie ---

unc_r2_clockticks  [uncore io]  event=1
  Number of uclks in domain.  Counts the number of uclks in the R2PCIe uclk domain.  This could be slightly different than the count in the Ubox because of enable/freeze delays.  However, because the R2PCIe is close to the Ubox, they generally should not diverge by more than a handful of cycles.

unc_r2_iio_credit.*  [uncore io]  event=0x2d
  .isoch_qpi0  umask=4
  .isoch_qpi1  umask=8
  .prq_qpi0    umask=1
  .prq_qpi1    umask=2

unc_r2_iio_credits_acquired.*  [uncore io]  event=0x33 -- R2PCIe IIO Credit Acquired.
  Counts the number of credits that are acquired in the R2PCIe agent for sending transactions into the IIO on either NCB or NCS.  Transactions from the BL ring going into the IIO Agent must first acquire a credit.  These credits are for either the NCB or NCS message classes.  NCB, or non-coherent bypass, messages are used to transmit data without coherency (and are common).  NCS is used for reads to PCIe (and should be used sparingly).
  .drs  umask=8     Credits to the IIO for the DRS message class
  .ncb  umask=0x10  Credits to the IIO for the NCB message class
  .ncs  umask=0x20  Credits to the IIO for the NCS message class

unc_r2_iio_credits_used.*  [uncore io]  event=0x32 -- R2PCIe IIO Credits in Use.
  Counts the number of cycles when one or more credits in the R2PCIe agent for sending transactions into the IIO on either NCB or NCS are in use.  Transactions from the BL ring going into the IIO Agent must first acquire a credit.  These credits are for either the NCB or NCS message classes.  NCB, or non-coherent bypass, messages are used to transmit data without coherency (and are common).  NCS is used for reads to PCIe (and should be used sparingly).
  .drs  umask=8     Credits to the IIO for the DRS message class
  .ncb  umask=0x10  Credits to the IIO for the NCB message class
  .ncs  umask=0x20  Credits to the IIO for the NCS message class

unc_r2_ring_ad_used.*  [uncore io]  event=7 -- R2 AD Ring in Use.
  Counts the number of cycles that the AD ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.
  .all       umask=0xf  All
  .ccw       umask=0xc  Counterclockwise
  .ccw_even  umask=4    Counterclockwise and Even ring polarity
  .ccw_odd   umask=8    Counterclockwise and Odd ring polarity
  .cw        umask=3    Clockwise
  .cw_even   umask=1    Clockwise and Even ring polarity
  .cw_odd    umask=2    Clockwise and Odd ring polarity
unc_r2_ring_ak_bounces.*  [uncore io]  event=0x12 -- AK Ingress Bounced.
  Counts the number of times when a request destined for the AK ingress bounced.
  .dn  umask=2  Dn
  .up  umask=1  Up

unc_r2_ring_ak_used.*  [uncore io]  event=8 -- R2 AK Ring in Use.
  Counts the number of cycles that the AK ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.
  .all       umask=0xf  All
  .ccw       umask=0xc  Counterclockwise
  .ccw_even  umask=4    Counterclockwise and Even ring polarity
  .ccw_odd   umask=8    Counterclockwise and Odd ring polarity
  .cw        umask=3    Clockwise
  .cw_even   umask=1    Clockwise and Even ring polarity
  .cw_odd    umask=2    Clockwise and Odd ring polarity

unc_r2_ring_bl_used.*  [uncore io]  event=9 -- R2 BL Ring in Use.
  Counts the number of cycles that the BL ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.
  .all       umask=0xf  All
  .ccw       umask=0xc  Counterclockwise
  .ccw_even  umask=4    Counterclockwise and Even ring polarity
  .ccw_odd   umask=8    Counterclockwise and Odd ring polarity
  .cw        umask=3    Clockwise
  .cw_even   umask=1    Clockwise and Even ring polarity
  .cw_odd    umask=2    Clockwise and Odd ring polarity
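A minimal sketch of ring-stop utilization using the "ring in use" cycle counts above divided by the uclk count.  The values are invented samples; only the event names come from the table above.

# Hypothetical sketch: per-ring utilization at this R2PCIe ring stop.
counts = {
    "unc_r2_clockticks": 10_000_000,
    "unc_r2_ring_ad_used.all": 3_100_000,
    "unc_r2_ring_ak_used.all": 1_400_000,
    "unc_r2_ring_bl_used.all": 2_700_000,
}

for ring in ("ad", "ak", "bl"):
    used = counts[f"unc_r2_ring_{ring}_used.all"]
    util = used / counts["unc_r2_clockticks"]
    print(f"{ring.upper()} ring utilization at this stop: {util:.1%}")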
unc_r2_ring_iv_used.*  [uncore io]  event=0xa -- R2 IV Ring in Use.
  Counts the number of cycles that the IV ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sent, but does not include when packets are being sunk into the ring stop.
  .any  umask=0xf  Any
  .ccw  umask=0xc  Counterclockwise
  .cw   umask=3    Clockwise

unc_r2_rxr_cycles_ne.*  [uncore io]  event=0x10 -- Ingress Cycles Not Empty.
  Counts the number of cycles when the R2PCIe Ingress is not empty.  This tracks one of the three rings that are used by the R2PCIe agent.  This can be used in conjunction with the R2PCIe Ingress Occupancy Accumulator event in order to calculate average queue occupancy.  Multiple ingress buffers can be tracked at a given time using multiple counters.
  .ncb  umask=0x10  NCB Ingress Queue
  .ncs  umask=0x20  NCS Ingress Queue

unc_r2_rxr_inserts.*  [uncore io]  event=0x11 -- Ingress Allocations.
  Counts the number of allocations into the R2PCIe Ingress.  This tracks one of the three rings that are used by the R2PCIe agent.  This can be used in conjunction with the R2PCIe Ingress Occupancy Accumulator event in order to calculate average queue latency.  Multiple ingress buffers can be tracked at a given time using multiple counters.
  .ncb  umask=0x10  NCB Ingress Queue
  .ncs  umask=0x20  NCS Ingress Queue

unc_r2_rxr_occupancy.drs  [uncore io]  event=0x13,umask=8
  Ingress Occupancy Accumulator; DRS Ingress Queue.  Accumulates the occupancy of a given R2PCIe Ingress queue in each cycle.  This tracks one of the three ring Ingress buffers.  This can be used with the R2PCIe Ingress Not Empty event to calculate average occupancy, or with the R2PCIe Ingress Allocations event in order to calculate average queuing latency.

unc_r2_sbo0_credits_acquired.*  [uncore io]  event=0x28 -- SBo0 Credits Acquired.
  Number of Sbo 0 credits acquired in a given cycle, per ring.
  .ad  umask=1  For AD Ring
  .bl  umask=2  For BL Ring

unc_r2_sbo0_credit_occupancy.*  [uncore io]  event=0x2a -- SBo0 Credits Occupancy.
  Number of Sbo 0 credits in use in a given cycle, per ring.
  .ad  umask=1  For AD Ring
  .bl  umask=2  For BL Ring

unc_r2_stall_no_sbo_credit.*  [uncore io]  event=0x2c -- Stall on No Sbo Credits.
  Number of cycles Egress is stalled waiting for an Sbo credit to become available.  Per Sbo, per ring.
  .sbo0_ad  umask=1  For SBo0, AD Ring
  .sbo0_bl  umask=4  For SBo0, BL Ring
  .sbo1_ad  umask=2  For SBo1, AD Ring
  .sbo1_bl  umask=8  For SBo1, BL Ring

unc_r2_txr_cycles_full.*  [uncore io]  event=0x25 -- Egress Cycles Full.
  Counts the number of cycles when the R2PCIe Egress buffer is full.
  .ad  umask=1  AD Egress Queue
  .ak  umask=2  AK Egress Queue
  .bl  umask=4  BL Egress Queue

unc_r2_txr_cycles_ne.*  [uncore io]  event=0x23 -- Egress Cycles Not Empty.
  Counts the number of cycles when the R2PCIe Egress is not empty.  This tracks one of the three rings that are used by the R2PCIe agent.  This can be used in conjunction with the R2PCIe Egress Occupancy Accumulator event in order to calculate average queue occupancy.  Only a single Egress queue can be tracked at any given time; it is not possible to filter based on direction or polarity.
  .ad  umask=1  AD Egress Queue
  .ak  umask=2  AK Egress Queue
  .bl  umask=4  BL Egress Queue

unc_r2_txr_nack_cw.*  [uncore io]  event=0x26 -- Egress NACK.
  (The Dn direction is counterclockwise, Up clockwise; the source strings mislabel several Up entries.)
  .dn_ad  umask=1     AD CounterClockwise Egress Queue
  .dn_ak  umask=4     AK CounterClockwise Egress Queue
  .dn_bl  umask=2     BL CounterClockwise Egress Queue
  .up_ad  umask=8     AD Clockwise Egress Queue
  .up_ak  umask=0x20  AK Clockwise Egress Queue
  .up_bl  umask=0x10  BL Clockwise Egress Queue
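A minimal sketch of localizing egress backpressure by combining the Sbo-credit stall cycles and egress-full cycles above with the uclk count.  Sample values are invented; event names follow the table above.

# Hypothetical sketch: how long the R2PCIe egress spends backpressured.
counts = {
    "unc_r2_clockticks": 10_000_000,
    "unc_r2_stall_no_sbo_credit.sbo0_ad": 250_000,
    "unc_r2_stall_no_sbo_credit.sbo0_bl": 90_000,
    "unc_r2_txr_cycles_full.bl": 400_000,
}

clk = counts["unc_r2_clockticks"]
stall = sum(v for k, v in counts.items() if k.startswith("unc_r2_stall_no_sbo_credit"))
print(f"cycles stalled on Sbo credits: {stall / clk:.1%}")
print(f"cycles BL egress buffer full:  {counts['unc_r2_txr_cycles_full.bl'] / clk:.1%}")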
unc_m_act_count.*  [uncore memory]  event=1 -- DRAM Activate Count.
  Counts the number of DRAM Activate commands sent on this channel.  Activate commands are issued to open up a page on the DRAM devices so that it can be read or written to with a CAS.  One can calculate the number of Page Misses by subtracting the number of Page Miss precharges from the number of Activates.
  .byp  umask=8  Activate due to Bypass
  .rd   umask=1  Activate due to Read
  .wr   umask=2  Activate due to Write

unc_m_byp_cmds.*  [uncore memory]  event=0xa1 -- Command issued by 2 cycle bypass.
  .act  umask=1  ACT command issued by 2 cycle bypass
  .cas  umask=2  CAS command issued by 2 cycle bypass
  .pre  umask=4  PRE command issued by 2 cycle bypass

unc_m_cas_count.*  [uncore memory]  event=4 -- DRAM RD_CAS and WR_CAS Commands.
  .all           umask=0xf   All DRAM CAS commands (w/ and w/out auto-pre).  Counts the total number of DRAM CAS commands issued on this channel.
  .rd            umask=3     All DRAM Reads (RD_CAS + Underfills).  Counts the total number of DRAM Read CAS commands issued on this channel (including underfills).
  .rd_reg        umask=1     All DRAM RD_CAS (w/ and w/out auto-pre).  Counts the total number of DRAM Read CAS commands issued on this channel.  This includes both regular RD CAS commands as well as those with implicit Precharge.  AutoPre is only used in systems that are using closed page policy.  We do not filter based on major mode, as RD_CAS is not issued during WMM (with the exception of underfills).
  .rd_rmm        umask=0x20  Read CAS issued in RMM
  .rd_underfill  umask=2     Underfill Read Issued.  Counts the number of underfill reads that are issued by the memory controller.  This will generally be about the same as the number of partial writes, but may be slightly less because of partials hitting in the WPQ.  While it is possible for underfills to be issued in both WMM and RMM, this event counts both.
  .rd_wmm        umask=0x10  Read CAS issued in WMM
  .wr            umask=0xc   All DRAM WR_CAS (both modes).  Counts the total number of DRAM Write CAS commands issued on this channel.
  .wr_rmm        umask=8     DRAM WR_CAS (w/ and w/out auto-pre) in Read Major Mode.  Counts the total number of Opportunistic DRAM Write CAS commands issued on this channel while in Read-Major-Mode.
  .wr_wmm        umask=4     DRAM WR_CAS (w/ and w/out auto-pre) in Write Major Mode.  Counts the total number of DRAM Write CAS commands issued on this channel while in Write-Major-Mode.

unc_m_dclockticks  [uncore memory]  event=0
  DRAM Clockticks.

unc_m_dram_pre_all  [uncore memory]  event=6
  DRAM Precharge All Commands.  Counts the number of times that the precharge all command was sent.

unc_m_dram_refresh.*  [uncore memory]  event=5 -- Number of DRAM Refreshes Issued.
  Counts the number of refreshes issued.
  .high   umask=4
  .panic  umask=2

unc_m_ecc_correctable_errors  [uncore memory]  event=9
  ECC Correctable Errors.  Counts the number of ECC errors detected and corrected by the iMC on this channel.  This counter is only useful with ECC DRAM devices.  This count will increment one time for each correction regardless of the number of bits corrected.  The iMC can correct up to 4 bit errors in independent channel mode and 8 bit errors in lockstep mode.

unc_m_major_modes.*  [uncore memory]  event=7 -- Cycles in a Major Mode.
  Counts the total number of cycles spent in a major mode (selected by a filter) on the given channel.  Major modes are channel-wide, not a per-rank (or DIMM or bank) mode.
  .isoch    umask=8  Isoch Major Mode.  We group these two modes together so that we can use four counters to track each of the major modes at one time.  These major modes are used whenever there is an ISOCH txn in the memory controller.  In these modes, only ISOCH transactions are processed.
  .partial  umask=4  Partial Major Mode.  This major mode is used to drain starved underfill reads.  Regular reads and writes are blocked and only underfill reads will be processed.
  .read     umask=1  Read Major Mode.  Read Major Mode is the default mode for the iMC, as reads are generally more critical to forward progress than writes.
  .write    umask=2  Write Major Mode.  This mode is triggered when the WPQ hits high occupancy and causes writes to be higher priority than reads.  This can cause blips in the available read bandwidth in the system and temporarily increase read latencies in order to achieve better bus utilizations and higher bandwidth.
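A minimal sketch of deriving DRAM channel bandwidth from the CAS counts above, assuming each CAS moves one 64-byte cache line.  The interval length and counter values are invented sample numbers, not real measurements.

# Hypothetical sketch: read/write bandwidth on one channel from CAS counts.
CACHE_LINE = 64  # bytes transferred per CAS command
interval_s = 1.0  # assumed measurement interval

counts = {
    "unc_m_cas_count.rd": 30_000_000,  # reads including underfills
    "unc_m_cas_count.wr": 12_000_000,  # writes in both major modes
}

rd_bw = counts["unc_m_cas_count.rd"] * CACHE_LINE / interval_s / 1e9
wr_bw = counts["unc_m_cas_count.wr"] * CACHE_LINE / interval_s / 1e9
print(f"read bandwidth:  {rd_bw:.2f} GB/s")
print(f"write bandwidth: {wr_bw:.2f} GB/s")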
unc_m_power_channel_dlloff  [uncore memory]  event=0x84
  Channel DLLOFF Cycles.  Number of cycles when all the ranks in the channel are in CKE Slow (DLLOFF) mode.

unc_m_power_channel_ppd  [uncore memory]  event=0x85
  Channel PPD Cycles.  Number of cycles when all the ranks in the channel are in PPD mode.  If IBT=off is enabled, then this can be used to count those cycles.  If it is not enabled, then this can count the number of cycles when that could have been taken advantage of.

unc_m_power_cke_cycles.rank0-rank7  [uncore memory]  event=0x83 -- CKE_ON_CYCLES by Rank.
  umask selects the rank: rank0=1, rank1=2, rank2=4, rank3=8, rank4=0x10, rank5=0x20, rank6=0x40, rank7=0x80.
  Number of cycles spent in CKE ON mode.  The filter allows you to select a rank to monitor.  If multiple ranks are in CKE ON mode at one time, the counter will ONLY increment by one rather than doing accumulation.  Multiple counters will need to be used to track multiple ranks simultaneously.  There is no distinction between the different CKE modes (APD, PPDS, PPDF); this can be determined based on the system programming.  These events should commonly be used with Invert to get the number of cycles in power saving mode.  Edge Detect is also useful here.  Make sure that you do NOT use Invert with Edge Detect (this just confuses the system and is not necessary).

unc_m_power_critical_throttle_cycles  [uncore memory]  event=0x86
  Critical Throttle Cycles.  Counts the number of cycles when the iMC is in critical thermal throttling.  When this happens, all traffic is blocked.  This should be rare unless something bad is going on in the platform.  There is no filtering by rank for this event.
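A minimal sketch of per-rank CKE-ON residency, i.e. the fraction of DRAM clocks each rank spends with CKE asserted.  Per the note above, counting power-saving cycles instead would use the Invert control bit; that PMU-specific programming is not shown here, and all counter values are invented.

# Hypothetical sketch: CKE-ON residency per rank on one channel.
counts = {
    "unc_m_dclockticks": 50_000_000,
    "unc_m_power_cke_cycles.rank0": 46_000_000,
    "unc_m_power_cke_cycles.rank1": 31_000_000,
}

for rank in (0, 1):
    on = counts[f"unc_m_power_cke_cycles.rank{rank}"]
    print(f"rank{rank} CKE-ON residency: {on / counts['unc_m_dclockticks']:.1%}")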
unc_m_power_pcu_throttling  [uncore memory]  event=0x42
  UNC_M_POWER_PCU_THROTTLING.

unc_m_power_self_refresh  [uncore memory]  event=0x43
  Clock-Enabled Self-Refresh.  Counts the number of cycles when the iMC is in self-refresh and the iMC still has a clock.  This happens in some package C-states.  For example, the PCU may ask the iMC to enter self-refresh even though some of the cores are still processing.  One use of this is for Monroe technology.  Self-refresh is required during package C3 and C6, but there is no clock in the iMC at this time, so it is not possible to count these cases.

unc_m_power_throttle_cycles.rank0-rank7  [uncore memory]  event=0x41 -- Throttle Cycles by Rank.
  umask selects the rank: rank0=1, rank1=2, rank2=4, rank3=8, rank4=0x10, rank5=0x20, rank6=0x40, rank7=0x80.
  Counts the number of cycles while the iMC is being throttled by either thermal constraints or by the PCU throttling.  It is not possible to distinguish between the two.  This can be filtered by rank.  If multiple ranks are selected and are being throttled at the same time, the counter will only increment by 1.  Thermal throttling is performed per DIMM; we support 3 DIMMs per channel, and this ID allows us to filter by ID.

unc_m_preemption.*  [uncore memory]  event=8 -- Read Preemption Count.
  Counts the number of times a read in the iMC preempts another read or write.  Generally reads to an open page are issued ahead of requests to closed pages.  This improves the page hit rate of the system.  However, high priority requests can cause pages of active requests to be closed in order to get them out.  This will reduce the latency of the high-priority request at the expense of lower bandwidth and increased overall average latency.
  .rd_preempt_rd  umask=1  Read over Read Preemption; filter for when a read preempts another read
  .rd_preempt_wr  umask=2  Read over Write Preemption; filter for when a read preempts a write

unc_m_pre_count.*  [uncore memory]  event=2 -- DRAM Precharge commands.
  Counts the number of DRAM Precharge commands sent on this channel.
  .byp         umask=0x10  Precharge due to bypass
  .page_close  umask=2     Precharge due to timer expiration.  Counts precharges sent as a result of the page close counter expiring.  This does not include implicit precharge commands sent in auto-precharge mode.
  .page_miss   umask=1     Precharges due to page miss.  Counts precharges sent as a result of page misses.  This does not include explicit precharge commands sent with CAS commands in Auto-Precharge mode, nor PRE commands sent as a result of the page close counter expiration.
  .rd          umask=4     Precharge due to read
  .wr          umask=8     Precharge due to write
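A minimal sketch of rough page-hit statistics for a channel: every access needs a CAS, and a page miss forces a precharge (counted by unc_m_pre_count.page_miss) before the new activate.  The hit-rate formula is a common approximation, not something the event table itself defines, and the counter values are invented.

# Hypothetical sketch: approximate page-miss/page-hit rates on one channel.
counts = {
    "unc_m_cas_count.all": 42_000_000,
    "unc_m_pre_count.page_miss": 7_500_000,
}

page_miss_rate = counts["unc_m_pre_count.page_miss"] / counts["unc_m_cas_count.all"]
print(f"approx. page-miss rate: {page_miss_rate:.1%}")
print(f"approx. page-hit rate:  {1 - page_miss_rate:.1%}")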
unc_m_rd_cas_prio.*  [uncore memory]  event=0xa0 -- Read CAS priority.
  .high   umask=4  Read CAS issued with HIGH priority
  .low    umask=1  Read CAS issued with LOW priority
  .med    umask=2  Read CAS issued with MEDIUM priority
  .panic  umask=8  Read CAS issued with PANIC NON ISOCH priority (starved)

unc_m_rd_cas_rankN.*  [uncore memory] -- RD_CAS Access to Rank N.
  The event code selects the rank: rank0 event=0xb0, rank1 event=0xb1, rank2 event=0xb2, rank4 event=0xb4, rank5 event=0xb5, rank6 event=0xb6, rank7 event=0xb7.  (Rank 2 exposes only .bank0 in this table, and rank 3 does not appear.)  The umask selects the bank or bank group:
  .bank0             no umask       Bank 0
  .bank1 - .bank15   umask=1 - 0xf  Bank 1 through Bank 15 (umask equals the bank number)
  .allbanks          umask=0x10     All Banks
  .bankg0            umask=0x11     Bank Group 0 (Banks 0-3)
  .bankg1            umask=0x12     Bank Group 1 (Banks 4-7)
  .bankg2            umask=0x13     Bank Group 2 (Banks 8-11)
  .bankg3            umask=0x14     Bank Group 3 (Banks 12-15)
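A minimal sketch of using the per-bank RD_CAS counters above to spot a hot DRAM bank on one rank.  The per-bank values are invented sample readings.

# Hypothetical sketch: find the hottest bank on rank 0.
counts = {f"unc_m_rd_cas_rank0.bank{b}": n for b, n in enumerate(
    [90, 85, 95, 700, 88, 92, 90, 87, 91, 89, 90, 86, 93, 88, 90, 91])}

total = sum(counts.values())
hottest = max(counts, key=counts.get)
print(f"hottest bank: {hottest} ({counts[hottest] / total:.1%} of rank-0 reads)")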
Requests allocate into the RPQ soon after they enter the memory controller, and need credits for an entry in this buffer before being sent from the HA to the iMC.  They deallocate after the CAS command has been issued to memory.  This filter is to be used in conjunction with the occupancy filter so that one can correctly track the average occupancies for schedulable entries and scheduled requestsunc_m_rpq_insertsuncore memoryRead Pending Queue Allocationsevent=0x1001Counts the number of allocations into the Read Pending Queue.  This queue is used to schedule reads out to the memory controller and to track the requests.  Requests allocate into the RPQ soon after they enter the memory controller, and need credits for an entry in this buffer before being sent from the HA to the iMC.  They deallocate after the CAS command has been issued to memory.  This includes both ISOCH and non-ISOCH requestsunc_m_vmse_mxb_wr_occupancyuncore memoryVMSE MXB write buffer occupancyevent=0x9101unc_m_vmse_wr_push.rmmuncore memoryVMSE WR PUSH issued; VMSE write PUSH issued in RMMevent=0x90,umask=201unc_m_vmse_wr_push.wmmuncore memoryVMSE WR PUSH issued; VMSE write PUSH issued in WMMevent=0x90,umask=101unc_m_wmm_to_rmm.low_threshuncore memoryTransition from WMM to RMM because of low threshold; Transition from WMM to RMM because of starve counterevent=0xc0,umask=101unc_m_wmm_to_rmm.starveuncore memoryTransition from WMM to RMM because of low thresholdevent=0xc0,umask=201unc_m_wmm_to_rmm.vmse_retryuncore memoryTransition from WMM to RMM because of low thresholdevent=0xc0,umask=401unc_m_wpq_cycles_fulluncore memoryWrite Pending Queue Full Cyclesevent=0x2201Counts the number of cycles when the Write Pending Queue is full.  When the WPQ is full, the HA will not be able to issue any additional read requests into the iMC.  This count should be similar to the count in the HA which tracks the number of cycles that the HA has no WPQ credits, just somewhat smaller to account for the credit return overheadunc_m_wpq_cycles_neuncore memoryWrite Pending Queue Not Emptyevent=0x2101Counts the number of cycles that the Write Pending Queue is not empty.  This can then be used to calculate the average queue occupancy (in conjunction with the WPQ Occupancy Accumulation count).  The WPQ is used to schedule writes out to the memory controller and to track the writes.  Requests allocate into the WPQ soon after they enter the memory controller, and need credits for an entry in this buffer before being sent from the HA to the iMC.  They deallocate after being issued to DRAM.  Write requests themselves are able to complete (from the perspective of the rest of the system) as soon as they have posted to the iMC.  This is not to be confused with actually performing the write to DRAM.  Therefore, the average latency for this queue is actually not useful for deconstructing intermediate write latenciesunc_m_wpq_read_hituncore memoryWrite Pending Queue CAM Matchevent=0x2301Counts the number of times a request hits in the WPQ (write-pending queue).  The iMC allows writes and reads to pass up other writes to different addresses.  Before a read or a write is issued, it will first CAM the WPQ to see if there is a write pending to that address.  When reads hit, they are able to directly pull their data from the WPQ instead of going to memory.  Writes that hit will overwrite the existing data.  
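The unc_m_rpq_cycles_ne and unc_m_wpq_cycles_ne descriptions above both spell out the same derived metric: divide the matching occupancy accumulator by the not-empty cycle count to get the average queue depth while the queue holds work. A minimal sketch of that arithmetic in Python, assuming the raw counter values have already been collected; the collection step and the sample numbers are illustrative, not part of the event data:

# Average pending-queue depth, per the unc_m_rpq_cycles_ne/unc_m_wpq_cycles_ne
# notes: avg depth (over non-empty cycles) = occupancy accumulator / not-empty
# cycles. How the raw counts are collected (perf, raw PMU reads) is out of
# scope here; the values below are invented for illustration.

def avg_queue_depth(occupancy_accum: int, cycles_not_empty: int) -> float:
    """Average number of entries while the queue was non-empty."""
    if cycles_not_empty == 0:
        return 0.0
    return occupancy_accum / cycles_not_empty

# Illustrative sample: 1.2M entry-cycles over 300K non-empty cycles -> 4.0.
print(avg_queue_depth(occupancy_accum=1_200_000, cycles_not_empty=300_000))
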
Partial writes that hit will not need to do underfill reads and will simply update their relevant sectionsunc_m_wpq_write_hituncore memoryWrite Pending Queue CAM Matchevent=0x2401Counts the number of times a request hits in the WPQ (write-pending queue).  The iMC allows writes and reads to pass up other writes to different addresses.  Before a read or a write is issued, it will first CAM the WPQ to see if there is a write pending to that address.  When reads hit, they are able to directly pull their data from the WPQ instead of going to memory.  Writes that hit will overwrite the existing data.  Partial writes that hit will not need to do underfill reads and will simply update their relevant sectionsunc_m_wrong_mmuncore memoryNot getting the requested Major Modeevent=0xc101unc_m_wr_cas_rank0.allbanksuncore memoryWR_CAS Access to Rank 0; All Banksevent=0xb8,umask=0x1001WR_CAS Access to Rank 0 : All Banksunc_m_wr_cas_rank0.bank0uncore memoryWR_CAS Access to Rank 0; Bank 0event=0xb801WR_CAS Access to Rank 0 : Bank 0unc_m_wr_cas_rank0.bank1uncore memoryWR_CAS Access to Rank 0; Bank 1event=0xb8,umask=101WR_CAS Access to Rank 0 : Bank 1unc_m_wr_cas_rank0.bank10uncore memoryWR_CAS Access to Rank 0; Bank 10event=0xb8,umask=0xa01WR_CAS Access to Rank 0 : Bank 10unc_m_wr_cas_rank0.bank11uncore memoryWR_CAS Access to Rank 0; Bank 11event=0xb8,umask=0xb01WR_CAS Access to Rank 0 : Bank 11unc_m_wr_cas_rank0.bank12uncore memoryWR_CAS Access to Rank 0; Bank 12event=0xb8,umask=0xc01WR_CAS Access to Rank 0 : Bank 12unc_m_wr_cas_rank0.bank13uncore memoryWR_CAS Access to Rank 0; Bank 13event=0xb8,umask=0xd01WR_CAS Access to Rank 0 : Bank 13unc_m_wr_cas_rank0.bank14uncore memoryWR_CAS Access to Rank 0; Bank 14event=0xb8,umask=0xe01WR_CAS Access to Rank 0 : Bank 14unc_m_wr_cas_rank0.bank15uncore memoryWR_CAS Access to Rank 0; Bank 15event=0xb8,umask=0xf01WR_CAS Access to Rank 0 : Bank 15unc_m_wr_cas_rank0.bank2uncore memoryWR_CAS Access to Rank 0; Bank 2event=0xb8,umask=201WR_CAS Access to Rank 0 : Bank 2unc_m_wr_cas_rank0.bank3uncore memoryWR_CAS Access to Rank 0; Bank 3event=0xb8,umask=301WR_CAS Access to Rank 0 : Bank 3unc_m_wr_cas_rank0.bank4uncore memoryWR_CAS Access to Rank 0; Bank 4event=0xb8,umask=401WR_CAS Access to Rank 0 : Bank 4unc_m_wr_cas_rank0.bank5uncore memoryWR_CAS Access to Rank 0; Bank 5event=0xb8,umask=501WR_CAS Access to Rank 0 : Bank 5unc_m_wr_cas_rank0.bank6uncore memoryWR_CAS Access to Rank 0; Bank 6event=0xb8,umask=601WR_CAS Access to Rank 0 : Bank 6unc_m_wr_cas_rank0.bank7uncore memoryWR_CAS Access to Rank 0; Bank 7event=0xb8,umask=701WR_CAS Access to Rank 0 : Bank 7unc_m_wr_cas_rank0.bank8uncore memoryWR_CAS Access to Rank 0; Bank 8event=0xb8,umask=801WR_CAS Access to Rank 0 : Bank 8unc_m_wr_cas_rank0.bank9uncore memoryWR_CAS Access to Rank 0; Bank 9event=0xb8,umask=901WR_CAS Access to Rank 0 : Bank 9unc_m_wr_cas_rank0.bankg0uncore memoryWR_CAS Access to Rank 0; Bank Group 0 (Banks 0-3)event=0xb8,umask=0x1101WR_CAS Access to Rank 0 : Bank Group 0 (Banks 0-3)unc_m_wr_cas_rank0.bankg1uncore memoryWR_CAS Access to Rank 0; Bank Group 1 (Banks 4-7)event=0xb8,umask=0x1201WR_CAS Access to Rank 0 : Bank Group 1 (Banks 4-7)unc_m_wr_cas_rank0.bankg2uncore memoryWR_CAS Access to Rank 0; Bank Group 2 (Banks 8-11)event=0xb8,umask=0x1301WR_CAS Access to Rank 0 : Bank Group 2 (Banks 8-11)unc_m_wr_cas_rank0.bankg3uncore memoryWR_CAS Access to Rank 0; Bank Group 3 (Banks 12-15)event=0xb8,umask=0x1401WR_CAS Access to Rank 0 : Bank Group 3 (Banks 12-15)unc_m_wr_cas_rank1.allbanksuncore memoryWR_CAS 
Access to Rank 1; All Banksevent=0xb9,umask=0x1001WR_CAS Access to Rank 0 : All Banksunc_m_wr_cas_rank1.bank0uncore memoryWR_CAS Access to Rank 1; Bank 0event=0xb901WR_CAS Access to Rank 0 : Bank 0unc_m_wr_cas_rank1.bank1uncore memoryWR_CAS Access to Rank 1; Bank 1event=0xb9,umask=101WR_CAS Access to Rank 0 : Bank 1unc_m_wr_cas_rank1.bank10uncore memoryWR_CAS Access to Rank 1; Bank 10event=0xb9,umask=0xa01WR_CAS Access to Rank 0 : Bank 10unc_m_wr_cas_rank1.bank11uncore memoryWR_CAS Access to Rank 1; Bank 11event=0xb9,umask=0xb01WR_CAS Access to Rank 0 : Bank 11unc_m_wr_cas_rank1.bank12uncore memoryWR_CAS Access to Rank 1; Bank 12event=0xb9,umask=0xc01WR_CAS Access to Rank 0 : Bank 12unc_m_wr_cas_rank1.bank13uncore memoryWR_CAS Access to Rank 1; Bank 13event=0xb9,umask=0xd01WR_CAS Access to Rank 0 : Bank 13unc_m_wr_cas_rank1.bank14uncore memoryWR_CAS Access to Rank 1; Bank 14event=0xb9,umask=0xe01WR_CAS Access to Rank 0 : Bank 14unc_m_wr_cas_rank1.bank15uncore memoryWR_CAS Access to Rank 1; Bank 15event=0xb9,umask=0xf01WR_CAS Access to Rank 0 : Bank 15unc_m_wr_cas_rank1.bank2uncore memoryWR_CAS Access to Rank 1; Bank 2event=0xb9,umask=201WR_CAS Access to Rank 0 : Bank 2unc_m_wr_cas_rank1.bank3uncore memoryWR_CAS Access to Rank 1; Bank 3event=0xb9,umask=301WR_CAS Access to Rank 0 : Bank 3unc_m_wr_cas_rank1.bank4uncore memoryWR_CAS Access to Rank 1; Bank 4event=0xb9,umask=401WR_CAS Access to Rank 0 : Bank 4unc_m_wr_cas_rank1.bank5uncore memoryWR_CAS Access to Rank 1; Bank 5event=0xb9,umask=501WR_CAS Access to Rank 0 : Bank 5unc_m_wr_cas_rank1.bank6uncore memoryWR_CAS Access to Rank 1; Bank 6event=0xb9,umask=601WR_CAS Access to Rank 0 : Bank 6unc_m_wr_cas_rank1.bank7uncore memoryWR_CAS Access to Rank 1; Bank 7event=0xb9,umask=701WR_CAS Access to Rank 0 : Bank 7unc_m_wr_cas_rank1.bank8uncore memoryWR_CAS Access to Rank 1; Bank 8event=0xb9,umask=801WR_CAS Access to Rank 0 : Bank 8unc_m_wr_cas_rank1.bank9uncore memoryWR_CAS Access to Rank 1; Bank 9event=0xb9,umask=901WR_CAS Access to Rank 0 : Bank 9unc_m_wr_cas_rank1.bankg0uncore memoryWR_CAS Access to Rank 1; Bank Group 0 (Banks 0-3)event=0xb9,umask=0x1101WR_CAS Access to Rank 0 : Bank Group 0 (Banks 0-3)unc_m_wr_cas_rank1.bankg1uncore memoryWR_CAS Access to Rank 1; Bank Group 1 (Banks 4-7)event=0xb9,umask=0x1201WR_CAS Access to Rank 0 : Bank Group 1 (Banks 4-7)unc_m_wr_cas_rank1.bankg2uncore memoryWR_CAS Access to Rank 1; Bank Group 2 (Banks 8-11)event=0xb9,umask=0x1301WR_CAS Access to Rank 0 : Bank Group 2 (Banks 8-11)unc_m_wr_cas_rank1.bankg3uncore memoryWR_CAS Access to Rank 1; Bank Group 3 (Banks 12-15)event=0xb9,umask=0x1401WR_CAS Access to Rank 0 : Bank Group 3 (Banks 12-15)unc_m_wr_cas_rank4.allbanksuncore memoryWR_CAS Access to Rank 4; All Banksevent=0xbc,umask=0x1001WR_CAS Access to Rank 0 : All Banksunc_m_wr_cas_rank4.bank0uncore memoryWR_CAS Access to Rank 4; Bank 0event=0xbc01WR_CAS Access to Rank 0 : Bank 0unc_m_wr_cas_rank4.bank1uncore memoryWR_CAS Access to Rank 4; Bank 1event=0xbc,umask=101WR_CAS Access to Rank 0 : Bank 1unc_m_wr_cas_rank4.bank10uncore memoryWR_CAS Access to Rank 4; Bank 10event=0xbc,umask=0xa01WR_CAS Access to Rank 0 : Bank 10unc_m_wr_cas_rank4.bank11uncore memoryWR_CAS Access to Rank 4; Bank 11event=0xbc,umask=0xb01WR_CAS Access to Rank 0 : Bank 11unc_m_wr_cas_rank4.bank12uncore memoryWR_CAS Access to Rank 4; Bank 12event=0xbc,umask=0xc01WR_CAS Access to Rank 0 : Bank 12unc_m_wr_cas_rank4.bank13uncore memoryWR_CAS Access to Rank 4; Bank 13event=0xbc,umask=0xd01WR_CAS Access to Rank 0 : Bank 
13unc_m_wr_cas_rank4.bank14uncore memoryWR_CAS Access to Rank 4; Bank 14event=0xbc,umask=0xe01WR_CAS Access to Rank 0 : Bank 14unc_m_wr_cas_rank4.bank15uncore memoryWR_CAS Access to Rank 4; Bank 15event=0xbc,umask=0xf01WR_CAS Access to Rank 0 : Bank 15unc_m_wr_cas_rank4.bank2uncore memoryWR_CAS Access to Rank 4; Bank 2event=0xbc,umask=201WR_CAS Access to Rank 0 : Bank 2unc_m_wr_cas_rank4.bank3uncore memoryWR_CAS Access to Rank 4; Bank 3event=0xbc,umask=301WR_CAS Access to Rank 0 : Bank 3unc_m_wr_cas_rank4.bank4uncore memoryWR_CAS Access to Rank 4; Bank 4event=0xbc,umask=401WR_CAS Access to Rank 0 : Bank 4unc_m_wr_cas_rank4.bank5uncore memoryWR_CAS Access to Rank 4; Bank 5event=0xbc,umask=501WR_CAS Access to Rank 0 : Bank 5unc_m_wr_cas_rank4.bank6uncore memoryWR_CAS Access to Rank 4; Bank 6event=0xbc,umask=601WR_CAS Access to Rank 0 : Bank 6unc_m_wr_cas_rank4.bank7uncore memoryWR_CAS Access to Rank 4; Bank 7event=0xbc,umask=701WR_CAS Access to Rank 0 : Bank 7unc_m_wr_cas_rank4.bank8uncore memoryWR_CAS Access to Rank 4; Bank 8event=0xbc,umask=801WR_CAS Access to Rank 0 : Bank 8unc_m_wr_cas_rank4.bank9uncore memoryWR_CAS Access to Rank 4; Bank 9event=0xbc,umask=901WR_CAS Access to Rank 0 : Bank 9unc_m_wr_cas_rank4.bankg0uncore memoryWR_CAS Access to Rank 4; Bank Group 0 (Banks 0-3)event=0xbc,umask=0x1101WR_CAS Access to Rank 0 : Bank Group 0 (Banks 0-3)unc_m_wr_cas_rank4.bankg1uncore memoryWR_CAS Access to Rank 4; Bank Group 1 (Banks 4-7)event=0xbc,umask=0x1201WR_CAS Access to Rank 0 : Bank Group 1 (Banks 4-7)unc_m_wr_cas_rank4.bankg2uncore memoryWR_CAS Access to Rank 4; Bank Group 2 (Banks 8-11)event=0xbc,umask=0x1301WR_CAS Access to Rank 0 : Bank Group 2 (Banks 8-11)unc_m_wr_cas_rank4.bankg3uncore memoryWR_CAS Access to Rank 4; Bank Group 3 (Banks 12-15)event=0xbc,umask=0x1401WR_CAS Access to Rank 0 : Bank Group 3 (Banks 12-15)unc_m_wr_cas_rank5.allbanksuncore memoryWR_CAS Access to Rank 5; All Banksevent=0xbd,umask=0x1001WR_CAS Access to Rank 0 : All Banksunc_m_wr_cas_rank5.bank0uncore memoryWR_CAS Access to Rank 5; Bank 0event=0xbd01WR_CAS Access to Rank 0 : Bank 0unc_m_wr_cas_rank5.bank1uncore memoryWR_CAS Access to Rank 5; Bank 1event=0xbd,umask=101WR_CAS Access to Rank 0 : Bank 1unc_m_wr_cas_rank5.bank10uncore memoryWR_CAS Access to Rank 5; Bank 10event=0xbd,umask=0xa01WR_CAS Access to Rank 0 : Bank 10unc_m_wr_cas_rank5.bank11uncore memoryWR_CAS Access to Rank 5; Bank 11event=0xbd,umask=0xb01WR_CAS Access to Rank 0 : Bank 11unc_m_wr_cas_rank5.bank12uncore memoryWR_CAS Access to Rank 5; Bank 12event=0xbd,umask=0xc01WR_CAS Access to Rank 0 : Bank 12unc_m_wr_cas_rank5.bank13uncore memoryWR_CAS Access to Rank 5; Bank 13event=0xbd,umask=0xd01WR_CAS Access to Rank 0 : Bank 13unc_m_wr_cas_rank5.bank14uncore memoryWR_CAS Access to Rank 5; Bank 14event=0xbd,umask=0xe01WR_CAS Access to Rank 0 : Bank 14unc_m_wr_cas_rank5.bank15uncore memoryWR_CAS Access to Rank 5; Bank 15event=0xbd,umask=0xf01WR_CAS Access to Rank 0 : Bank 15unc_m_wr_cas_rank5.bank2uncore memoryWR_CAS Access to Rank 5; Bank 2event=0xbd,umask=201WR_CAS Access to Rank 0 : Bank 2unc_m_wr_cas_rank5.bank3uncore memoryWR_CAS Access to Rank 5; Bank 3event=0xbd,umask=301WR_CAS Access to Rank 0 : Bank 3unc_m_wr_cas_rank5.bank4uncore memoryWR_CAS Access to Rank 5; Bank 4event=0xbd,umask=401WR_CAS Access to Rank 0 : Bank 4unc_m_wr_cas_rank5.bank5uncore memoryWR_CAS Access to Rank 5; Bank 5event=0xbd,umask=501WR_CAS Access to Rank 0 : Bank 5unc_m_wr_cas_rank5.bank6uncore memoryWR_CAS Access to Rank 5; Bank 6event=0xbd,umask=601WR_CAS Access 
to Rank 0 : Bank 6unc_m_wr_cas_rank5.bank7uncore memoryWR_CAS Access to Rank 5; Bank 7event=0xbd,umask=701WR_CAS Access to Rank 0 : Bank 7unc_m_wr_cas_rank5.bank8uncore memoryWR_CAS Access to Rank 5; Bank 8event=0xbd,umask=801WR_CAS Access to Rank 0 : Bank 8unc_m_wr_cas_rank5.bank9uncore memoryWR_CAS Access to Rank 5; Bank 9event=0xbd,umask=901WR_CAS Access to Rank 0 : Bank 9unc_m_wr_cas_rank5.bankg0uncore memoryWR_CAS Access to Rank 5; Bank Group 0 (Banks 0-3)event=0xbd,umask=0x1101WR_CAS Access to Rank 0 : Bank Group 0 (Banks 0-3)unc_m_wr_cas_rank5.bankg1uncore memoryWR_CAS Access to Rank 5; Bank Group 1 (Banks 4-7)event=0xbd,umask=0x1201WR_CAS Access to Rank 0 : Bank Group 1 (Banks 4-7)unc_m_wr_cas_rank5.bankg2uncore memoryWR_CAS Access to Rank 5; Bank Group 2 (Banks 8-11)event=0xbd,umask=0x1301WR_CAS Access to Rank 0 : Bank Group 2 (Banks 8-11)unc_m_wr_cas_rank5.bankg3uncore memoryWR_CAS Access to Rank 5; Bank Group 3 (Banks 12-15)event=0xbd,umask=0x1401WR_CAS Access to Rank 0 : Bank Group 3 (Banks 12-15)unc_m_wr_cas_rank6.allbanksuncore memoryWR_CAS Access to Rank 6; All Banksevent=0xbe,umask=0x1001WR_CAS Access to Rank 0 : All Banksunc_m_wr_cas_rank6.bank0uncore memoryWR_CAS Access to Rank 6; Bank 0event=0xbe01WR_CAS Access to Rank 0 : Bank 0unc_m_wr_cas_rank6.bank1uncore memoryWR_CAS Access to Rank 6; Bank 1event=0xbe,umask=101WR_CAS Access to Rank 0 : Bank 1unc_m_wr_cas_rank6.bank10uncore memoryWR_CAS Access to Rank 6; Bank 10event=0xbe,umask=0xa01WR_CAS Access to Rank 0 : Bank 10unc_m_wr_cas_rank6.bank11uncore memoryWR_CAS Access to Rank 6; Bank 11event=0xbe,umask=0xb01WR_CAS Access to Rank 0 : Bank 11unc_m_wr_cas_rank6.bank12uncore memoryWR_CAS Access to Rank 6; Bank 12event=0xbe,umask=0xc01WR_CAS Access to Rank 0 : Bank 12unc_m_wr_cas_rank6.bank13uncore memoryWR_CAS Access to Rank 6; Bank 13event=0xbe,umask=0xd01WR_CAS Access to Rank 0 : Bank 13unc_m_wr_cas_rank6.bank14uncore memoryWR_CAS Access to Rank 6; Bank 14event=0xbe,umask=0xe01WR_CAS Access to Rank 0 : Bank 14unc_m_wr_cas_rank6.bank15uncore memoryWR_CAS Access to Rank 6; Bank 15event=0xbe,umask=0xf01WR_CAS Access to Rank 0 : Bank 15unc_m_wr_cas_rank6.bank2uncore memoryWR_CAS Access to Rank 6; Bank 2event=0xbe,umask=201WR_CAS Access to Rank 0 : Bank 2unc_m_wr_cas_rank6.bank3uncore memoryWR_CAS Access to Rank 6; Bank 3event=0xbe,umask=301WR_CAS Access to Rank 0 : Bank 3unc_m_wr_cas_rank6.bank4uncore memoryWR_CAS Access to Rank 6; Bank 4event=0xbe,umask=401WR_CAS Access to Rank 0 : Bank 4unc_m_wr_cas_rank6.bank5uncore memoryWR_CAS Access to Rank 6; Bank 5event=0xbe,umask=501WR_CAS Access to Rank 0 : Bank 5unc_m_wr_cas_rank6.bank6uncore memoryWR_CAS Access to Rank 6; Bank 6event=0xbe,umask=601WR_CAS Access to Rank 0 : Bank 6unc_m_wr_cas_rank6.bank7uncore memoryWR_CAS Access to Rank 6; Bank 7event=0xbe,umask=701WR_CAS Access to Rank 0 : Bank 7unc_m_wr_cas_rank6.bank8uncore memoryWR_CAS Access to Rank 6; Bank 8event=0xbe,umask=801WR_CAS Access to Rank 0 : Bank 8unc_m_wr_cas_rank6.bank9uncore memoryWR_CAS Access to Rank 6; Bank 9event=0xbe,umask=901WR_CAS Access to Rank 0 : Bank 9unc_m_wr_cas_rank6.bankg0uncore memoryWR_CAS Access to Rank 6; Bank Group 0 (Banks 0-3)event=0xbe,umask=0x1101WR_CAS Access to Rank 0 : Bank Group 0 (Banks 0-3)unc_m_wr_cas_rank6.bankg1uncore memoryWR_CAS Access to Rank 6; Bank Group 1 (Banks 4-7)event=0xbe,umask=0x1201WR_CAS Access to Rank 0 : Bank Group 1 (Banks 4-7)unc_m_wr_cas_rank6.bankg2uncore memoryWR_CAS Access to Rank 6; Bank Group 2 (Banks 8-11)event=0xbe,umask=0x1301WR_CAS Access to Rank 0 : 
Bank Group 2 (Banks 8-11)unc_m_wr_cas_rank6.bankg3uncore memoryWR_CAS Access to Rank 6; Bank Group 3 (Banks 12-15)event=0xbe,umask=0x1401WR_CAS Access to Rank 0 : Bank Group 3 (Banks 12-15)unc_m_wr_cas_rank7.allbanksuncore memoryWR_CAS Access to Rank 7; All Banksevent=0xbf,umask=0x1001WR_CAS Access to Rank 0 : All Banksunc_m_wr_cas_rank7.bank0uncore memoryWR_CAS Access to Rank 7; Bank 0event=0xbf01WR_CAS Access to Rank 0 : Bank 0unc_m_wr_cas_rank7.bank1uncore memoryWR_CAS Access to Rank 7; Bank 1event=0xbf,umask=101WR_CAS Access to Rank 0 : Bank 1unc_m_wr_cas_rank7.bank10uncore memoryWR_CAS Access to Rank 7; Bank 10event=0xbf,umask=0xa01WR_CAS Access to Rank 0 : Bank 10unc_m_wr_cas_rank7.bank11uncore memoryWR_CAS Access to Rank 7; Bank 11event=0xbf,umask=0xb01WR_CAS Access to Rank 0 : Bank 11unc_m_wr_cas_rank7.bank12uncore memoryWR_CAS Access to Rank 7; Bank 12event=0xbf,umask=0xc01WR_CAS Access to Rank 0 : Bank 12unc_m_wr_cas_rank7.bank13uncore memoryWR_CAS Access to Rank 7; Bank 13event=0xbf,umask=0xd01WR_CAS Access to Rank 0 : Bank 13unc_m_wr_cas_rank7.bank14uncore memoryWR_CAS Access to Rank 7; Bank 14event=0xbf,umask=0xe01WR_CAS Access to Rank 0 : Bank 14unc_m_wr_cas_rank7.bank15uncore memoryWR_CAS Access to Rank 7; Bank 15event=0xbf,umask=0xf01WR_CAS Access to Rank 0 : Bank 15unc_m_wr_cas_rank7.bank2uncore memoryWR_CAS Access to Rank 7; Bank 2event=0xbf,umask=201WR_CAS Access to Rank 0 : Bank 2unc_m_wr_cas_rank7.bank3uncore memoryWR_CAS Access to Rank 7; Bank 3event=0xbf,umask=301WR_CAS Access to Rank 0 : Bank 3unc_m_wr_cas_rank7.bank4uncore memoryWR_CAS Access to Rank 7; Bank 4event=0xbf,umask=401WR_CAS Access to Rank 0 : Bank 4unc_m_wr_cas_rank7.bank5uncore memoryWR_CAS Access to Rank 7; Bank 5event=0xbf,umask=501WR_CAS Access to Rank 0 : Bank 5unc_m_wr_cas_rank7.bank6uncore memoryWR_CAS Access to Rank 7; Bank 6event=0xbf,umask=601WR_CAS Access to Rank 0 : Bank 6unc_m_wr_cas_rank7.bank7uncore memoryWR_CAS Access to Rank 7; Bank 7event=0xbf,umask=701WR_CAS Access to Rank 0 : Bank 7unc_m_wr_cas_rank7.bank8uncore memoryWR_CAS Access to Rank 7; Bank 8event=0xbf,umask=801WR_CAS Access to Rank 0 : Bank 8unc_m_wr_cas_rank7.bank9uncore memoryWR_CAS Access to Rank 7; Bank 9event=0xbf,umask=901WR_CAS Access to Rank 0 : Bank 9unc_m_wr_cas_rank7.bankg0uncore memoryWR_CAS Access to Rank 7; Bank Group 0 (Banks 0-3)event=0xbf,umask=0x1101WR_CAS Access to Rank 0 : Bank Group 0 (Banks 0-3)unc_m_wr_cas_rank7.bankg1uncore memoryWR_CAS Access to Rank 7; Bank Group 1 (Banks 4-7)event=0xbf,umask=0x1201WR_CAS Access to Rank 0 : Bank Group 1 (Banks 4-7)unc_m_wr_cas_rank7.bankg2uncore memoryWR_CAS Access to Rank 7; Bank Group 2 (Banks 8-11)event=0xbf,umask=0x1301WR_CAS Access to Rank 0 : Bank Group 2 (Banks 8-11)unc_m_wr_cas_rank7.bankg3uncore memoryWR_CAS Access to Rank 7; Bank Group 3 (Banks 12-15)event=0xbf,umask=0x1401WR_CAS Access to Rank 0 : Bank Group 3 (Banks 12-15)uncore_pcuunc_p_clockticksuncore powerpclk Cyclesevent=001The PCU runs off a fixed 1 GHz clock.  This event counts the number of pclk cycles measured while the counter was enabled.  The pclk, like the Memory Controller's dclk, counts at a constant rate making it a good measure of actual wall timeunc_p_core0_transition_cyclesuncore powerCore C State Transition Cyclesevent=0x6001Number of cycles spent performing core C state transitions.  There is one event per coreunc_p_core10_transition_cyclesuncore powerCore C State Transition Cyclesevent=0x6a01Number of cycles spent performing core C state transitions.  
There is one event per coreunc_p_core11_transition_cyclesuncore powerCore C State Transition Cyclesevent=0x6b01Number of cycles spent performing core C state transitions.  There is one event per coreunc_p_core12_transition_cyclesuncore powerCore C State Transition Cyclesevent=0x6c01Number of cycles spent performing core C state transitions.  There is one event per coreunc_p_core13_transition_cyclesuncore powerCore C State Transition Cyclesevent=0x6d01Number of cycles spent performing core C state transitions.  There is one event per coreunc_p_core14_transition_cyclesuncore powerCore C State Transition Cyclesevent=0x6e01Number of cycles spent performing core C state transitions.  There is one event per coreunc_p_core15_transition_cyclesuncore powerCore C State Transition Cyclesevent=0x6f01Number of cycles spent performing core C state transitions.  There is one event per coreunc_p_core16_transition_cyclesuncore powerCore C State Transition Cyclesevent=0x7001Number of cycles spent performing core C state transitions.  There is one event per coreunc_p_core17_transition_cyclesuncore powerCore C State Transition Cyclesevent=0x7101Number of cycles spent performing core C state transitions.  There is one event per coreunc_p_core1_transition_cyclesuncore powerCore C State Transition Cyclesevent=0x6101Number of cycles spent performing core C state transitions.  There is one event per coreunc_p_core2_transition_cyclesuncore powerCore C State Transition Cyclesevent=0x6201Number of cycles spent performing core C state transitions.  There is one event per coreunc_p_core3_transition_cyclesuncore powerCore C State Transition Cyclesevent=0x6301Number of cycles spent performing core C state transitions.  There is one event per coreunc_p_core4_transition_cyclesuncore powerCore C State Transition Cyclesevent=0x6401Number of cycles spent performing core C state transitions.  There is one event per coreunc_p_core5_transition_cyclesuncore powerCore C State Transition Cyclesevent=0x6501Number of cycles spent performing core C state transitions.  There is one event per coreunc_p_core6_transition_cyclesuncore powerCore C State Transition Cyclesevent=0x6601Number of cycles spent performing core C state transitions.  There is one event per coreunc_p_core7_transition_cyclesuncore powerCore C State Transition Cyclesevent=0x6701Number of cycles spent performing core C state transitions.  There is one event per coreunc_p_core8_transition_cyclesuncore powerCore C State Transition Cyclesevent=0x6801Number of cycles spent performing core C state transitions.  There is one event per coreunc_p_core9_transition_cyclesuncore powerCore C State Transition Cyclesevent=0x6901Number of cycles spent performing core C state transitions.  
There is one event per coreunc_p_demotions_core0uncore powerCore C State Demotionsevent=0x3001Counts the number of times when a configurable core had a C-state demotionunc_p_demotions_core1uncore powerCore C State Demotionsevent=0x3101Counts the number of times when a configurable core had a C-state demotionunc_p_demotions_core10uncore powerCore C State Demotionsevent=0x3a01Counts the number of times when a configurable core had a C-state demotionunc_p_demotions_core11uncore powerCore C State Demotionsevent=0x3b01Counts the number of times when a configurable core had a C-state demotionunc_p_demotions_core12uncore powerCore C State Demotionsevent=0x3c01Counts the number of times when a configurable core had a C-state demotionunc_p_demotions_core13uncore powerCore C State Demotionsevent=0x3d01Counts the number of times when a configurable core had a C-state demotionunc_p_demotions_core14uncore powerCore C State Demotionsevent=0x3e01Counts the number of times when a configurable core had a C-state demotionunc_p_demotions_core15uncore powerCore C State Demotionsevent=0x3f01Counts the number of times when a configurable core had a C-state demotionunc_p_demotions_core16uncore powerCore C State Demotionsevent=0x4001Counts the number of times when a configurable core had a C-state demotionunc_p_demotions_core17uncore powerCore C State Demotionsevent=0x4101Counts the number of times when a configurable core had a C-state demotionunc_p_demotions_core2uncore powerCore C State Demotionsevent=0x3201Counts the number of times when a configurable core had a C-state demotionunc_p_demotions_core3uncore powerCore C State Demotionsevent=0x3301Counts the number of times when a configurable core had a C-state demotionunc_p_demotions_core4uncore powerCore C State Demotionsevent=0x3401Counts the number of times when a configurable core had a C-state demotionunc_p_demotions_core5uncore powerCore C State Demotionsevent=0x3501Counts the number of times when a configurable core had a C-state demotionunc_p_demotions_core6uncore powerCore C State Demotionsevent=0x3601Counts the number of times when a configurable core had a C-state demotionunc_p_demotions_core7uncore powerCore C State Demotionsevent=0x3701Counts the number of times when a configurable core had a C-state demotionunc_p_demotions_core8uncore powerCore C State Demotionsevent=0x3801Counts the number of times when a configurable core had a C-state demotionunc_p_demotions_core9uncore powerCore C State Demotionsevent=0x3901Counts the number of times when a configurable core had a C-state demotionunc_p_freq_max_limit_thermal_cyclesuncore powerThermal Strongest Upper Limit Cyclesevent=401Counts the number of cycles when thermal conditions are the upper limit on frequency.  This is related to the THERMAL_THROTTLE CYCLES_ABOVE_TEMP event, which always counts cycles when we are above the thermal temperature.  This event (STRONGEST_UPPER_LIMIT) is sampled at the output of the algorithm that determines the actual frequency, while THERMAL_THROTTLE looks at the inputunc_p_freq_max_os_cyclesuncore powerOS Strongest Upper Limit Cyclesevent=601Counts the number of cycles when the OS is the upper limit on frequencyunc_p_freq_max_power_cyclesuncore powerPower Strongest Upper Limit Cyclesevent=501Counts the number of cycles when power is the upper limit on frequencyunc_p_freq_min_io_p_cyclesuncore powerIO P Limit Strongest Lower Limit Cyclesevent=0x7301Counts the number of cycles when IO P Limit is preventing us from dropping the frequency lower.  
This algorithm monitors the needs of the IO subsystem on both local and remote sockets and will maintain a frequency high enough to maintain good IO BW.  This is necessary for when all the IA cores on a socket are idle but a user still would like to maintain high IO Bandwidthunc_p_freq_trans_cyclesuncore powerCycles spent changing Frequencyevent=0x7401Counts the number of cycles when the system is changing frequency.  This cannot be filtered by thread ID.  One can also use it with the occupancy counter that monitors number of threads in C0 to estimate the performance impact that frequency transitions had on the systemunc_p_memory_phase_shedding_cyclesuncore powerMemory Phase Shedding Cyclesevent=0x2f01Counts the number of cycles that the PCU has triggered memory phase shedding.  This is a mode that can be run in the iMC physicals that saves power at the expense of additional latencyunc_p_pkg_residency_c0_cyclesuncore powerPackage C State Residency - C0event=0x2a01Counts the number of cycles when the package was in C0.  This event can be used in conjunction with edge detect to count C0 entrances (or exits using invert).  Residency events do not include transition timesunc_p_pkg_residency_c1e_cyclesuncore powerPackage C State Residency - C1Eevent=0x4e01Counts the number of cycles when the package was in C1E.  This event can be used in conjunction with edge detect to count C1E entrances (or exits using invert).  Residency events do not include transition timesunc_p_pkg_residency_c2e_cyclesuncore powerPackage C State Residency - C2Eevent=0x2b01Counts the number of cycles when the package was in C2E.  This event can be used in conjunction with edge detect to count C2E entrances (or exits using invert).  Residency events do not include transition timesunc_p_pkg_residency_c3_cyclesuncore powerPackage C State Residency - C3event=0x2c01Counts the number of cycles when the package was in C3.  This event can be used in conjunction with edge detect to count C3 entrances (or exits using invert).  Residency events do not include transition timesunc_p_pkg_residency_c6_cyclesuncore powerPackage C State Residency - C6event=0x2d01Counts the number of cycles when the package was in C6.  This event can be used in conjunction with edge detect to count C6 entrances (or exits using invert).  Residency events do not include transition timesunc_p_pkg_residency_c7_cyclesuncore powerPackage C7 State Residencyevent=0x2e01Counts the number of cycles when the package was in C7.  This event can be used in conjunction with edge detect to count C7 entrances (or exits using invert).  Residency events do not include transition timesunc_p_power_state_occupancy.cores_c0uncore powerNumber of cores in C-State; C0 and C1event=0x80,occ_sel=101This is an occupancy event that tracks the number of cores that are in the chosen C-State.  It can be used by itself to get the average number of cores in that C-state with thresholding to generate histograms, or with other PCU events and occupancy triggering to capture other detailsunc_p_power_state_occupancy.cores_c3uncore powerNumber of cores in C-State; C3event=0x80,occ_sel=201This is an occupancy event that tracks the number of cores that are in the chosen C-State.  
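Per the descriptions above, unc_p_clockticks advances at a fixed 1 GHz and each unc_p_power_state_occupancy subevent accumulates the per-cycle count of cores in the selected C-state, so the average core count is simply the occupancy accumulator divided by elapsed pclk cycles. A hedged sketch of that division (sample values invented):

# Average cores resident in a C-state, per the unc_p_power_state_occupancy
# description: the counter adds the per-cycle core count each pclk cycle,
# so occupancy accumulator / pclk cycles = average cores in that state.

PCLK_HZ = 1_000_000_000  # the PCU pclk runs at a fixed 1 GHz (see above)

def avg_cores_in_state(occupancy_accum: int, pclk_cycles: int) -> float:
    return occupancy_accum / pclk_cycles if pclk_cycles else 0.0

def pclk_to_seconds(pclk_cycles: int) -> float:
    # pclk counts at a constant rate, making it a usable wall-time proxy.
    return pclk_cycles / PCLK_HZ

# Illustrative: 9e9 core-cycles over 1e9 pclk cycles -> 9.0 cores on average.
print(avg_cores_in_state(occupancy_accum=9_000_000_000, pclk_cycles=1_000_000_000))
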
It can be used by itself to get the average number of cores in that C-state with thresholding to generate histograms, or with other PCU events and occupancy triggering to capture other detailsunc_p_power_state_occupancy.cores_c6uncore powerNumber of cores in C-State; C6 and C7event=0x80,occ_sel=301This is an occupancy event that tracks the number of cores that are in the chosen C-State.  It can be used by itself to get the average number of cores in that C-state with thresholding to generate histograms, or with other PCU events and occupancy triggering to capture other detailsunc_p_prochot_external_cyclesuncore powerExternal Prochotevent=0xa01Counts the number of cycles that we are in external PROCHOT mode.  This mode is triggered when a sensor off the die determines that something off-die (like DRAM) is too hot and must throttle to avoid damaging the chipunc_p_prochot_internal_cyclesuncore powerInternal Prochotevent=901Counts the number of cycles that we are in Internal PROCHOT mode.  This mode is triggered when a sensor on the die determines that we are too hot and must throttle to avoid damaging the chipunc_p_total_transition_cyclesuncore powerTotal Core C State Transition Cyclesevent=0x7201Number of cycles spent performing core C state transitions across all coresunc_p_ufs_transitions_ring_gvuncore powerUNC_P_UFS_TRANSITIONS_RING_GVevent=0x7901Ring GV with same final and initial frequencyunc_p_vr_hot_cyclesuncore powerVR Hotevent=0x4201VR Hot : Number of cycles that a CPU SVID VR is hot.  Does not cover DRAM VRsoffcore_response.all_code_rd.llc_hit.hit_other_core_no_fwdcacheCounts all demand & prefetch code reads hit in the L3 and the snoops to sibling cores hit in either E/S state and the line is not forwardedevent=0xb7,period=100003,umask=1,offcore_rsp=0x4003C024400offcore_response.all_data_rd.llc_hit.hitm_other_corecacheCounts all demand & prefetch data reads hit in the L3 and the snoop to one of the sibling cores hits the line in M state and the line is forwardedevent=0xb7,period=100003,umask=1,offcore_rsp=0x10003C009100offcore_response.all_data_rd.llc_hit.hit_other_core_no_fwdcacheCounts all demand & prefetch data reads hit in the L3 and the snoops to sibling cores hit in either E/S state and the line is not forwardedevent=0xb7,period=100003,umask=1,offcore_rsp=0x4003C009100offcore_response.all_reads.llc_hit.hitm_other_corecacheCounts all data/code/rfo reads (demand & prefetch) hit in the L3 and the snoop to one of the sibling cores hits the line in M state and the line is forwardedevent=0xb7,period=100003,umask=1,offcore_rsp=0x10003C07F700offcore_response.all_reads.llc_hit.hit_other_core_no_fwdcacheCounts all data/code/rfo reads (demand & prefetch) hit in the L3 and the snoops to sibling cores hit in either E/S state and the line is not forwardedevent=0xb7,period=100003,umask=1,offcore_rsp=0x4003C07F700offcore_response.all_requests.llc_hit.any_responsecacheCounts all requests hit in the L3event=0xb7,period=100003,umask=1,offcore_rsp=0x3F803C8FFF00offcore_response.all_rfo.llc_hit.hitm_other_corecacheCounts all demand & prefetch RFOs hit in the L3 and the snoop to one of the sibling cores hits the line in M state and the line is forwardedevent=0xb7,period=100003,umask=1,offcore_rsp=0x10003C012200offcore_response.all_rfo.llc_hit.hit_other_core_no_fwdcacheCounts all demand & prefetch RFOs hit in the L3 and the snoops to sibling cores hit in either E/S state and the line is not 
forwardedevent=0xb7,period=100003,umask=1,offcore_rsp=0x4003C012200offcore_response.demand_rfo.llc_hit.any_responsecacheCounts all demand data writes (RFOs) hit in the L3event=0xb7,period=100003,umask=1,offcore_rsp=0x3F803C000200offcore_response.demand_rfo.llc_hit.hitm_other_corecacheCounts all demand data writes (RFOs) hit in the L3 and the snoop to one of the sibling cores hits the line in M state and the line is forwardedevent=0xb7,period=100003,umask=1,offcore_rsp=0x10003C000200offcore_response.pf_llc_code_rd.llc_hit.any_responsecacheCounts prefetch (that bring data to LLC only) code reads hit in the L3event=0xb7,period=100003,umask=1,offcore_rsp=0x3F803C020000offcore_response.pf_llc_rfo.llc_hit.any_responsecacheCounts all prefetch (that bring data to LLC only) RFOs hit in the L3event=0xb7,period=100003,umask=1,offcore_rsp=0x3F803C010000offcore_response.all_code_rd.llc_miss.any_responsememoryCounts all demand & prefetch code reads miss in the L3event=0xb7,period=100003,umask=1,offcore_rsp=0x3FBFC0024400offcore_response.all_code_rd.llc_miss.local_drammemoryCounts all demand & prefetch code reads miss the L3 and the data is returned from local dramevent=0xb7,period=100003,umask=1,offcore_rsp=0x60400024400offcore_response.all_data_rd.llc_miss.any_responsememoryCounts all demand & prefetch data reads miss in the L3event=0xb7,period=100003,umask=1,offcore_rsp=0x3FBFC0009100offcore_response.all_data_rd.llc_miss.local_drammemoryCounts all demand & prefetch data reads miss the L3 and the data is returned from local dramevent=0xb7,period=100003,umask=1,offcore_rsp=0x60400009100offcore_response.all_data_rd.llc_miss.remote_drammemoryCounts all demand & prefetch data reads miss the L3 and the data is returned from remote dramevent=0xb7,period=100003,umask=1,offcore_rsp=0x63BC0009100offcore_response.all_data_rd.llc_miss.remote_hitmmemoryCounts all demand & prefetch data reads miss the L3 and the modified data is transferred from remote cacheevent=0xb7,period=100003,umask=1,offcore_rsp=0x103FC0009100offcore_response.all_data_rd.llc_miss.remote_hit_forwardmemoryCounts all demand & prefetch data reads miss the L3 and clean or shared data is transferred from remote cacheevent=0xb7,period=100003,umask=1,offcore_rsp=0x87FC0009100offcore_response.all_reads.llc_miss.any_responsememoryCounts all data/code/rfo reads (demand & prefetch) miss in the L3event=0xb7,period=100003,umask=1,offcore_rsp=0x3FBFC007F700offcore_response.all_reads.llc_miss.local_drammemoryCounts all data/code/rfo reads (demand & prefetch) miss the L3 and the data is returned from local dramevent=0xb7,period=100003,umask=1,offcore_rsp=0x6040007F700offcore_response.all_reads.llc_miss.remote_drammemoryCounts all data/code/rfo reads (demand & prefetch) miss the L3 and the data is returned from remote dramevent=0xb7,period=100003,umask=1,offcore_rsp=0x63BC007F700offcore_response.all_reads.llc_miss.remote_hitmmemoryCounts all data/code/rfo reads (demand & prefetch) miss the L3 and the modified data is transferred from remote cacheevent=0xb7,period=100003,umask=1,offcore_rsp=0x103FC007F700offcore_response.all_reads.llc_miss.remote_hit_forwardmemoryCounts all data/code/rfo reads (demand & prefetch) miss the L3 and clean or shared data is transferred from remote cacheevent=0xb7,period=100003,umask=1,offcore_rsp=0x87FC007F700offcore_response.all_requests.llc_miss.any_responsememoryCounts all requests miss in the L3event=0xb7,period=100003,umask=1,offcore_rsp=0x3FBFC08FFF00offcore_response.all_rfo.llc_miss.any_responsememoryCounts all demand & 
prefetch RFOs miss in the L3event=0xb7,period=100003,umask=1,offcore_rsp=0x3FBFC0012200offcore_response.all_rfo.llc_miss.local_drammemoryCounts all demand & prefetch RFOs miss the L3 and the data is returned from local dramevent=0xb7,period=100003,umask=1,offcore_rsp=0x60400012200offcore_response.demand_rfo.llc_miss.any_responsememoryCounts all demand data writes (RFOs) miss in the L3event=0xb7,period=100003,umask=1,offcore_rsp=0x3FBFC0000200offcore_response.demand_rfo.llc_miss.remote_hitmmemoryCounts all demand data writes (RFOs) miss the L3 and the modified data is transferred from remote cacheevent=0xb7,period=100003,umask=1,offcore_rsp=0x103FC0000200offcore_response.pf_llc_code_rd.llc_miss.any_responsememoryCounts prefetch (that bring data to LLC only) code reads miss in the L3event=0xb7,period=100003,umask=1,offcore_rsp=0x3FBFC0020000offcore_response.pf_llc_rfo.llc_miss.any_responsememoryCounts all prefetch (that bring data to LLC only) RFOs miss in the L3event=0xb7,period=100003,umask=1,offcore_rsp=0x3FBFC0010000llc_misses.code_llc_prefetchuncore cacheLLC prefetch misses for code reads. Derived from unc_c_tor_inserts.miss_opcodeevent=0x35,umask=3,filter_opc=0x1910164BytesCounts the number of entries successfully inserted into the TOR that match  qualifications specified by the subevent.  There are a number of subevent 'filters' but only a subset of the subevent combinations are valid.  Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set.  If, for example, one wanted to count DRD Local Misses, one should select MISS_OPC_MATCH and set Cn_MSR_PMON_BOX_FILTER.opc  to DRD (0x182).; Miss transactions inserted into the TOR that match an opcodellc_misses.data_llc_prefetchuncore cacheLLC prefetch misses for data reads. Derived from unc_c_tor_inserts.miss_opcodeevent=0x35,umask=3,filter_opc=0x1920164BytesCounts the number of entries successfully inserted into the TOR that match  qualifications specified by the subevent.  There are a number of subevent 'filters' but only a subset of the subevent combinations are valid.  Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set.  If, for example, one wanted to count DRD Local Misses, one should select MISS_OPC_MATCH and set Cn_MSR_PMON_BOX_FILTER.opc  to DRD (0x182).; Miss transactions inserted into the TOR that match an opcodellc_misses.data_readuncore cacheLLC misses - demand and prefetch data reads - excludes LLC prefetches. Derived from unc_c_tor_inserts.miss_opcodeevent=0x35,umask=3,filter_opc=0x1820164BytesCounts the number of entries successfully inserted into the TOR that match  qualifications specified by the subevent.  There are a number of subevent 'filters' but only a subset of the subevent combinations are valid.  Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set.  If, for example, one wanted to count DRD Local Misses, one should select MISS_OPC_MATCH and set Cn_MSR_PMON_BOX_FILTER.opc  to DRD (0x182).; Miss transactions inserted into the TOR that match an opcodellc_misses.mmio_readuncore cacheMMIO reads. Derived from unc_c_tor_inserts.miss_opcodeevent=0x35,umask=3,filter_opc=0x187,filter_nc=10164BytesCounts the number of entries successfully inserted into the TOR that match  qualifications specified by the subevent.  There are a number of subevent 'filters' but only a subset of the subevent combinations are valid.  
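The llc_misses.* metrics above all carry a 64Bytes scale: each qualifying TOR insert represents one 64-byte cache line, so miss bandwidth over an interval is the insert count times 64 divided by elapsed time. A small sketch under that reading (function name and sample numbers are illustrative):

# LLC miss bandwidth from TOR-insert counts. The llc_misses.* metrics are
# scaled at 64 bytes per event (one cache line per qualifying insert), so
# bytes = count * 64 and bandwidth = bytes / interval.

CACHE_LINE_BYTES = 64

def llc_miss_bandwidth_mb_s(tor_inserts: int, interval_s: float) -> float:
    if interval_s <= 0:
        raise ValueError("interval must be positive")
    return tor_inserts * CACHE_LINE_BYTES / interval_s / 1e6

# Illustrative: 10M data-read misses in 1 s is about 640 MB/s of miss traffic.
print(llc_miss_bandwidth_mb_s(tor_inserts=10_000_000, interval_s=1.0))
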
Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set.  If, for example, one wanted to count DRD Local Misses, one should select MISS_OPC_MATCH and set Cn_MSR_PMON_BOX_FILTER.opc  to DRD (0x182).; Miss transactions inserted into the TOR that match an opcodellc_misses.mmio_writeuncore cacheMMIO writes. Derived from unc_c_tor_inserts.miss_opcodeevent=0x35,umask=3,filter_opc=0x18f,filter_nc=10164BytesCounts the number of entries successfully inserted into the TOR that match  qualifications specified by the subevent.  There are a number of subevent 'filters' but only a subset of the subevent combinations are valid.  Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set.  If, for example, one wanted to count DRD Local Misses, one should select MISS_OPC_MATCH and set Cn_MSR_PMON_BOX_FILTER.opc  to DRD (0x182).; Miss transactions inserted into the TOR that match an opcodellc_misses.pcie_non_snoop_writeuncore cachePCIe write misses (full cache line). Derived from unc_c_tor_inserts.miss_opcodeevent=0x35,umask=3,filter_opc=0x1c8,filter_tid=0x3e0164BytesCounts the number of entries successfully inserted into the TOR that match  qualifications specified by the subevent.  There are a number of subevent 'filters' but only a subset of the subevent combinations are valid.  Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set.  If, for example, one wanted to count DRD Local Misses, one should select MISS_OPC_MATCH and set Cn_MSR_PMON_BOX_FILTER.opc  to DRD (0x182).; Miss transactions inserted into the TOR that match an opcodellc_misses.pcie_readuncore cacheLLC misses for PCIe read current. Derived from unc_c_tor_inserts.miss_opcodeevent=0x35,umask=3,filter_opc=0x19e0164BytesCounts the number of entries successfully inserted into the TOR that match  qualifications specified by the subevent.  There are a number of subevent 'filters' but only a subset of the subevent combinations are valid.  Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set.  If, for example, one wanted to count DRD Local Misses, one should select MISS_OPC_MATCH and set Cn_MSR_PMON_BOX_FILTER.opc  to DRD (0x182).; Miss transactions inserted into the TOR that match an opcodellc_misses.pcie_writeuncore cacheItoM write misses (as part of fast string memcpy stores) + PCIe full line writes. Derived from unc_c_tor_inserts.miss_opcodeevent=0x35,umask=3,filter_opc=0x1c80164BytesCounts the number of entries successfully inserted into the TOR that match  qualifications specified by the subevent.  There are a number of subevent 'filters' but only a subset of the subevent combinations are valid.  Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set.  If, for example, one wanted to count DRD Local Misses, one should select MISS_OPC_MATCH and set Cn_MSR_PMON_BOX_FILTER.opc  to DRD (0x182).; Miss transactions inserted into the TOR that match an opcodellc_misses.rfo_llc_prefetchuncore cacheLLC prefetch misses for RFO. Derived from unc_c_tor_inserts.miss_opcodeevent=0x35,umask=3,filter_opc=0x1900164BytesCounts the number of entries successfully inserted into the TOR that match  qualifications specified by the subevent.  There are a number of subevent 'filters' but only a subset of the subevent combinations are valid.  
Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set.  If, for example, one wanted to count DRD Local Misses, one should select MISS_OPC_MATCH and set Cn_MSR_PMON_BOX_FILTER.opc  to DRD (0x182).; Miss transactions inserted into the TOR that match an opcodellc_misses.uncacheableuncore cacheLLC misses - Uncacheable reads (from cpu) . Derived from unc_c_tor_inserts.miss_opcodeevent=0x35,umask=3,filter_opc=0x1870164BytesCounts the number of entries successfully inserted into the TOR that match  qualifications specified by the subevent.  There are a number of subevent 'filters' but only a subset of the subevent combinations are valid.  Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set.  If, for example, one wanted to count DRD Local Misses, one should select MISS_OPC_MATCH and set Cn_MSR_PMON_BOX_FILTER.opc  to DRD (0x182).; Miss transactions inserted into the TOR that match an opcodellc_references.code_llc_prefetchuncore cacheL2 demand and L2 prefetch code references to LLC. Derived from unc_c_tor_inserts.opcodeevent=0x35,umask=1,filter_opc=0x1810164BytesCounts the number of entries successfully inserted into the TOR that match  qualifications specified by the subevent.  There are a number of subevent 'filters' but only a subset of the subevent combinations are valid.  Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set.  If, for example, one wanted to count DRD Local Misses, one should select MISS_OPC_MATCH and set Cn_MSR_PMON_BOX_FILTER.opc  to DRD (0x182).; Transactions inserted into the TOR that match an opcode (matched by Cn_MSR_PMON_BOX_FILTER.opc)llc_references.pcie_ns_partial_writeuncore cachePCIe writes (partial cache line). Derived from unc_c_tor_inserts.opcodeevent=0x35,umask=1,filter_opc=0x180,filter_tid=0x3e01Counts the number of entries successfully inserted into the TOR that match  qualifications specified by the subevent.  There are a number of subevent 'filters' but only a subset of the subevent combinations are valid.  Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set.  If, for example, one wanted to count DRD Local Misses, one should select MISS_OPC_MATCH and set Cn_MSR_PMON_BOX_FILTER.opc  to DRD (0x182).; Transactions inserted into the TOR that match an opcode (matched by Cn_MSR_PMON_BOX_FILTER.opc)llc_references.pcie_readuncore cachePCIe read current. Derived from unc_c_tor_inserts.opcodeevent=0x35,umask=1,filter_opc=0x19e0164BytesCounts the number of entries successfully inserted into the TOR that match  qualifications specified by the subevent.  There are a number of subevent 'filters' but only a subset of the subevent combinations are valid.  Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set.  If, for example, one wanted to count DRD Local Misses, one should select MISS_OPC_MATCH and set Cn_MSR_PMON_BOX_FILTER.opc  to DRD (0x182).; Transactions inserted into the TOR that match an opcode (matched by Cn_MSR_PMON_BOX_FILTER.opc)llc_references.pcie_writeuncore cachePCIe write references (full cache line). Derived from unc_c_tor_inserts.opcodeevent=0x35,umask=1,filter_opc=0x1c8,filter_tid=0x3e0164BytesCounts the number of entries successfully inserted into the TOR that match  qualifications specified by the subevent.  
There are a number of subevent 'filters' but only a subset of the subevent combinations are valid.  Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set.  If, for example, one wanted to count DRD Local Misses, one should select MISS_OPC_MATCH and set Cn_MSR_PMON_BOX_FILTER.opc  to DRD (0x182).; Transactions inserted into the TOR that match an opcode (matched by Cn_MSR_PMON_BOX_FILTER.opc)llc_references.streaming_fulluncore cacheStreaming stores (full cache line). Derived from unc_c_tor_inserts.opcodeevent=0x35,umask=1,filter_opc=0x18c0164BytesCounts the number of entries successfully inserted into the TOR that match  qualifications specified by the subevent.  There are a number of subevent 'filters' but only a subset of the subevent combinations are valid.  Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set.  If, for example, one wanted to count DRD Local Misses, one should select MISS_OPC_MATCH and set Cn_MSR_PMON_BOX_FILTER.opc  to DRD (0x182).; Transactions inserted into the TOR that match an opcode (matched by Cn_MSR_PMON_BOX_FILTER.opc)llc_references.streaming_partialuncore cacheStreaming stores (partial cache line). Derived from unc_c_tor_inserts.opcodeevent=0x35,umask=1,filter_opc=0x18d0164BytesCounts the number of entries successfully inserted into the TOR that match  qualifications specified by the subevent.  There are a number of subevent 'filters' but only a subset of the subevent combinations are valid.  Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set.  If, for example, one wanted to count DRD Local Misses, one should select MISS_OPC_MATCH and set Cn_MSR_PMON_BOX_FILTER.opc  to DRD (0x182).; Transactions inserted into the TOR that match an opcode (matched by Cn_MSR_PMON_BOX_FILTER.opc)unc_c_llc_lookup.anyuncore cacheAll LLC Misses (code + data rd + data wr - including demand and prefetch)event=0x34,umask=0x11,filter_state=0x10164BytesCounts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2.  This has numerous filters available.  Note the non-standard filtering equation.  This event will count requests that lookup the cache multiple times with multiple increments.  One must ALWAYS set umask bit 0 and select a state or states to match.  Otherwise, the event will count nothing.  CBoGlCtrl[22:18] bits correspond to [FMESI] state.; Filters for any transaction originating from the IPQ or IRQ.  This does not include lookups originating from the ISMQunc_c_llc_victims.m_stateuncore cacheM line evictions from LLC (writebacks to memory)event=0x37,umask=10164BytesCounts the number of lines that were victimized on a fill.  This can be filtered by the state that the line was inunc_c_ring_ad_used.downuncore cacheAD Ring In Use; Downevent=0x1b,umask=0xc01Counts the number of cycles that the AD ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.  We really have two rings in BDX -- a clockwise ring and a counter-clockwise ring.  On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring.  On the right side of the ring, this is reversed.  The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring.  
In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ringunc_c_ring_ad_used.upuncore cacheAD Ring In Use; Upevent=0x1b,umask=301Counts the number of cycles that the AD ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.  We really have two rings in BDX -- a clockwise ring and a counter-clockwise ring.  On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring.  On the right side of the ring, this is reversed.  The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring.  In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ringunc_c_ring_ak_used.downuncore cacheAK Ring In Use; Downevent=0x1c,umask=0xc01Counts the number of cycles that the AK ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.  We really have two rings in BDX -- a clockwise ring and a counter-clockwise ring.  On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring.  On the right side of the ring, this is reversed.  The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring.  In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ringunc_c_ring_ak_used.upuncore cacheAK Ring In Use; Upevent=0x1c,umask=301Counts the number of cycles that the AK ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.  We really have two rings in BDX -- a clockwise ring and a counter-clockwise ring.  On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring.  On the right side of the ring, this is reversed.  The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring.  In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ringunc_c_ring_bl_used.downuncore cacheBL Ring in Use; Downevent=0x1d,umask=0xc01Counts the number of cycles that the BL ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.  We really have two rings in BDX -- a clockwise ring and a counter-clockwise ring.  On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring.  On the right side of the ring, this is reversed.  The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring.  In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ringunc_c_ring_bl_used.upuncore cacheBL Ring in Use; Upevent=0x1d,umask=301Counts the number of cycles that the BL ring is being used at this ring stop.  
This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.  We really have two rings in BDX -- a clockwise ring and a counter-clockwise ring.  On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring.  On the right side of the ring, this is reversed.  The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring.  In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ringunc_c_tor_occupancy.llc_data_readuncore cacheOccupancy counter for LLC data reads (demand and L2 prefetch). Derived from unc_c_tor_occupancy.miss_opcodeevent=0x36,umask=3,filter_opc=0x18201For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent.  There are a number of subevent 'filters' but only a subset of the subevent combinations are valid.  Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set.  If, for example, one wanted to count DRD Local Misses, one should select MISS_OPC_MATCH and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182); TOR entries for miss transactions that match an opcode. This generally means that the request was sent to memory or MMIOunc_h_snoop_resp.rspifwduncore cacheM line forwarded from remote cache with no writeback to memoryevent=0x21,umask=40164BytesCounts the total number of RspI snoop responses received.  Whenever snoops are issued, one or more snoop responses will be returned depending on the topology of the system.  In systems larger than 2s, when multiple snoops are returned, this will count all the snoops that are received.  For example, if 3 snoops were issued and returned RspI, RspS, and RspSFwd, then each of these sub-events would increment by 1.; Filters for snoop responses of RspIFwd.  This is returned when a remote caching agent forwards data and the requesting agent is able to acquire the data in E or M states.  This is commonly returned with RFO transactions.  It can be either a HitM or a HitFEunc_h_snoop_resp.rspsuncore cacheShared line response from remote cacheevent=0x21,umask=20164BytesCounts the total number of RspI snoop responses received.  Whenever snoops are issued, one or more snoop responses will be returned depending on the topology of the system.  In systems larger than 2s, when multiple snoops are returned, this will count all the snoops that are received.  For example, if 3 snoops were issued and returned RspI, RspS, and RspSFwd, then each of these sub-events would increment by 1.; Filters for snoop responses of RspS.  RspS is returned when a remote cache has data but is not forwarding it.  It is a way to let the requesting socket know that it cannot allocate the data in E state.  No data is sent with RspSunc_h_snoop_resp.rspsfwduncore cacheShared line forwarded from remote cacheevent=0x21,umask=80164BytesCounts the total number of RspI snoop responses received.  Whenever snoops are issued, one or more snoop responses will be returned depending on the topology of the system.  In systems larger than 2s, when multiple snoops are returned, this will count all the snoops that are received.  For example, if 3 snoops were issued and returned RspI, RspS, and RspSFwd, then each of these sub-events would increment by 1.; Filters for a snoop response of RspSFwd.  
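unc_c_tor_occupancy.llc_data_read above accumulates, every uncore cycle, the number of valid TOR entries matching the filter; dividing that accumulator by the matching unc_c_tor_inserts count yields an average residency via Little's Law. That pairing is a conventional derived metric rather than something these descriptions state outright, so the sketch below is an assumption-labeled illustration:

# Average TOR residency (a proxy for LLC data-read miss latency) via
# Little's Law: occupancy accumulates matching valid entries per cycle, so
# occupancy_accum / inserts = average cycles each entry spent in the TOR.
# ASSUMPTION: occupancy and inserts use the same opcode filter; the uncore
# frequency and all sample numbers below are invented for illustration.

def avg_tor_residency_cycles(occupancy_accum: int, inserts: int) -> float:
    return occupancy_accum / inserts if inserts else 0.0

def avg_tor_residency_ns(occupancy_accum: int, inserts: int, uncore_ghz: float) -> float:
    return avg_tor_residency_cycles(occupancy_accum, inserts) / uncore_ghz

# Illustrative: 300M entry-cycles over 2M inserts at 2.0 GHz -> 150 cycles, 75 ns.
print(avg_tor_residency_ns(occupancy_accum=300_000_000, inserts=2_000_000, uncore_ghz=2.0))
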
unc_h_snoop_resp.rsp_fwd_wb (uncore cache, event=0x21,umask=0x20, scale 64Bytes): M line forwarded from remote cache along with writeback to memory.
  Same snoop-response counting as above; filters for a snoop response of Rsp*Fwd*WB. This snoop response is only used in 4S systems: it is used when a snoop HITMs in a remote caching agent and the agent directly forwards data to a requestor while simultaneously returning data to the home to be written back to memory.

uncore_qpi:

qpi_ctl_bandwidth_tx (uncore interconnect, event=0,umask=4, scale 8Bytes): Number of non-data (control) flits transmitted. Derived from unc_q_txl_flits_g0.non_data.
  Counts the number of flits transmitted across the QPI Link, with filters for Idle, protocol, and Data flits. Each flit is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four fits, each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, so it takes twice as many fits to transmit a flit. When one talks about QPI speed (for example, 8.0 GT/s), the transfers refer to fits; therefore, in L0 the system will transfer 1 flit at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link as flits * 80b / time. Note that this is not the same as data bandwidth: when transferring a 64B cacheline across QPI, it is broken into 9 flits -- 1 with header information and 8 each carrying 64 bits of actual data plus an additional 16 bits of other information. To calculate data bandwidth, one should therefore compute data flits * 8B / time (for L0), or 4B instead of 8B for L0p. This sub-event counts non-NULL non-data flits transmitted across QPI, which basically tracks the protocol overhead on the QPI link; it includes the header flits for data packets. One can get a good picture of the QPI-link characteristics by evaluating the protocol flits, data flits, and idle/null flits.

qpi_data_bandwidth_tx (uncore interconnect, event=0,umask=2, scale 8Bytes): Number of data flits transmitted. Derived from unc_q_txl_flits_g0.data.
  Same flit-counting description as qpi_ctl_bandwidth_tx above. This sub-event counts data flits transmitted over QPI; each flit contains 64b of data. It includes both DRS and NCB data flits (coherent and non-coherent) and can be used to calculate the data bandwidth of the QPI link. It does not include the header flits that go in data packets.
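The same flit arithmetic repeats across many of the events below. A minimal sketch of it in Python, assuming L0 (full-width) operation; all counter values are invented.

```python
# Sketch of the bandwidth arithmetic from the descriptions above.
# In L0 each flit is 80 bits on the wire; a data flit carries 8B of
# payload (4B in half-width L0p mode). Sample counts are invented.
seconds = 1.0
data_flits = 4_500_000      # e.g. qpi_data_bandwidth_tx raw count
ctl_flits  = 1_200_000      # e.g. qpi_ctl_bandwidth_tx raw count

link_bw_bits  = (data_flits + ctl_flits) * 80 / seconds  # flits * 80b / time
data_bw_bytes = data_flits * 8 / seconds                 # data flits * 8B (L0)

# A 64B cacheline costs 9 flits: 1 header + 8 carrying 64 data bits each.
cachelines = data_flits / 8
print(f"link: {link_bw_bits / 1e9:.2f} Gbit/s, "
      f"data: {data_bw_bytes / 2**20:.1f} MiB/s, "
      f"~{cachelines:,.0f} cachelines")
```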
unc_q_clockticks (uncore interconnect, event=0x14): Number of qfclks.
  Counts the number of clocks in the QPI LL. This clock runs at 1/4th the GT/s speed of the QPI link; for example, a 4 GT/s link will have a qfclk of 1 GHz. BDX does not support dynamic link speeds, so this frequency is fixed.

unc_q_cto_count (uncore interconnect, event=0x38): Count of CTO Events.
  Counts the number of CTO (cluster trigger out) events that were asserted across the two slots. If both slots trigger in a given cycle, the event will increment by 2. You can use edge detect to count the number of cases when both events triggered.

unc_q_direct2core.failure_credits (uncore interconnect, event=0x13,umask=2): Direct 2 Core Spawning; Spawn Failure - Egress Credits.
  Counts the number of DRS packets that we attempted to do direct2core on. There are 4 mutually exclusive filters: filter [0] can be used to get successful spawns, while [1:3] provide the different failure cases. Note that this does not count packets that are not candidates for Direct2Core; the only candidates are DRS packets destined for Cbos. This sub-event counts spawns that failed because there were not enough Egress credits. Had there been enough credits, the spawn would have worked, as the RBT bit was set and the RBT tag matched.

unc_q_direct2core.failure_credits_miss (uncore interconnect, event=0x13,umask=0x20): Direct 2 Core Spawning; Spawn Failure - Egress and RBT Miss.
  Same Direct2Core counting as above; the spawn failed because the RBT tag did not match and there weren't enough Egress credits. The valid bit was set.

unc_q_direct2core.failure_credits_rbt (uncore interconnect, event=0x13,umask=8): Direct 2 Core Spawning; Spawn Failure - Egress and RBT Invalid.
  Same Direct2Core counting as above; the spawn failed because there were not enough Egress credits AND the RBT bit was not set, but the RBT tag matched.

unc_q_direct2core.failure_credits_rbt_miss (uncore interconnect, event=0x13,umask=0x80): Direct 2 Core Spawning; Spawn Failure - Egress and RBT Miss, Invalid.
  Same Direct2Core counting as above; the spawn failed because the RBT tag did not match, the valid bit was not set, and there weren't enough Egress credits.
unc_q_direct2core.failure_miss (uncore interconnect, event=0x13,umask=0x10): Direct 2 Core Spawning; Spawn Failure - RBT Miss.
  Same Direct2Core counting as above; the spawn failed because the RBT tag did not match, although the valid bit was set and there were enough Egress credits.

unc_q_direct2core.failure_rbt_hit (uncore interconnect, event=0x13,umask=4): Direct 2 Core Spawning; Spawn Failure - RBT Invalid.
  Same Direct2Core counting as above; the spawn failed because the route-back table (RBT) specified that the transaction should not trigger a direct2core transaction (common for IO transactions). There were enough Egress credits and the RBT tag matched, but the valid bit was not set.

unc_q_direct2core.failure_rbt_miss (uncore interconnect, event=0x13,umask=0x40): Direct 2 Core Spawning; Spawn Failure - RBT Miss and Invalid.
  Same Direct2Core counting as above; the spawn failed because the RBT tag did not match and the valid bit was not set, although there were enough Egress credits.

unc_q_direct2core.success_rbt_hit (uncore interconnect, event=0x13,umask=1): Direct 2 Core Spawning; Spawn Success.
  Same Direct2Core counting as above; the spawn was successful. There were sufficient credits, the RBT valid bit was set, and there was an RBT tag match. The message was marked to spawn direct2core.

unc_q_l1_power_cycles (uncore interconnect, event=0x12): Cycles in L1.
  Number of QPI qfclk cycles spent in L1 power mode. L1 is a mode that totally shuts down a QPI link. Use edge detect to count the number of instances when the QPI link entered L1. Link power states are per link and per direction, so for example the Tx direction could be in one state while Rx was in another. Because L1 totally shuts down the link, it takes a good amount of time to exit this mode.
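Since the success and failure sub-events above are described as mutually exclusive views of the same candidate packets, one plausible derived metric is a spawn success rate. A hedged sketch with invented counter values:

```python
# Sketch: Direct2Core spawn success rate. Treating the sub-events above as
# a partition of the candidate packets is a reading of the description,
# not a verified property. All values are invented.
success = 900_000                    # unc_q_direct2core.success_rbt_hit
failures = {
    "credits": 40_000, "credits_miss": 5_000, "credits_rbt": 3_000,
    "credits_rbt_miss": 1_000, "miss": 20_000, "rbt_hit": 15_000,
    "rbt_miss": 2_000,
}
candidates = success + sum(failures.values())
print(f"D2C spawn success rate: {success / candidates:.1%}")
print("dominant failure:", max(failures, key=failures.get))
```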
unc_q_rxl0p_power_cycles (uncore interconnect, event=0x10): Cycles in L0p.
  Number of QPI qfclk cycles spent in L0p power mode. L0p is a mode where we disable 1/2 of the QPI lanes, decreasing bandwidth in order to save power. It increases snoop and data transfer latencies and decreases overall bandwidth. This mode can be very useful in NUMA-optimized workloads that largely only utilize QPI for snoops and their responses. Use edge detect to count the number of instances when the QPI link entered L0p. Link power states are per link and per direction, so for example the Tx direction could be in one state while Rx was in another.

unc_q_rxl0_power_cycles (uncore interconnect, event=0xf): Cycles in L0.
  Number of QPI qfclk cycles spent in L0 power mode in the Link Layer. L0 is the default mode, which provides the highest performance with the most power. Use edge detect to count the number of instances that the link entered L0. Link power states are per link and per direction, so for example the Tx direction could be in one state while Rx was in another. The phy layer sometimes leaves L0 for training, which will not be captured by this event.

unc_q_rxl_bypassed (uncore interconnect, event=9): Rx Flit Buffer Bypassed.
  Counts the number of times that an incoming flit was able to bypass the flit buffer and pass directly across the BGF and into the Egress. This is a latency optimization and should generally be the common case. If this value is less than the number of flits transferred, it implies that there was queueing getting onto the ring, and thus the transactions saw higher latency.

unc_q_rxl_crc_errors.link_init (uncore interconnect, event=3,umask=1): CRC Errors Detected; LinkInit.
  Number of CRC errors detected in the QPI Agent. Each QPI flit incorporates 8 bits of CRC for error detection. This counts the number of flits where the CRC was able to detect an error. After an error has been detected, the QPI agent will send a request to the transmitting socket to resend the flit (as well as any flits that came after it). This sub-event counts CRC errors detected during link initialization.

unc_q_rxl_crc_errors.normal_op (uncore interconnect, event=3,umask=2): UNC_Q_RxL_CRC_ERRORS.NORMAL_OP (no further description in the source data).

unc_q_rxl_credits_consumed_vn0.drs (uncore interconnect, event=0x1e,umask=1): VN0 Credit Consumed; DRS.
  Counts the number of times that an RxQ VN0 credit was consumed (i.e. a message uses a VN0 credit for the Rx Buffer). This includes packets that went through the RxQ and those that were bypassed. This sub-event counts VN0 credits for the DRS message class.

unc_q_rxl_credits_consumed_vn0.hom (uncore interconnect, event=0x1e,umask=8): VN0 Credit Consumed; HOM.
  Same as above, for the HOM message class.

unc_q_rxl_credits_consumed_vn0.ncb (uncore interconnect, event=0x1e,umask=2): VN0 Credit Consumed; NCB.
  Same as above, for the NCB message class.

unc_q_rxl_credits_consumed_vn0.ncs (uncore interconnect, event=0x1e,umask=4): VN0 Credit Consumed; NCS.
  Same as above, for the NCS message class.
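A minimal sketch of turning the power-cycle counters above into link-state residencies, assuming they tick on the same qfclk that unc_q_clockticks counts; all values are invented.

```python
# Sketch: link power-state residency per the L0/L0p/L1 cycle counters.
# Assumes all counters share the qfclk time base. Values are invented.
clockticks = 1_000_000_000   # unc_q_clockticks
l0   = 700_000_000           # unc_q_rxl0_power_cycles
l0p  = 250_000_000           # unc_q_rxl0p_power_cycles
l1   =  50_000_000           # unc_q_l1_power_cycles
for name, cyc in (("L0", l0), ("L0p", l0p), ("L1", l1)):
    print(f"{name:3s}: {cyc / clockticks:6.1%} of qfclk cycles")
```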
unc_q_rxl_credits_consumed_vn0.ndr (uncore interconnect, event=0x1e,umask=0x20): VN0 Credit Consumed; NDR.
  Same as above, for the NDR message class.

unc_q_rxl_credits_consumed_vn0.snp (uncore interconnect, event=0x1e,umask=0x10): VN0 Credit Consumed; SNP.
  Same as above, for the SNP message class.

unc_q_rxl_credits_consumed_vn1.drs (uncore interconnect, event=0x39,umask=1): VN1 Credit Consumed; DRS.
  Counts the number of times that an RxQ VN1 credit was consumed (i.e. a message uses a VN1 credit for the Rx Buffer). This includes packets that went through the RxQ and those that were bypassed. This sub-event counts VN1 credits for the DRS message class.

unc_q_rxl_credits_consumed_vn1.hom (uncore interconnect, event=0x39,umask=8): VN1 Credit Consumed; HOM.
  Same as above, for the HOM message class.

unc_q_rxl_credits_consumed_vn1.ncb (uncore interconnect, event=0x39,umask=2): VN1 Credit Consumed; NCB.
  Same as above, for the NCB message class.

unc_q_rxl_credits_consumed_vn1.ncs (uncore interconnect, event=0x39,umask=4): VN1 Credit Consumed; NCS.
  Same as above, for the NCS message class.

unc_q_rxl_credits_consumed_vn1.ndr (uncore interconnect, event=0x39,umask=0x20): VN1 Credit Consumed; NDR.
  Same as above, for the NDR message class.

unc_q_rxl_credits_consumed_vn1.snp (uncore interconnect, event=0x39,umask=0x10): VN1 Credit Consumed; SNP.
  Same as above, for the SNP message class.

unc_q_rxl_credits_consumed_vna (uncore interconnect, event=0x1d): VNA Credit Consumed.
  Counts the number of times that an RxQ VNA credit was consumed (i.e. a message uses a VNA credit for the Rx Buffer). This includes packets that went through the RxQ and those that were bypassed.

unc_q_rxl_cycles_ne (uncore interconnect, event=0xa): RxQ Cycles Not Empty.
  Counts the number of cycles that the QPI RxQ was not empty. Generally, when data is transmitted across QPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Occupancy Accumulator event to calculate the average occupancy.
unc_q_rxl_cycles_ne_drs.vn0 (uncore interconnect, event=0xf,umask=1): RxQ Cycles Not Empty - DRS; for VN0.
  Same description as unc_q_rxl_cycles_ne above; monitors DRS flits only.

unc_q_rxl_cycles_ne_drs.vn1 (uncore interconnect, event=0xf,umask=2): RxQ Cycles Not Empty - DRS; for VN1.
  Same description as above; monitors DRS flits only.

unc_q_rxl_cycles_ne_hom.vn0 (uncore interconnect, event=0x12,umask=1): RxQ Cycles Not Empty - HOM; for VN0.
  Same description as above; monitors HOM flits only.

unc_q_rxl_cycles_ne_hom.vn1 (uncore interconnect, event=0x12,umask=2): RxQ Cycles Not Empty - HOM; for VN1.
  Same description as above; monitors HOM flits only.

unc_q_rxl_cycles_ne_ncb.vn0 (uncore interconnect, event=0x10,umask=1): RxQ Cycles Not Empty - NCB; for VN0.
  Same description as above; monitors NCB flits only.

unc_q_rxl_cycles_ne_ncb.vn1 (uncore interconnect, event=0x10,umask=2): RxQ Cycles Not Empty - NCB; for VN1.
  Same description as above; monitors NCB flits only.
unc_q_rxl_cycles_ne_ncs.vn0 (uncore interconnect, event=0x11,umask=1): RxQ Cycles Not Empty - NCS; for VN0.
  Same description as unc_q_rxl_cycles_ne above; monitors NCS flits only.

unc_q_rxl_cycles_ne_ncs.vn1 (uncore interconnect, event=0x11,umask=2): RxQ Cycles Not Empty - NCS; for VN1.
  Same description as above; monitors NCS flits only.

unc_q_rxl_cycles_ne_ndr.vn0 (uncore interconnect, event=0x14,umask=1): RxQ Cycles Not Empty - NDR; for VN0.
  Same description as above; monitors NDR flits only.

unc_q_rxl_cycles_ne_ndr.vn1 (uncore interconnect, event=0x14,umask=2): RxQ Cycles Not Empty - NDR; for VN1.
  Same description as above; monitors NDR flits only.

unc_q_rxl_cycles_ne_snp.vn0 (uncore interconnect, event=0x13,umask=1): RxQ Cycles Not Empty - SNP; for VN0.
  Same description as above; monitors SNP flits only.

unc_q_rxl_cycles_ne_snp.vn1 (uncore interconnect, event=0x13,umask=2): RxQ Cycles Not Empty - SNP; for VN1.
  Same description as above; monitors SNP flits only.

unc_q_rxl_flits_g0.idle (uncore interconnect, event=1,umask=1): Flits Received - Group 0; Idle and Null Flits.
  Counts the number of flits received from the QPI Link, with filters for Idle, protocol, and Data flits. The flit/fit arithmetic is the same as described for qpi_ctl_bandwidth_tx above. This sub-event counts flits received over QPI that do not hold protocol payload. When QPI is not in a power-saving state, it continuously transmits flits across the link; when there are no protocol flits to send, it will send IDLE and NULL flits across. These flits sometimes do carry a payload, such as credit returns, but are generally not considered part of the QPI bandwidth.
unc_q_rxl_flits_g1.drs (uncore interconnect, event=2,umask=0x18): Flits Received - Group 1; DRS Flits (both Header and Data).
  Counts the number of flits received from the QPI Link. This is one of three groups that allow us to track flits; it includes filters for the SNP, HOM, and DRS message classes. The flit/fit arithmetic is the same as described for qpi_ctl_bandwidth_tx above. This sub-event counts the total number of flits received over QPI on the DRS (Data Response) channel. DRS flits are used to transmit data with coherency; this does not count data flits received over the NCB channel, which transmits non-coherent data.

unc_q_rxl_flits_g1.drs_data (uncore interconnect, event=2,umask=8): Flits Received - Group 1; DRS Data Flits.
  Same Group 1 flit counting as above; counts only the data flits (not the headers) received on the DRS channel.
unc_q_rxl_flits_g1.drs_nondata (uncore interconnect, event=2,umask=0x10): Flits Received - Group 1; DRS Header Flits.
  Same Group 1 flit counting as above; counts only the protocol (header) flits received on the DRS channel, not the data. This includes extended headers.

unc_q_rxl_flits_g1.hom (uncore interconnect, event=2,umask=6): Flits Received - Group 1; HOM Flits.
  Same Group 1 flit counting as above; counts the number of flits received over QPI on the home channel.
unc_q_rxl_flits_g1.hom_nonreq (uncore interconnect, event=2,umask=4): Flits Received - Group 1; HOM Non-Request Flits.
  Same Group 1 flit counting as above; counts the number of non-request flits received over QPI on the home channel. These are most commonly snoop responses, and this event can be used as a proxy for that.

unc_q_rxl_flits_g1.hom_req (uncore interconnect, event=2,umask=2): Flits Received - Group 1; HOM Request Flits.
  Same Group 1 flit counting as above; counts the number of data requests received over QPI on the home channel. This basically counts the number of remote memory requests received over QPI. In conjunction with the local read count in the Home Agent, one can calculate the number of LLC misses (see the sketch below, after the SNP entry).

unc_q_rxl_flits_g1.snp (uncore interconnect, event=2,umask=1): Flits Received - Group 1; SNP Flits.
  Same Group 1 flit counting as above; counts the number of snoop request flits received over QPI. These requests are contained in the snoop channel. This does not include snoop responses, which are received on the home channel.
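The hom_req description hints at a derived LLC-miss estimate. The sketch below is one plausible, unverified reading of that hint: local reads from the Home Agent plus remote HOM requests approximate the misses serviced by this socket's memory. Counter values are invented.

```python
# Sketch of the derived metric hinted at by unc_q_rxl_flits_g1.hom_req.
# The combination below is an assumption, not a documented formula.
ha_local_reads = 3_200_000   # local read count from the Home Agent
hom_req_flits  = 1_100_000   # unc_q_rxl_flits_g1.hom_req
print(f"~{ha_local_reads + hom_req_flits:,} LLC misses serviced by this socket")
```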
unc_q_rxl_flits_g2.ncb (uncore interconnect, event=3,umask=0xc): Flits Received - Group 2; Non-Coherent Rx Flits.
  Counts the number of flits received from the QPI Link. This is one of three groups that allow us to track flits; it includes filters for the NDR, NCB, and NCS message classes. The flit/fit arithmetic is the same as described for qpi_ctl_bandwidth_tx above. This sub-event counts Non-Coherent Bypass flits; these packets are generally used to transmit non-coherent data across QPI.

unc_q_rxl_flits_g2.ncb_data (uncore interconnect, event=3,umask=4): Flits Received - Group 2; Non-Coherent data Rx Flits.
  Same Group 2 flit counting as above; counts Non-Coherent Bypass data flits, which are generally used to transmit non-coherent data across QPI.
This does not include a count of the DRS (coherent) data flits, and it counts only the data flits, not the NCB headers.

unc_q_rxl_flits_g2.ncb_nondata (uncore interconnect, event=3,umask=8): Flits Received - Group 2; Non-Coherent non-data Rx Flits.
  Same Group 2 flit counting as above; counts Non-Coherent Bypass non-data flits, i.e. the headers and other non-data flits of packets that transmit non-coherent data across QPI. This includes extended headers.

unc_q_rxl_flits_g2.ncs (uncore interconnect, event=3,umask=0x10): Flits Received - Group 2; Non-Coherent standard Rx Flits.
  Same Group 2 flit counting as above; counts NCS (non-coherent standard) flits received over QPI. This includes extended headers.

unc_q_rxl_flits_g2.ndr_ad (uncore interconnect, event=3,umask=1): Flits Received - Group 2; Non-Data Response Rx Flits - AD.
  Same Group 2 flit counting as above; counts the total number of flits received over the NDR (Non-Data Response) channel. This channel is used to send a variety of protocol flits, including grants and completions. This is only for NDR packets to the local socket, which use the AK ring.
unc_q_rxl_flits_g2.ndr_ak (uncore interconnect, event=3,umask=2): Flits Received - Group 2; Non-Data Response Rx Flits - AK.
  Same Group 2 flit counting as above; counts the total number of flits received over the NDR (Non-Data Response) channel. This channel is used to send a variety of protocol flits, including grants and completions. This is only for NDR packets destined for route-thru to a remote socket.

unc_q_rxl_inserts (uncore interconnect, event=8): Rx Flit Buffer Allocations.
  Number of allocations into the QPI Rx Flit Buffer. Generally, when data is transmitted across QPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Occupancy event in order to calculate the average flit buffer lifetime (see the sketch below).

unc_q_rxl_inserts_drs.vn0 (uncore interconnect, event=9,umask=1): Rx Flit Buffer Allocations - DRS; for VN0.
  Same description as unc_q_rxl_inserts above; monitors only DRS flits.

unc_q_rxl_inserts_drs.vn1 (uncore interconnect, event=9,umask=2): Rx Flit Buffer Allocations - DRS; for VN1.
  Same description as above; monitors only DRS flits.
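Since the bypass path is described as the common case, the ratio of unc_q_rxl_bypassed to total arrivals is a quick health check. A small sketch with invented values:

```python
# Sketch: fraction of incoming flits that took the latency-optimized
# bypass path instead of being buffered. Counter values are invented.
bypassed = 9_800_000         # unc_q_rxl_bypassed
inserts  =   200_000         # unc_q_rxl_inserts (flit buffer allocations)
print(f"bypass fraction: {bypassed / (bypassed + inserts):.1%}")
```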
unc_q_rxl_inserts_hom.vn0 (uncore interconnect, event=0xc,umask=1): Rx Flit Buffer Allocations - HOM; for VN0.
  Same description as unc_q_rxl_inserts above; monitors only HOM flits.

unc_q_rxl_inserts_hom.vn1 (uncore interconnect, event=0xc,umask=2): Rx Flit Buffer Allocations - HOM; for VN1.
  Same description as above; monitors only HOM flits.

unc_q_rxl_inserts_ncb.vn0 (uncore interconnect, event=0xa,umask=1): Rx Flit Buffer Allocations - NCB; for VN0.
  Same description as above; monitors only NCB flits.

unc_q_rxl_inserts_ncb.vn1 (uncore interconnect, event=0xa,umask=2): Rx Flit Buffer Allocations - NCB; for VN1.
  Same description as above; monitors only NCB flits.

unc_q_rxl_inserts_ncs.vn0 (uncore interconnect, event=0xb,umask=1): Rx Flit Buffer Allocations - NCS; for VN0.
  Same description as above; monitors only NCS flits.

unc_q_rxl_inserts_ncs.vn1 (uncore interconnect, event=0xb,umask=2): Rx Flit Buffer Allocations - NCS; for VN1.
  Same description as above; monitors only NCS flits.
unc_q_rxl_inserts_ndr.vn0 (uncore interconnect, event=0xe,umask=1): Rx Flit Buffer Allocations - NDR; for VN0.
  Same description as unc_q_rxl_inserts above; monitors only NDR flits.

unc_q_rxl_inserts_ndr.vn1 (uncore interconnect, event=0xe,umask=2): Rx Flit Buffer Allocations - NDR; for VN1.
  Same description as above; monitors only NDR flits.

unc_q_rxl_inserts_snp.vn0 (uncore interconnect, event=0xd,umask=1): Rx Flit Buffer Allocations - SNP; for VN0.
  Same description as above; monitors only SNP flits.

unc_q_rxl_inserts_snp.vn1 (uncore interconnect, event=0xd,umask=2): Rx Flit Buffer Allocations - SNP; for VN1.
  Same description as above; monitors only SNP flits.

unc_q_rxl_occupancy (uncore interconnect, event=0xb): RxQ Occupancy - All Packets.
  Accumulates the number of elements in the QPI RxQ in each cycle. Generally, when data is transmitted across QPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Not Empty event to calculate average occupancy, or with the Flit Buffer Allocations event to track average lifetime.

unc_q_rxl_occupancy_drs.vn0 (uncore interconnect, event=0x15,umask=1): RxQ Occupancy - DRS; for VN0.
  Same description as unc_q_rxl_occupancy above; monitors DRS flits only.
unc_q_rxl_occupancy_drs.vn1 (uncore interconnect, event=0x15,umask=2): RxQ Occupancy - DRS; for VN1.
  Same description as above; monitors DRS flits only.

unc_q_rxl_occupancy_hom.vn0 (uncore interconnect, event=0x18,umask=1): RxQ Occupancy - HOM; for VN0.
  Same description as above; monitors HOM flits only.

unc_q_rxl_occupancy_hom.vn1 (uncore interconnect, event=0x18,umask=2): RxQ Occupancy - HOM; for VN1.
  Same description as above; monitors HOM flits only.

unc_q_rxl_occupancy_ncb.vn0 (uncore interconnect, event=0x16,umask=1): RxQ Occupancy - NCB; for VN0.
  Same description as above; monitors NCB flits only.

unc_q_rxl_occupancy_ncb.vn1 (uncore interconnect, event=0x16,umask=2): RxQ Occupancy - NCB; for VN1.
  Same description as above; monitors NCB flits only.

unc_q_rxl_occupancy_ncs.vn0 (uncore interconnect, event=0x17,umask=1): RxQ Occupancy - NCS; for VN0.
  Same description as above; monitors NCS flits only.
unc_q_rxl_occupancy_ncs.vn1 (uncore interconnect, event=0x17,umask=2): RxQ Occupancy - NCS; for VN1.
  Same description as unc_q_rxl_occupancy above; monitors NCS flits only.

unc_q_rxl_occupancy_ndr.vn0 (uncore interconnect, event=0x1a,umask=1): RxQ Occupancy - NDR; for VN0.
  Same description as above; monitors NDR flits only.

unc_q_rxl_occupancy_ndr.vn1 (uncore interconnect, event=0x1a,umask=2): RxQ Occupancy - NDR; for VN1.
  Same description as above; monitors NDR flits only.

unc_q_rxl_occupancy_snp.vn0 (uncore interconnect, event=0x19,umask=1): RxQ Occupancy - SNP; for VN0.
  Same description as above; monitors SNP flits only.

unc_q_rxl_occupancy_snp.vn1 (uncore interconnect, event=0x19,umask=2): RxQ Occupancy - SNP; for VN1.
  Same description as above; monitors SNP flits only.
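The occupancy, not-empty, and allocation events above combine into the derived metrics their descriptions name. A minimal sketch, with invented counter values:

```python
# Sketch of the derived metrics named in the descriptions above:
# average occupancy while busy, and average flit-buffer lifetime
# (occupancy accumulator / allocations, i.e. Little's law). Values invented.
occupancy_sum = 1_500_000    # unc_q_rxl_occupancy (accumulated per cycle)
cycles_ne     =   600_000    # unc_q_rxl_cycles_ne (RxQ not-empty cycles)
inserts       =   200_000    # unc_q_rxl_inserts (allocations)

avg_occupancy_when_busy = occupancy_sum / cycles_ne
avg_lifetime_cycles     = occupancy_sum / inserts
print(f"avg occupancy while not empty: {avg_occupancy_when_busy:.2f} entries")
print(f"avg flit buffer lifetime: {avg_lifetime_cycles:.2f} qfclk cycles")
```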
unc_q_rxl_stalls_vn0.* -- uncore interconnect: Stalls Sending to R3QPI on VN0 (event=0x35)
  Number of stalls trying to send to R3QPI on Virtual Network 0. For the BGF variants, a packet from the named message class was stalled because there were not enough BGF credits; in bypass mode, we will stall on the packet boundary, while in RxQ mode we will stall on the flit boundary:
    unc_q_rxl_stalls_vn0.bgf_drs  umask=1     BGF Stall - HOM (HOM message class)
    unc_q_rxl_stalls_vn0.bgf_hom  umask=8     BGF Stall - DRS (DRS message class)
    unc_q_rxl_stalls_vn0.bgf_ncb  umask=2     BGF Stall - SNP (SNP message class)
    unc_q_rxl_stalls_vn0.bgf_ncs  umask=4     BGF Stall - NDR (NDR message class)
    unc_q_rxl_stalls_vn0.bgf_ndr  umask=0x20  BGF Stall - NCS (NCS message class)
    unc_q_rxl_stalls_vn0.bgf_snp  umask=0x10  BGF Stall - NCB (NCB message class)
    unc_q_rxl_stalls_vn0.egress_credits  umask=0x40  Egress Credits: stalled a packet because there were insufficient BGF credits; for details on a message class granularity, use the Egress Credit Occupancy events
    unc_q_rxl_stalls_vn0.gv  umask=0x80  GV: stalled because a GV transition (frequency transition) was taking place

unc_q_rxl_stalls_vn1.* -- uncore interconnect: Stalls Sending to R3QPI on VN1 (event=0x3a)
  Number of stalls trying to send to R3QPI on Virtual Network 1. A packet from the named message class was stalled because there were not enough BGF credits; in bypass mode, we will stall on the packet boundary, while in RxQ mode we will stall on the flit boundary:
    unc_q_rxl_stalls_vn1.bgf_drs  umask=1     BGF Stall - HOM (HOM message class)
    unc_q_rxl_stalls_vn1.bgf_hom  umask=8     BGF Stall - DRS (DRS message class)
    unc_q_rxl_stalls_vn1.bgf_ncb  umask=2     BGF Stall - SNP (SNP message class)
    unc_q_rxl_stalls_vn1.bgf_ncs  umask=4     BGF Stall - NDR (NDR message class)
    unc_q_rxl_stalls_vn1.bgf_ndr  umask=0x20  BGF Stall - NCS (NCS message class)
    unc_q_rxl_stalls_vn1.bgf_snp  umask=0x10  BGF Stall - NCB (NCB message class)

unc_q_txl0p_power_cycles -- uncore interconnect: Cycles in L0p (event=0xd)
  Number of QPI qfclk cycles spent in L0p power mode. L0p is a mode where we disable 1/2 of the QPI lanes, decreasing our bandwidth in order to save power. It increases snoop and data transfer latencies and decreases overall bandwidth. This mode can be very useful in NUMA optimized workloads that largely only utilize QPI for snoops and their responses. Use edge detect to count the number of instances when the QPI link entered L0p. Link power states are per link and per direction, so for example the Tx direction could be in one state while Rx was in another.

unc_q_txl0_power_cycles -- uncore interconnect: Cycles in L0 (event=0xc)
  Number of QPI qfclk cycles spent in L0 power mode in the Link Layer. L0 is the default mode which provides the highest performance with the most power. Use edge detect to count the number of instances that the link entered L0. Link power states are per link and per direction, so for example the Tx direction could be in one state while Rx was in another. The phy layer sometimes leaves L0 for training, which will not be captured by this event.

unc_q_txl_bypassed -- uncore interconnect: Tx Flit Buffer Bypassed (event=5)
  Counts the number of times that an incoming flit was able to bypass the Tx flit buffer and pass directly out the QPI Link. Generally, when data is transmitted across QPI, it will bypass the TxQ and pass directly to the link. However, the TxQ will be used with L0p and when LLR occurs, increasing latency to transfer out to the link.
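Residency in each power state follows from dividing these cycle counts by total link cycles over the same window. A small sketch, with all counter values as hypothetical placeholders (the total-cycles figure is assumed to come from the link's clock event sampled over the same interval):

    # Fraction of link time spent in each Tx power state. Inputs are placeholders.
    link_cycles = 10_000_000  # total qfclk cycles in the window (assumed clock count)
    l0_cycles   =  7_500_000  # unc_q_txl0_power_cycles (placeholder)
    l0p_cycles  =  2_000_000  # unc_q_txl0p_power_cycles (placeholder)

    print(f"L0  residency: {l0_cycles / link_cycles:.1%}")
    print(f"L0p residency: {l0p_cycles / link_cycles:.1%}")
    # Any remainder was spent in other states (e.g. L1 or retraining).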
unc_q_txl_crc_no_credits.* -- uncore interconnect: Cycles Stalled with no LLR Credits (event=2)
  Number of cycles when the Tx side ran out of Link Layer Retry credits, causing the Tx to stall:
    unc_q_txl_crc_no_credits.almost_full  umask=2  LLR is almost full: we block some but not all packets
    unc_q_txl_crc_no_credits.full         umask=1  LLR is totally full: we are not allowed to send any packets

unc_q_txl_cycles_ne -- uncore interconnect: Tx Flit Buffer Cycles not Empty (event=6)
  Counts the number of cycles when the TxQ is not empty. Generally, when data is transmitted across QPI, it will bypass the TxQ and pass directly to the link. However, the TxQ will be used with L0p and when LLR occurs, increasing latency to transfer out to the link.

unc_q_txl_flits_g0.* -- uncore interconnect: Flits Transferred - Group 0 (event=0)
  Counts the number of flits transmitted across the QPI Link. It includes filters for Idle, protocol, and Data Flits. Each flit is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four fits, each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI speed (for example, 8.0 GT/s), the transfers here refer to fits. Therefore, in L0, the system will transfer 1 flit at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as data bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual data and an additional 16 bits of other information. To calculate data bandwidth, one should therefore do: data flits * 8B / time (for L0), or 4B instead of 8B for L0p.
    unc_q_txl_flits_g0.data      umask=2  Data Tx Flits: number of data flits transmitted over QPI. Each flit contains 64b of data. This includes both DRS and NCB data flits (coherent and non-coherent). This can be used to calculate the data bandwidth of the QPI link. One can get a good picture of the QPI-link characteristics by evaluating the protocol flits, data flits, and idle/null flits. This does not include the header flits that go in data packets.
    unc_q_txl_flits_g0.non_data  umask=4  Non-Data protocol Tx Flits: number of non-NULL non-data flits transmitted across QPI. This basically tracks the protocol overhead on the QPI link. This includes the header flits for data packets.
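The description gives both formulas: raw link bandwidth is flits * 80b / time, and data bandwidth is data flits * 8B / time in L0 (4B in L0p). A sketch applying them, with the flit counts and the sample interval as hypothetical placeholders:

    # QPI bandwidth from flit counts, per the formulas in the description above.
    total_flits = 50_000_000  # all flits sent in the window (placeholder)
    data_flits  = 32_000_000  # unc_q_txl_flits_g0.data (placeholder)
    seconds     = 1.0         # sample interval (placeholder)

    link_bw = total_flits * 80 / seconds  # bits/s on the link, payload or not
    data_bw = data_flits * 8 / seconds    # payload bytes/s, assuming L0 (8B/flit)

    print(f"link bandwidth: {link_bw / 1e9:.2f} Gb/s")
    print(f"data bandwidth: {data_bw / 2**30:.2f} GiB/s")

    # Cross-check from the 64B-cacheline example: 9 flits of 80 bits move 64 bytes,
    # so the payload efficiency of such a transfer is 64 / 90 = ~71%.
    print(f"cacheline payload efficiency: {64 / 90:.0%}")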
unc_q_txl_flits_g1.* -- uncore interconnect: Flits Transferred - Group 1 (event=0)
  Counts the number of flits transmitted across the QPI Link. This is one of three groups that allow us to track flits. It includes filters for SNP, HOM, and DRS message classes. Each flit is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four fits, each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI speed (for example, 8.0 GT/s), the transfers here refer to fits. Therefore, in L0, the system will transfer 1 flit at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as data bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual data and an additional 16 bits of other information. To calculate data bandwidth, one should therefore do: data flits * 8B / time.
    unc_q_txl_flits_g1.drs          umask=0x18  DRS Flits (both Header and Data): total number of flits transmitted over QPI on the DRS (Data Response) channel. DRS flits are used to transmit data with coherency.
    unc_q_txl_flits_g1.drs_data     umask=8     DRS Data Flits: total number of data flits transmitted over QPI on the DRS channel. This does not count data flits transmitted over the NCB channel, which transmits non-coherent data. This includes only the data flits (not the header).
    unc_q_txl_flits_g1.drs_nondata  umask=0x10  DRS Header Flits: total number of protocol flits transmitted over QPI on the DRS channel. This includes only the header flits (not the data), and includes extended headers.
    unc_q_txl_flits_g1.hom          umask=6     HOM Flits: number of flits transmitted over QPI on the home channel.
    unc_q_txl_flits_g1.hom_nonreq   umask=4     HOM Non-Request Flits: number of non-request flits transmitted over QPI on the home channel. These are most commonly snoop responses, and this event can be used as a proxy for that.
    unc_q_txl_flits_g1.hom_req      umask=2     HOM Request Flits: number of data requests transmitted over QPI on the home channel. This basically counts the number of remote memory requests transmitted over QPI. In conjunction with the local read count in the Home Agent, one can calculate the number of LLC Misses.
    unc_q_txl_flits_g1.snp          umask=1     SNP Flits: number of snoop request flits transmitted over QPI. These requests are contained in the snoop channel. This does not include snoop responses, which are transmitted on the home channel.
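Per the hom_req description, remote memory requests seen here combine with the Home Agent's local read count to give an LLC miss total. A hedged sketch of that arithmetic; both input values, and the reading of "in conjunction with" as a simple sum, are assumptions for illustration:

    # Estimate LLC misses as local Home Agent reads plus remote requests that
    # arrived over the QPI home channel. Both counts are hypothetical placeholders.
    remote_hom_requests = 4_000_000  # unc_q_txl_flits_g1.hom_req (placeholder)
    local_ha_reads      = 9_000_000  # local read count from the Home Agent (placeholder)

    print(f"estimated LLC misses: {remote_hom_requests + local_ha_reads:,}")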
unc_q_txl_flits_g2.* -- uncore interconnect: Flits Transferred - Group 2 (event=1)
  Counts the number of flits transmitted across the QPI Link. This is one of three groups that allow us to track flits. It includes filters for NDR, NCB, and NCS message classes. The flit/fit makeup and the bandwidth formulas are the same as described for Groups 0 and 1 above.
    unc_q_txl_flits_g2.ncb          umask=0xc   Non-Coherent Bypass Tx Flits: number of Non-Coherent Bypass flits. These packets are generally used to transmit non-coherent data across QPI.
    unc_q_txl_flits_g2.ncb_data     umask=4     Non-Coherent data Tx Flits: number of Non-Coherent Bypass data flits. This does not include a count of the DRS (coherent) data flits. This only counts the data flits, not the NCB headers.
    unc_q_txl_flits_g2.ncb_nondata  umask=8     Non-Coherent non-data Tx Flits: number of Non-Coherent Bypass non-data flits; the flits counted here are for headers and other non-data flits. This includes extended headers.
    unc_q_txl_flits_g2.ncs          umask=0x10  Non-Coherent standard Tx Flits: number of NCS (non-coherent standard) flits transmitted over QPI. This includes extended headers.
    unc_q_txl_flits_g2.ndr_ad       umask=1     Non-Data Response Tx Flits - AD: total number of flits transmitted over the NDR (Non-Data Response) channel, which is used to send a variety of protocol flits including grants and completions. This is only for NDR packets to the local socket which use the AK ring.
    unc_q_txl_flits_g2.ndr_ak       umask=2     Non-Data Response Tx Flits - AK: the same NDR channel, but only for NDR packets destined for Route-thru to a remote socket.

unc_q_txl_inserts -- uncore interconnect: Tx Flit Buffer Allocations (event=4)
  Number of allocations into the QPI Tx Flit Buffer. Generally, when data is transmitted across QPI, it will bypass the TxQ and pass directly to the link. However, the TxQ will be used with L0p and when LLR occurs, increasing latency to transfer out to the link. This event can be used in conjunction with the Flit Buffer Occupancy event in order to calculate the average flit buffer lifetime.

unc_q_txl_occupancy -- uncore interconnect: Tx Flit Buffer Occupancy (event=7)
  Accumulates the number of flits in the TxQ. Generally, when data is transmitted across QPI, it will bypass the TxQ and pass directly to the link. However, the TxQ will be used with L0p and when LLR occurs, increasing latency to transfer out to the link. This can be used with the cycles not empty event to track average occupancy, or the allocations event to track average lifetime in the TxQ.

unc_q_txr_ad_hom_credit_acquired.* -- uncore interconnect: R3QPI Egress Credit Occupancy - HOM (event=0x26; vn0 umask=1, vn1 umask=2)
  Number of link layer credits into the R3 (for transactions across the BGF) acquired each cycle. Flow Control FIFO for Home messages on AD.

unc_q_txr_ad_hom_credit_occupancy.* -- uncore interconnect: R3QPI Egress Credit Occupancy - AD HOM (event=0x22; vn0 umask=1, vn1 umask=2)
  Occupancy event that tracks the number of link layer credits into the R3 (for transactions across the BGF) available in each cycle. Flow Control FIFO for HOM messages on AD.
unc_q_txr_ad_ndr_credit_acquired.* -- uncore interconnect: R3QPI Egress Credit Occupancy - AD NDR (event=0x28; vn0 umask=1, vn1 umask=2)
  Number of link layer credits into the R3 (for transactions across the BGF) acquired each cycle. Flow Control FIFO for NDR messages on AD.
unc_q_txr_ad_ndr_credit_occupancy.* -- uncore interconnect: R3QPI Egress Credit Occupancy - AD NDR (event=0x24; vn0 umask=1, vn1 umask=2)
  Occupancy event that tracks the number of link layer credits into the R3 (for transactions across the BGF) available in each cycle. Flow Control FIFO for NDR messages on AD.
unc_q_txr_ad_snp_credit_acquired.* -- uncore interconnect: R3QPI Egress Credit Occupancy - SNP (event=0x27; vn0 umask=1, vn1 umask=2)
  Number of link layer credits into the R3 acquired each cycle. Flow Control FIFO for Snoop messages on AD.
unc_q_txr_ad_snp_credit_occupancy.* -- uncore interconnect: R3QPI Egress Credit Occupancy - AD SNP (event=0x23; vn0 umask=1, vn1 umask=2)
  Occupancy event that tracks the number of link layer credits into the R3 available in each cycle. Flow Control FIFO for Snoop messages on AD.
unc_q_txr_ak_ndr_credit_acquired -- uncore interconnect: R3QPI Egress Credit Occupancy - AK NDR (event=0x29)
  Number of credits into the R3 acquired each cycle. Local NDR message class to AK Egress.
unc_q_txr_ak_ndr_credit_occupancy -- uncore interconnect: R3QPI Egress Credit Occupancy - AK NDR (event=0x25)
  Occupancy event that tracks the number of credits into the R3 available in each cycle. Local NDR message class to AK Egress.
unc_q_txr_bl_drs_credit_acquired.* -- uncore interconnect: R3QPI Egress Credit Occupancy - DRS (event=0x2a; vn0 umask=1, vn1 umask=2, vn_shr umask=4)
  Number of credits into the R3 acquired each cycle. DRS message class to BL Egress.
unc_q_txr_bl_drs_credit_occupancy.* -- uncore interconnect: R3QPI Egress Credit Occupancy - BL DRS (event=0x1f; vn0 umask=1, vn1 umask=2, vn_shr umask=4)
  Occupancy event that tracks the number of credits into the R3 available in each cycle. DRS message class to BL Egress.
unc_q_txr_bl_ncb_credit_acquired.* -- uncore interconnect: R3QPI Egress Credit Occupancy - NCB (event=0x2b; vn0 umask=1, vn1 umask=2)
  Number of credits into the R3 acquired each cycle. NCB message class to BL Egress.
unc_q_txr_bl_ncb_credit_occupancy.* -- uncore interconnect: R3QPI Egress Credit Occupancy - BL NCB (event=0x20; vn0 umask=1, vn1 umask=2)
  Occupancy event that tracks the number of credits into the R3 available in each cycle. NCB message class to BL Egress.
unc_q_txr_bl_ncs_credit_acquired.* -- uncore interconnect: R3QPI Egress Credit Occupancy - NCS (event=0x2c; vn0 umask=1, vn1 umask=2)
  Number of credits into the R3 acquired each cycle. NCS message class to BL Egress.
unc_q_txr_bl_ncs_credit_occupancy.* -- uncore interconnect: R3QPI Egress Credit Occupancy - BL NCS (event=0x21; vn0 umask=1, vn1 umask=2)
  Occupancy event that tracks the number of credits into the R3 available in each cycle. NCS message class to BL Egress.

unc_q_vna_credit_returns -- uncore interconnect: VNA Credits Returned (event=0x1c)
  Number of VNA credits returned.
unc_q_vna_credit_return_occupancy -- uncore interconnect: VNA Credits Pending Return - Occupancy (event=0x1b)
  Number of VNA credits in the Rx side that are waiting to be returned back across the link.

uncore_r3qpi (the following events belong to the R3QPI unit)

unc_r3_clockticks -- uncore interconnect: Number of uclks in domain (event=1)
  Counts the number of uclks in the QPI uclk domain. This could be slightly different than the count in the Ubox because of enable/freeze delays. However, because the QPI Agent is close to the Ubox, they generally should not diverge by more than a handful of cycles.
unc_r3_c_hi_ad_credits_empty.* -- uncore interconnect: CBox AD Credits Empty (event=0x1f)
  No credits available to send to Cbox on the AD Ring (covers higher CBoxes):
    .cbo8 umask=1, .cbo9 umask=2, .cbo10 umask=4, .cbo11 umask=8, .cbo12 umask=0x10, .cbo13 umask=0x20, .cbo14_16 umask=0x40 (Cbox 14&16), .cbo_15_17 umask=0x80 (Cbox 15&17)

unc_r3_c_lo_ad_credits_empty.* -- uncore interconnect: CBox AD Credits Empty (event=0x22)
  No credits available to send to Cbox on the AD Ring (covers lower CBoxes):
    .cbo0 umask=1, .cbo1 umask=2, .cbo2 umask=4, .cbo3 umask=8, .cbo4 umask=0x10, .cbo5 umask=0x20, .cbo6 umask=0x40, .cbo7 umask=0x80

unc_r3_ha_r2_bl_credits_empty.* -- uncore interconnect: HA/R2 AD Credits Empty (event=0x2d)
  No credits available to send to either HA or R2 on the BL Ring:
    .ha0 umask=1 (HA0), .ha1 umask=2 (HA1), .r2_ncb umask=4 (R2 NCB Messages), .r2_ncs umask=8 (R2 NCS Messages)

unc_r3_iot_backpressure.* -- uncore interconnect: IOT Backpressure (event=0xb)
    .sat umask=1, .hub umask=2

unc_r3_iot_cts_hi.* -- uncore interconnect: IOT Common Trigger Sequencer - Hi (event=0xd)
  Debug Mask/Match Tie-Ins:
    .cts2 umask=1, .cts3 umask=2

unc_r3_iot_cts_lo.* -- uncore interconnect: IOT Common Trigger Sequencer - Lo (event=0xc)
  Debug Mask/Match Tie-Ins:
    .cts0 umask=1, .cts1 umask=2

unc_r3_qpi0_ad_credits_empty.* -- uncore interconnect: QPI0 AD Credits Empty (event=0x20)
  No credits available to send to QPI0 on the AD Ring:
    .vna umask=1, .vn0_hom umask=2, .vn0_snp umask=4, .vn0_ndr umask=8, .vn1_hom umask=0x10, .vn1_snp umask=0x20, .vn1_ndr umask=0x40

unc_r3_qpi0_bl_credits_empty.* -- uncore interconnect: QPI0 BL Credits Empty (event=0x21)
  No credits available to send to QPI0 on the BL Ring:
    .vna umask=1, .vn1_hom umask=0x10, .vn1_snp umask=0x20, .vn1_ndr umask=0x40

unc_r3_qpi1_ad_credits_empty.* -- uncore interconnect: QPI1 AD Credits Empty (event=0x2e)
  No credits available to send to QPI1 on the AD Ring:
    .vna umask=1, .vn1_hom umask=0x10, .vn1_snp umask=0x20, .vn1_ndr umask=0x40

unc_r3_qpi1_bl_credits_empty.* -- uncore interconnect: QPI1 BL Credits Empty (event=0x2f)
  No credits available to send to QPI1 on the BL Ring:
    .vna umask=1, .vn0_hom umask=2, .vn0_snp umask=4, .vn0_ndr umask=8, .vn1_hom umask=0x10, .vn1_snp umask=0x20, .vn1_ndr umask=0x40
unc_r3_ring_ad_used.* -- uncore interconnect: R3 AD Ring in Use (event=7)
  Counts the number of cycles that the AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop:
    .all umask=0xf, .cw umask=3 (Clockwise), .ccw umask=0xc (Counterclockwise), .cw_even umask=1, .cw_odd umask=2, .ccw_even umask=4, .ccw_odd umask=8 (the even/odd variants filter for the named direction and ring polarity)

unc_r3_ring_ak_used.* -- uncore interconnect: R3 AK Ring in Use (event=8)
  Counts the number of cycles that the AK ring is being used at this ring stop, with the same inclusion rules and direction/polarity filters as the AD ring event:
    .all umask=0xf, .cw umask=3, .ccw umask=0xc, .cw_even umask=1, .cw_odd umask=2, .ccw_even umask=4, .ccw_odd umask=8

unc_r3_ring_bl_used.* -- uncore interconnect: R3 BL Ring in Use (event=9)
  Counts the number of cycles that the BL ring is being used at this ring stop, with the same inclusion rules and direction/polarity filters as the AD ring event:
    .all umask=0xf, .cw umask=3, .ccw umask=0xc, .cw_even umask=1, .cw_odd umask=2, .ccw_even umask=4, .ccw_odd umask=8
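Dividing the in-use cycle counts by unc_r3_clockticks over the same window gives a per-ring utilization figure. A sketch with hypothetical counts; note the assumption that each direction carries at most one flit per cycle past this stop, so only the per-direction ratios are expected to stay within 0-100%:

    # R3 ring utilization at one ring stop. All counter values are placeholders.
    clockticks = 10_000_000  # unc_r3_clockticks (placeholder)
    ad_cw      =  2_900_000  # unc_r3_ring_ad_used.cw  (assumed <= 1 flit/cycle)
    ad_ccw     =  3_300_000  # unc_r3_ring_ad_used.ccw (assumed <= 1 flit/cycle)

    print(f"AD clockwise utilization:        {ad_cw / clockticks:.1%}")
    print(f"AD counterclockwise utilization: {ad_ccw / clockticks:.1%}")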
unc_r3_ring_iv_used.* -- uncore interconnect: R3 IV Ring in Use (event=0xa)
  Counts the number of cycles that the IV ring is being used at this ring stop. This includes when packets are passing by and when packets are being sent, but does not include when packets are being sunk into the ring stop:
    .any umask=0xf, .cw umask=3 (Clockwise)

unc_r3_ring_sink_starved.ak -- uncore interconnect: Ring Stop Starved; AK (event=0xe,umask=2)
  Number of cycles the ringstop is in starvation (per ring).

unc_r3_rxr_cycles_ne.* -- uncore interconnect: Ingress Cycles Not Empty (event=0x10)
  Counts the number of cycles when the QPI Ingress is not empty. This tracks one of the three rings that are used by the QPI agent. This can be used in conjunction with the QPI Ingress Occupancy Accumulator event in order to calculate average queue occupancy. Multiple ingress buffers can be tracked at a given time using multiple counters:
    .hom umask=1 (HOM Ingress Queue), .snp umask=2 (SNP Ingress Queue), .ndr umask=4 (NDR Ingress Queue)

unc_r3_rxr_cycles_ne_vn1.* -- uncore interconnect: VN1 Ingress Cycles Not Empty (event=0x14)
  Counts the number of cycles when the QPI VN1 Ingress is not empty. This tracks one of the three rings that are used by the QPI agent. This can be used in conjunction with the QPI VN1 Ingress Occupancy Accumulator event in order to calculate average queue occupancy. Multiple ingress buffers can be tracked at a given time using multiple counters:
    .hom umask=1, .snp umask=2, .ndr umask=4, .drs umask=8, .ncb umask=0x10, .ncs umask=0x20

unc_r3_rxr_inserts.* -- uncore interconnect: Ingress Allocations (event=0x11)
  Counts the number of allocations into the QPI Ingress. This tracks one of the three rings that are used by the QPI agent. This can be used in conjunction with the QPI Ingress Occupancy Accumulator event in order to calculate average queue latency. Multiple ingress buffers can be tracked at a given time using multiple counters:
    .hom umask=1, .snp umask=2, .ndr umask=4, .drs umask=8, .ncb umask=0x10, .ncs umask=0x20
unc_r3_rxr_inserts.* -- Ingress Allocations (uncore interconnect)
  event=0x11; variants: .hom umask=0x1, .snp umask=0x2, .ndr umask=0x4, .drs umask=0x8, .ncb umask=0x10, .ncs umask=0x20
  Counts the number of allocations into the QPI Ingress. This tracks one of the three rings that are used by the QPI agent. It can be used in conjunction with the QPI Ingress Occupancy Accumulator event to calculate average queue latency. Multiple ingress buffers can be tracked at a given time using multiple counters.

unc_r3_rxr_inserts_vn1.* -- VN1 Ingress Allocations (uncore interconnect)
  event=0x15; variants: .hom umask=0x1, .snp umask=0x2, .ndr umask=0x4, .drs umask=0x8, .ncb umask=0x10, .ncs umask=0x20
  Counts the number of allocations into the QPI VN1 Ingress; otherwise identical to the event above, pairing with the QPI VN1 Ingress Occupancy Accumulator event to calculate average queue latency.
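As the descriptions note, dividing an occupancy accumulator by the matching allocation count yields the average queuing latency; this is Little's Law applied per queue. A sketch under that reading, with made-up counts:

def avg_queue_latency_cycles(occupancy_accum: int, inserts: int) -> float:
    """Little's Law: average residency (in uncore cycles) of a queue entry.

    occupancy_accum -- per-cycle sum of queue depth over the sample window
    inserts         -- allocations into the queue over the same window
    """
    return occupancy_accum / inserts if inserts else 0.0

# e.g. a HOM ingress queue accumulating 9_000_000 entry-cycles across
# 150_000 allocations -> 60 cycles average residency per entry.
print(avg_queue_latency_cycles(9_000_000, 150_000))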
unc_r3_rxr_occupancy_vn1.* -- VN1 Ingress Occupancy Accumulator (uncore interconnect)
  event=0x13; variants: .hom umask=0x1, .snp umask=0x2, .ndr umask=0x4, .drs umask=0x8, .ncb umask=0x10, .ncs umask=0x20
  Accumulates the occupancy of a given QPI VN1 Ingress queue in each cycle. This tracks one of the three ring Ingress buffers. It can be used with the QPI VN1 Ingress Not Empty event to calculate average occupancy, or with the QPI VN1 Ingress Allocations event to calculate average queuing latency.
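Occupancy and inserts for one VN1 queue can be collected in a single perf run and combined directly. A sketch using perf's CSV mode; the PMU instance name uncore_r3qpi_0 is an assumption and should be checked under /sys/bus/event_source/devices on the target machine:

import subprocess

# Raw encodings transcribed from the VN1 tables above (DRS queue).
EVENTS = {
    "occupancy": "uncore_r3qpi_0/event=0x13,umask=0x8/",  # PMU name: assumption
    "inserts":   "uncore_r3qpi_0/event=0x15,umask=0x8/",
}

def sample_counts(seconds: int = 1) -> dict:
    """Run a system-wide `perf stat` and return {label: count}.

    Uses perf's machine-readable CSV mode (-x,); counts land on stderr,
    one line per event, in the order the events were requested.
    """
    cmd = ["perf", "stat", "-a", "-x", ",",
           "-e", ",".join(EVENTS.values()), "sleep", str(seconds)]
    out = subprocess.run(cmd, capture_output=True, text=True).stderr
    counts = {}
    for label, line in zip(EVENTS, out.strip().splitlines()):
        field = line.split(",")[0]
        counts[label] = int(field) if field.isdigit() else 0
    return counts

if __name__ == "__main__":
    c = sample_counts()
    if c.get("inserts"):
        print("avg DRS VN1 ingress latency:",
              c["occupancy"] / c["inserts"], "uncore cycles")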
This can be used with the QPI VN1 Ingress Not Empty event to calculate average occupancy, or with the QPI VN1 Ingress Allocations event to calculate average queuing latency. SNP Ingress Queue.

unc_r3_sbo0_credits_acquired.{ad,bl} -- SBo0 Credits Acquired; For AD/BL Ring (uncore interconnect)
  event=0x28; .ad umask=0x1, .bl umask=0x2
  Number of Sbo 0 credits acquired in a given cycle, per ring.

unc_r3_sbo0_credit_occupancy.{ad,bl} -- SBo0 Credits Occupancy; For AD/BL Ring (uncore interconnect)
  event=0x2a; .ad umask=0x1, .bl umask=0x2
  Number of Sbo 0 credits in use in a given cycle, per ring.

unc_r3_sbo1_credits_acquired.{ad,bl} -- SBo1 Credits Acquired; For AD/BL Ring (uncore interconnect)
  event=0x29; .ad umask=0x1, .bl umask=0x2
  Number of Sbo 1 credits acquired in a given cycle, per ring.

unc_r3_sbo1_credit_occupancy.{ad,bl} -- SBo1 Credits Occupancy; For AD/BL Ring (uncore interconnect)
  event=0x2b; .ad umask=0x1, .bl umask=0x2
  Number of Sbo 1 credits in use in a given cycle, per ring.

unc_r3_stall_no_sbo_credit.* -- Stall on No Sbo Credits (uncore interconnect)
  event=0x2c; .sbo0_ad umask=0x1, .sbo1_ad umask=0x2, .sbo0_bl umask=0x4, .sbo1_bl umask=0x8
  Number of cycles Egress is stalled waiting for an Sbo credit to become available. Per Sbo, per ring.

unc_r3_txr_nack.* -- Egress NACK (uncore interconnect)
  event=0x26; .dn_ad umask=0x1 (AD CounterClockwise Egress Queue), .dn_bl umask=0x2 (BL CounterClockwise Egress Queue), .dn_ak umask=0x4 (AK CounterClockwise Egress Queue), .up_ad umask=0x8 (AD Clockwise Egress Queue), .up_bl umask=0x10 (BL Clockwise Egress Queue), .up_ak umask=0x20 (AK Clockwise Egress Queue)
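The ACQUIRED/OCCUPANCY pairs above support the same accumulator arithmetic: dividing in-use credit-cycles by acquisitions estimates how long a credit is held on average. A sketch with illustrative values:

def avg_credit_hold_cycles(credit_occupancy: int, credits_acquired: int) -> float:
    """Average number of cycles an Sbo credit stays in use.

    credit_occupancy -- per-cycle sum of credits in use
                        (UNC_R3_SBO0_CREDIT_OCCUPANCY.*, event=0x2a)
    credits_acquired -- credits handed out in the same window
                        (UNC_R3_SBO0_CREDITS_ACQUIRED.*, event=0x28)
    """
    return credit_occupancy / credits_acquired if credits_acquired else 0.0

print(avg_credit_hold_cycles(500_000, 25_000))  # -> 20.0 cycles per credit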
unc_r3_vn0_credits_reject.* -- VN0 Credit Acquisition Failed (uncore interconnect)
  event=0x37; variants: .hom umask=0x1, .snp umask=0x2, .ndr umask=0x4, .drs umask=0x8, .ncb umask=0x10, .ncs umask=0x20
  Number of times a request failed to acquire a VN0 credit for the given message class. In order for a request to be transferred across QPI, it must be guaranteed a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN0. VNA is a shared pool used to achieve high performance, while the VN0 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit and fall back to VN0 if that fails, so this event counts requests that failed to acquire either a VNA or VN0 credit and were delayed. This should generally be a rare situation.
  Message-class filters: HOM is generally used to send requests, request responses, and snoop responses. SNP is used for outgoing snoops (note that snoop responses flow on the HOM message class). NDR packets are used to transmit a variety of protocol flits, including grants and completions (CMP). DRS is generally used to transmit data with coherency; for example, remote reads and writes, or cache-to-cache transfers, transmit their data using DRS. NCB (Non-Coherent Broadcast) is generally used to transmit data without coherency, for example non-coherent read data returns. NCS (Non-Coherent Standard) is commonly used for ?
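Since these rejects "should generally be a rare situation", a simple sanity metric is the reject count relative to successful VN0 credit uses for the same message class (event=0x36, documented next). One plausible reading, sketched in Python:

def vn0_reject_ratio(rejects: int, vn0_used: int) -> float:
    """Fraction of VN0 credit attempts for one message class that failed.

    rejects  -- UNC_R3_VN0_CREDITS_REJECT.<class> count (event=0x37)
    vn0_used -- UNC_R3_VN0_CREDITS_USED.<class> count   (event=0x36)
    Treating rejects + uses as "attempts" is our assumption, not a
    definition from the event tables.
    """
    attempts = rejects + vn0_used
    return rejects / attempts if attempts else 0.0

# A ratio creeping above a few percent would suggest sustained
# back-pressure on the remote socket's flit buffers.
print(f"{vn0_reject_ratio(120, 48_000):.4%}")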
unc_r3_vn0_credits_used.* -- VN0 Credit Used (uncore interconnect)
  event=0x36; variants: .hom umask=0x1, .snp umask=0x2, .ndr umask=0x4, .drs umask=0x8, .ncb umask=0x10, .ncs umask=0x20 (same message-class filters as above)
  Number of times a VN0 credit was used on the given message channel. The credit pools are as described above: requests first attempt to acquire a VNA credit and fall back to VN0 if that fails; this event counts the number of times a VN0 credit was used. Note that a single VN0 credit holds access to potentially multiple flit buffers. For example, a transfer that uses VNA could use 9 flit buffers and in that case uses 9 credits, whereas a transfer on VN0 counts only a single credit even though it may use multiple buffers.

unc_r3_vn1_credits_reject.* -- VN1 Credit Acquisition Failed (uncore interconnect)
  event=0x39; variants: .hom umask=0x1, .snp umask=0x2, .ndr umask=0x4, .drs umask=0x8, .ncb umask=0x10, .ncs umask=0x20 (same message-class filters as above)
  Number of times a request failed to acquire a VN1 credit. Structurally the same as the VN0 reject event: requests first attempt a VNA credit and fall back to VN1 on failure, so this counts requests that were delayed after failing to acquire either a VNA or VN1 credit. This should generally be a rare situation.
unc_r3_vn1_credits_used.* -- VN1 Credit Used (uncore interconnect)
  event=0x38; variants: .hom umask=0x1, .snp umask=0x2, .ndr umask=0x4, .drs umask=0x8, .ncb umask=0x10, .ncs umask=0x20 (same message-class filters as above)
  Number of times a VN1 credit was used on the given message channel. As with VN0, a single VN1 credit holds access to potentially multiple flit buffers: a transfer that uses VNA could use 9 flit buffers and therefore 9 credits, whereas a transfer on VN1 counts only a single credit even though it may use multiple buffers.

unc_r3_vna_credits_acquired.{ad,bl} -- VNA credit Acquisitions (uncore interconnect)
  event=0x33; .ad umask=0x1 (for the AD ring), .bl umask=0x4 (for the BL ring)
  Number of QPI VNA Credit acquisitions. This event can be used in conjunction with the VNA In-Use Accumulator to calculate the average lifetime of a credit holder. VNA credits are used by all message classes to communicate across QPI. If a packet is unable to acquire credits, it then attempts to use credits from the VN0 pool. Note that a single packet may require multiple flit buffers (i.e. when data is being transferred), so this event increments by the number of credits acquired in each cycle. Filtering based on message class is not provided; one can count the number of packets transferred in a given message class using a qfclk event.
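Because requests try VNA first and fall back to VN0, comparing VNA acquisitions with VN0 credit uses gives a rough view of how often the shared pool was exhausted. A sketch; as the text above cautions, VNA counts credits rather than packets, so this is an approximation of our own, not a metric defined by the tables:

def vn0_fallback_fraction(vna_acquired: int, vn0_used: int) -> float:
    """Approximate share of credit grants that had to come from VN0.

    vna_acquired -- UNC_R3_VNA_CREDITS_ACQUIRED.* (counts credits, not packets)
    vn0_used     -- UNC_R3_VN0_CREDITS_USED.*     (one credit per transfer)
    """
    total = vna_acquired + vn0_used
    return vn0_used / total if total else 0.0

print(f"{vn0_fallback_fraction(2_000_000, 15_000):.4%}")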
unc_r3_vna_credits_reject.* -- VNA Credit Reject (uncore interconnect)
  event=0x34; variants: .hom umask=0x1, .snp umask=0x2, .ndr umask=0x4, .drs umask=0x8, .ncb umask=0x10, .ncs umask=0x20 (same message-class filters as above)
  Number of attempted VNA credit acquisitions that were rejected because the VNA credit pool was full (or almost full). It is possible to filter this event by message class. Some packets use more than one flit buffer and therefore must acquire multiple credits, so a reject can occur even when the VNA credits are not fully used up. The VNA pool is generally used to provide the bulk of the QPI bandwidth (as opposed to the VN0 pool, which is used to guarantee forward progress). VNA credits can run out if the flit buffer on the receiving side starts to queue up substantially; this can happen if the rest of the uncore is unable to drain the requests fast enough.

uncore_sbox

unc_s_bounce_control -- Bounce Control (uncore interconnect)
  event=0xa

unc_s_clockticks -- Uncore Clocks (uncore interconnect)
  event=0x0

unc_s_fast_asserted -- FaST wire asserted (uncore interconnect)
  event=0x9
  Counts the number of cycles either the local or incoming distress signals are asserted. Incoming distress includes up, dn and across.
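UNC_S_CLOCKTICKS is the natural denominator for the SBox cycle-counting events that follow: dividing in-use cycles by clockticks turns a raw count into a utilization figure. A sketch with illustrative counts:

def ring_utilization(ring_used_cycles: int, clockticks: int) -> float:
    """Fraction of uncore cycles a ring stop saw traffic.

    ring_used_cycles -- e.g. UNC_S_RING_AD_USED.ALL (event=0x1b,umask=0xf)
    clockticks       -- UNC_S_CLOCKTICKS (event=0x0), same sample window
    """
    return ring_used_cycles / clockticks if clockticks else 0.0

print(f"AD ring busy {ring_utilization(1_300_000, 2_000_000):.1%} of cycles")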
unc_s_ring_ad_used.* -- AD Ring In Use (uncore interconnect)
  event=0x1b; variants: .up_even umask=0x1, .up_odd umask=0x2, .up umask=0x3, .down_even umask=0x4, .down_odd umask=0x8, .down umask=0xc, .all umask=0xf
  Counts the number of cycles that the AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sent, but does not include when packets are being sunk into the ring stop. We really have two rings in BDX -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring; on the right side of the ring this is reversed. The first half of the CBos are on the left side of the ring and the second half on the right side, so (for example) in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring. The even/odd variants filter for the Even and Odd ring polarities.
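The up/down and even/odd variants of these ring events differ only in umask, so the perf event strings can be generated rather than hand-written. A sketch covering the AD family above and the AK/BL families below; the PMU name uncore_sbox_0 is an assumption:

# umask values transcribed from the AD/AK/BL ring tables in this section.
RING_UMASKS = {
    "up_even": 0x1, "up_odd": 0x2, "up": 0x3,
    "down_even": 0x4, "down_odd": 0x8, "down": 0xc,
    "all": 0xf,
}
RING_EVENTS = {"ad": 0x1b, "ak": 0x1c, "bl": 0x1d}

def sbox_ring_event(ring: str, variant: str, pmu: str = "uncore_sbox_0") -> str:
    """Build a perf raw-event string, e.g.
    uncore_sbox_0/event=0x1c,umask=0x4/ for the AK down+even variant."""
    return f"{pmu}/event={RING_EVENTS[ring]:#x},umask={RING_UMASKS[variant]:#x}/"

print(sbox_ring_event("ak", "down_even"))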
unc_s_ring_ak_used.* -- AK Ring In Use (uncore interconnect)
  event=0x1c; variants: .up_even umask=0x1, .up_odd umask=0x2, .up umask=0x3, .down_even umask=0x4, .down_odd umask=0x8, .down umask=0xc, .all umask=0xf
  Counts the number of cycles that the AK ring is being used at this ring stop, with the same inclusion rules, two-ring topology notes, and even/odd polarity filters as the AD ring event above.

unc_s_ring_bl_used.* -- BL Ring in Use (uncore interconnect)
  event=0x1d; variants: .up_even umask=0x1, .up_odd umask=0x2, .up umask=0x3, .down_even umask=0x4, .down_odd umask=0x8, .down umask=0xc, .all umask=0xf
  Counts the number of cycles that the BL ring is being used at this ring stop, with the same inclusion rules, two-ring topology notes, and even/odd polarity filters as the AD ring event above.

unc_s_ring_bounces.* -- Number of LLC responses that bounced on the Ring (uncore interconnect)
  event=0x5; .ad_cache umask=0x1, .ak_core umask=0x2 (Acknowledgements to core), .bl_core umask=0x4 (Data Responses to core), .iv_core umask=0x8 (Snoops of processor's cache)
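The RING_BOUNCES subevents are plain occurrence counts; relating them to clockticks over the same window gives a bounce rate per message type. A sketch:

def bounce_rate(bounces: int, clockticks: int) -> float:
    """Bounced LLC responses per thousand uncore cycles.

    bounces    -- one of UNC_S_RING_BOUNCES.* (event=0x5)
    clockticks -- UNC_S_CLOCKTICKS over the same window
    """
    return 1000.0 * bounces / clockticks if clockticks else 0.0

print(f"{bounce_rate(4_200, 2_000_000):.2f} bounces / kcycle")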
There is only 1 IV ring in HSX.  Therefore, if one wants to monitor the Even ring, they should select both UP_EVEN and DN_EVEN.  To monitor the Odd ring, they should select both UP_ODD and DN_ODD.; Filters any polarityunc_s_ring_iv_used.upuncore interconnectBL Ring in Use; Anyevent=0x1e,umask=301Counts the number of cycles that the BL ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sent, but does not include when packets are being sunk into the ring stop.  There is only 1 IV ring in HSX.  Therefore, if one wants to monitor the Even ring, they should select both UP_EVEN and DN_EVEN.  To monitor the Odd ring, they should select both UP_ODD and DN_ODD.; Filters any polarityunc_s_ring_sink_starved.ad_cacheuncore interconnectUNC_S_RING_SINK_STARVED.AD_CACHEevent=6,umask=101unc_s_ring_sink_starved.ak_coreuncore interconnectUNC_S_RING_SINK_STARVED.AK_COREevent=6,umask=201unc_s_ring_sink_starved.bl_coreuncore interconnectUNC_S_RING_SINK_STARVED.BL_COREevent=6,umask=401unc_s_ring_sink_starved.iv_coreuncore interconnectUNC_S_RING_SINK_STARVED.IV_COREevent=6,umask=801unc_s_rxr_busy_starved.ad_bncuncore interconnectInjection Starvation; AD - Bouncesevent=0x15,umask=201Counts injection starvation.  This starvation is triggered when the Ingress cannot send a transaction onto the ring for a long period of time.  In this case, the Ingress but unable to forward to Egress because a message (credited/bounceable) is  being sentunc_s_rxr_busy_starved.ad_crduncore interconnectInjection Starvation; AD - Creditsevent=0x15,umask=101Counts injection starvation.  This starvation is triggered when the Ingress cannot send a transaction onto the ring for a long period of time.  In this case, the Ingress but unable to forward to Egress because a message (credited/bounceable) is  being sentunc_s_rxr_busy_starved.bl_bncuncore interconnectInjection Starvation; BL - Bouncesevent=0x15,umask=801Counts injection starvation.  This starvation is triggered when the Ingress cannot send a transaction onto the ring for a long period of time.  In this case, the Ingress but unable to forward to Egress because a message (credited/bounceable) is  being sentunc_s_rxr_busy_starved.bl_crduncore interconnectInjection Starvation; BL - Creditsevent=0x15,umask=401Counts injection starvation.  This starvation is triggered when the Ingress cannot send a transaction onto the ring for a long period of time.  In this case, the Ingress but unable to forward to Egress because a message (credited/bounceable) is  being sentunc_s_rxr_bypass.ad_bncuncore interconnectBypass; AD - Bouncesevent=0x12,umask=201Bypass the Sbo Ingressunc_s_rxr_bypass.ad_crduncore interconnectBypass; AD - Creditsevent=0x12,umask=101Bypass the Sbo Ingressunc_s_rxr_bypass.akuncore interconnectBypass; AKevent=0x12,umask=0x1001Bypass the Sbo Ingressunc_s_rxr_bypass.bl_bncuncore interconnectBypass; BL - Bouncesevent=0x12,umask=801Bypass the Sbo Ingressunc_s_rxr_bypass.bl_crduncore interconnectBypass; BL - Creditsevent=0x12,umask=401Bypass the Sbo Ingressunc_s_rxr_bypass.ivuncore interconnectBypass; IVevent=0x12,umask=0x2001Bypass the Sbo Ingressunc_s_rxr_crd_starved.ad_bncuncore interconnectInjection Starvation; AD - Bouncesevent=0x14,umask=201Counts injection starvation.  This starvation is triggered when the Ingress cannot send a transaction onto the ring for a long period of time.  
In this case, the Ingress but unable to forward to Egress due to lack of creditunc_s_rxr_crd_starved.ad_crduncore interconnectInjection Starvation; AD - Creditsevent=0x14,umask=101Counts injection starvation.  This starvation is triggered when the Ingress cannot send a transaction onto the ring for a long period of time.  In this case, the Ingress but unable to forward to Egress due to lack of creditunc_s_rxr_crd_starved.akuncore interconnectInjection Starvation; AKevent=0x14,umask=0x1001Counts injection starvation.  This starvation is triggered when the Ingress cannot send a transaction onto the ring for a long period of time.  In this case, the Ingress but unable to forward to Egress due to lack of creditunc_s_rxr_crd_starved.bl_bncuncore interconnectInjection Starvation; BL - Bouncesevent=0x14,umask=801Counts injection starvation.  This starvation is triggered when the Ingress cannot send a transaction onto the ring for a long period of time.  In this case, the Ingress but unable to forward to Egress due to lack of creditunc_s_rxr_crd_starved.bl_crduncore interconnectInjection Starvation; BL - Creditsevent=0x14,umask=401Counts injection starvation.  This starvation is triggered when the Ingress cannot send a transaction onto the ring for a long period of time.  In this case, the Ingress but unable to forward to Egress due to lack of creditunc_s_rxr_crd_starved.ifvuncore interconnectInjection Starvation; IVF Creditevent=0x14,umask=0x4001Counts injection starvation.  This starvation is triggered when the Ingress cannot send a transaction onto the ring for a long period of time.  In this case, the Ingress but unable to forward to Egress due to lack of creditunc_s_rxr_crd_starved.ivuncore interconnectInjection Starvation; IVevent=0x14,umask=0x2001Counts injection starvation.  This starvation is triggered when the Ingress cannot send a transaction onto the ring for a long period of time.  In this case, the Ingress but unable to forward to Egress due to lack of creditunc_s_rxr_inserts.ad_bncuncore interconnectIngress Allocations; AD - Bouncesevent=0x13,umask=201Number of allocations into the Sbo Ingress  The Ingress is used to queue up requests received from the ringunc_s_rxr_inserts.ad_crduncore interconnectIngress Allocations; AD - Creditsevent=0x13,umask=101Number of allocations into the Sbo Ingress  The Ingress is used to queue up requests received from the ringunc_s_rxr_inserts.akuncore interconnectIngress Allocations; AKevent=0x13,umask=0x1001Number of allocations into the Sbo Ingress  The Ingress is used to queue up requests received from the ringunc_s_rxr_inserts.bl_bncuncore interconnectIngress Allocations; BL - Bouncesevent=0x13,umask=801Number of allocations into the Sbo Ingress  The Ingress is used to queue up requests received from the ringunc_s_rxr_inserts.bl_crduncore interconnectIngress Allocations; BL - Creditsevent=0x13,umask=401Number of allocations into the Sbo Ingress  The Ingress is used to queue up requests received from the ringunc_s_rxr_inserts.ivuncore interconnectIngress Allocations; IVevent=0x13,umask=0x2001Number of allocations into the Sbo Ingress  The Ingress is used to queue up requests received from the ringunc_s_rxr_occupancy.ad_bncuncore interconnectIngress Occupancy; AD - Bouncesevent=0x11,umask=201Occupancy event for the Ingress buffers in the Sbo.  The Ingress is used to queue up requests received from the ringunc_s_rxr_occupancy.ad_crduncore interconnectIngress Occupancy; AD - Creditsevent=0x11,umask=101Occupancy event for the Ingress buffers in the Sbo. 
 The Ingress is used to queue up requests received from the ringunc_s_rxr_occupancy.akuncore interconnectIngress Occupancy; AKevent=0x11,umask=0x1001Occupancy event for the Ingress buffers in the Sbo.  The Ingress is used to queue up requests received from the ringunc_s_rxr_occupancy.bl_bncuncore interconnectIngress Occupancy; BL - Bouncesevent=0x11,umask=801Occupancy event for the Ingress buffers in the Sbo.  The Ingress is used to queue up requests received from the ringunc_s_rxr_occupancy.bl_crduncore interconnectIngress Occupancy; BL - Creditsevent=0x11,umask=401Occupancy event for the Ingress buffers in the Sbo.  The Ingress is used to queue up requests received from the ringunc_s_rxr_occupancy.ivuncore interconnectIngress Occupancy; IVevent=0x11,umask=0x2001Occupancy event for the Ingress buffers in the Sbo.  The Ingress is used to queue up requests received from the ringunc_s_txr_ads_used.aduncore interconnectUNC_S_TxR_ADS_USED.ADevent=4,umask=101unc_s_txr_ads_used.akuncore interconnectUNC_S_TxR_ADS_USED.AKevent=4,umask=201unc_s_txr_ads_used.bluncore interconnectUNC_S_TxR_ADS_USED.BLevent=4,umask=401unc_s_txr_inserts.ad_bncuncore interconnectEgress Allocations; AD - Bouncesevent=2,umask=201Number of allocations into the Sbo Egress.  The Egress is used to queue up requests destined for the ringunc_s_txr_inserts.ad_crduncore interconnectEgress Allocations; AD - Creditsevent=2,umask=101Number of allocations into the Sbo Egress.  The Egress is used to queue up requests destined for the ringunc_s_txr_inserts.akuncore interconnectEgress Allocations; AKevent=2,umask=0x1001Number of allocations into the Sbo Egress.  The Egress is used to queue up requests destined for the ringunc_s_txr_inserts.bl_bncuncore interconnectEgress Allocations; BL - Bouncesevent=2,umask=801Number of allocations into the Sbo Egress.  The Egress is used to queue up requests destined for the ringunc_s_txr_inserts.bl_crduncore interconnectEgress Allocations; BL - Creditsevent=2,umask=401Number of allocations into the Sbo Egress.  The Egress is used to queue up requests destined for the ringunc_s_txr_inserts.ivuncore interconnectEgress Allocations; IVevent=2,umask=0x2001Number of allocations into the Sbo Egress.  The Egress is used to queue up requests destined for the ringunc_s_txr_occupancy.ad_bncuncore interconnectEgress Occupancy; AD - Bouncesevent=1,umask=201Occupancy event for the Egress buffers in the Sbo.  The egress is used to queue up requests destined for the ringunc_s_txr_occupancy.ad_crduncore interconnectEgress Occupancy; AD - Creditsevent=1,umask=101Occupancy event for the Egress buffers in the Sbo.  The egress is used to queue up requests destined for the ringunc_s_txr_occupancy.akuncore interconnectEgress Occupancy; AKevent=1,umask=0x1001Occupancy event for the Egress buffers in the Sbo.  The egress is used to queue up requests destined for the ringunc_s_txr_occupancy.bl_bncuncore interconnectEgress Occupancy; BL - Bouncesevent=1,umask=801Occupancy event for the Egress buffers in the Sbo.  The egress is used to queue up requests destined for the ringunc_s_txr_occupancy.bl_crduncore interconnectEgress Occupancy; BL - Creditsevent=1,umask=401Occupancy event for the Egress buffers in the Sbo.  The egress is used to queue up requests destined for the ringunc_s_txr_occupancy.ivuncore interconnectEgress Occupancy; IVevent=1,umask=0x2001Occupancy event for the Egress buffers in the Sbo.  
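The Sbo encodings above are raw uncore PMU programmings. As a quick illustration (not part of this listing), the Python sketch below renders a few of the entries as perf-style event specifier strings; the PMU instance name "uncore_sbox_0" is an assumption about how the kernel exposes the Sbo boxes and should be verified under /sys/bus/event_source/devices.

# Minimal sketch: turn a few table entries into perf event specifiers.
# The dict keys/values are copied from the listing above; the PMU name
# "uncore_sbox_0" is an assumption, not something stated by this dump.
SBOX_EVENTS = {
    "unc_s_ring_bl_used.all":   {"event": 0x1D, "umask": 0xF},
    "unc_s_rxr_inserts.bl_crd": {"event": 0x13, "umask": 0x4},
    "unc_s_txr_inserts.bl_crd": {"event": 0x02, "umask": 0x4},
}

def event_spec(name: str, pmu: str = "uncore_sbox_0") -> str:
    """Render one table entry as a perf-style event specifier string."""
    enc = SBOX_EVENTS[name]
    return f"{pmu}/event={enc['event']:#x},umask={enc['umask']:#x}/"

if __name__ == "__main__":
    for name in SBOX_EVENTS:
        print(name, "->", event_spec(name))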
unc_s_txr_starved.* (uncore interconnect) -- Injection Starvation. event=0x3.
  Shared description: Counts injection starvation, triggered when the Egress cannot send a transaction onto the ring for a long period of time.
    .ad umask=0x1 (onto AD Ring); .ak umask=0x2 (onto AK Ring); .bl umask=0x4 (onto BL Ring); .iv umask=0x8 (onto IV Ring)

unc_u_clockticks (uncore interconnect) -- Clockticks in the UBOX using a dedicated 48-bit Fixed Counter. event=0xff.

llc_misses.mem_read (uncore memory) -- read requests to memory controller; derived from unc_m_cas_count.rd. event=0x4,umask=0x3. Scale: 64Bytes. Counts the total number of DRAM Read CAS commands issued on this channel (including underfills).
llc_misses.mem_write (uncore memory) -- write requests to memory controller; derived from unc_m_cas_count.wr. event=0x4,umask=0xc. Scale: 64Bytes. Counts the total number of DRAM Write CAS commands issued on this channel.
unc_m_clockticks (uncore memory) -- Clockticks in the Memory Controller using a dedicated 48-bit Fixed Counter. event=0xff.
unc_m_clockticks_p (uncore memory) -- Clockticks in the Memory Controller using one of the programmable counters. event=0x0.
unc_m_dclockticks (uncore memory) -- This event is deprecated; refer to the new event UNC_M_CLOCKTICKS_P. event=0x0.
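Both llc_misses events carry a 64Bytes scale, so per-channel DRAM bandwidth can be estimated directly from the two counts. A minimal worked example (the counter values are made-up placeholders, not measurements from this listing):

# Each CAS command moves one 64-byte cache line, per the 64Bytes scale above.
CACHE_LINE_BYTES = 64

def dram_bandwidth_gbs(rd_cas: int, wr_cas: int, seconds: float) -> float:
    """Estimate DRAM bandwidth in GB/s from read+write CAS counts over an interval."""
    return (rd_cas + wr_cas) * CACHE_LINE_BYTES / seconds / 1e9

# e.g. 50M reads + 20M writes over 1 s -> (70e6 * 64) / 1e9 = 4.48 GB/s
print(f"{dram_bandwidth_gbs(50_000_000, 20_000_000, 1.0):.2f} GB/s")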
l1d.replacement (cache) -- L1D data line replacements. event=0x51,umask=0x1,period=2000003. Counts L1D data line replacements, including opportunistic replacements and replacements that require stall-for-replace or block-for-replace.
l1d_pend_miss.fb_full (cache) -- event=0x48,umask=0x2,period=2000003. Number of times a request needed a Fill Buffer (FB) entry but none was available, i.e. FB unavailability was the dominant reason for blocking the request. A request includes cacheable/uncacheable demand loads, stores, and SW prefetch instructions.
l1d_pend_miss.pending (cache) -- L1D miss outstandings duration in cycles. event=0x48,umask=0x1,period=2000003. Counts the duration of L1D misses outstanding: each cycle, the number of Fill Buffers outstanding that are required by demand reads. An FB is either held by a demand load, or held by a non-demand load and hit at least once by demand. The valid outstanding interval runs until FB deallocation, starting from FB allocation if the FB is allocated by demand, or from the demand hit on the FB if it is allocated by hardware or software prefetch. Note: in the L1D, a demand read covers cacheable and noncacheable demand loads, including ones causing cache-line splits and reads due to page walks resulting from any request type.
l1d_pend_miss.pending_cycles (cache) -- Cycles with L1D load misses outstanding. event=0x48,umask=0x1,cmask=1,period=2000003.
l2_lines_in.all (cache) -- L2 cache lines filling L2. event=0xf1,umask=0x1f,period=100003. Counting does not cover rejects.
l2_lines_out.non_silent (cache) -- event=0xf2,umask=0x2,period=200003. Counts lines evicted by the L2 cache when triggered by an L2 cache fill. The lines can be in modified or clean state; modified lines may be written back to L3 or written directly to memory and not allocated in L3, and clean lines may be allocated in L3 or dropped.
l2_lines_out.silent (cache) -- event=0xf2,umask=0x1,period=200003. Counts lines silently dropped by the L2 cache when triggered by an L2 cache fill; these lines are typically in Shared state. A non-threaded event.
l2_lines_out.useless_hwpf (cache) -- event=0xf2,umask=0x4,period=200003. Counts lines that were hardware prefetched but never used and are now evicted by the L2 cache.
l2_lines_out.useless_pref (cache) -- This event is deprecated; refer to the new event L2_LINES_OUT.USELESS_HWPF. event=0xf2,umask=0x4,period=200003.
l2_rqsts.all_demand_data_rd (cache) -- Demand Data Read requests. event=0x24,umask=0xe1,period=200003. Counts demand Data Read requests (including requests from L1D hardware prefetchers); these loads may hit or miss the L2 cache. Only non-rejected loads are counted.
l2_rqsts.all_demand_miss (cache) -- Demand requests that miss L2 cache. event=0x24,umask=0x27,period=200003.
l2_rqsts.all_demand_references (cache) -- Demand requests to L2 cache. event=0x24,umask=0xe7,period=200003.
l2_rqsts.all_pf (cache) -- Requests from the L1/L2/L3 hardware prefetchers or Load software prefetches. event=0x24,umask=0xf8,period=200003.
l2_rqsts.demand_data_rd_miss (cache) -- Demand Data Read requests that miss L2, no rejects. event=0x24,umask=0x21,period=200003.
l2_rqsts.miss (cache) -- All requests that miss L2 cache. event=0x24,umask=0x3f,period=200003.
l2_rqsts.pf_hit (cache) -- Requests from the L1/L2/L3 hardware prefetchers or Load software prefetches that hit L2 cache. event=0x24,umask=0xd8,period=200003.
l2_rqsts.pf_miss (cache) -- Requests from the L1/L2/L3 hardware prefetchers or Load software prefetches that miss L2 cache. event=0x24,umask=0x38,period=200003.
l2_rqsts.references (cache) -- All L2 requests. event=0x24,umask=0xff,period=200003.
l2_trans.l2_wb (cache) -- L2 writebacks that access L2 cache. event=0xf0,umask=0x40,period=200003.
longest_lat_cache.miss (cache) -- Core-originated cacheable demand requests that missed L3. Spec update: SKL057. event=0x2e,umask=0x41,period=100003. Requests include data and code reads, Reads-for-Ownership (RFOs), speculative accesses and hardware prefetches from L1 and L2; does not include all misses to the L3.
longest_lat_cache.reference (cache) -- Core-originated cacheable demand requests that refer to L3. Spec update: SKL057. event=0x2e,umask=0x4f,period=100003. Same request mix as above; does not include all accesses to the L3.
mem_inst_retired.all_loads (cache) -- Retired load instructions. event=0xd0,umask=0x81,period=2000003. Precise event; supports address when precise. Accounts for SW prefetch instructions (PREFETCHNTA, PREFETCHT0/1/2, PREFETCHW).
mem_inst_retired.all_stores (cache) -- Retired store instructions. event=0xd0,umask=0x82,period=2000003. Precise event; supports address when precise.
mem_inst_retired.any (cache) -- All retired memory instructions (loads and stores). event=0xd0,umask=0x83,period=2000003. Precise event; supports address when precise.
mem_inst_retired.lock_loads (cache) -- Retired load instructions with locked access. event=0xd0,umask=0x21,period=100007. Precise event; supports address when precise.
mem_load_l3_hit_retired.xsnp_hit (cache) -- Retired loads whose data source was L3 with a cross-core snoop hit in an on-pkg core cache. event=0xd2,umask=0x2,period=20011. Precise event; supports address when precise.
mem_load_l3_hit_retired.xsnp_hitm (cache) -- Retired loads whose data source was a HitM response from shared L3. event=0xd2,umask=0x4,period=20011. Precise event; supports address when precise.
mem_load_l3_hit_retired.xsnp_miss (cache) -- Retired loads whose data source was an L3 hit with a cross-core snoop miss in on-pkg core caches. event=0xd2,umask=0x1,period=20011. Precise event; supports address when precise.
mem_load_l3_hit_retired.xsnp_none (cache) -- Retired loads that hit in L3 without snoops required. event=0xd2,umask=0x8,period=100003. Precise event; supports address when precise.
mem_load_l3_miss_retired.remote_dram (cache) -- Retired loads that missed L3 but were serviced from remote DRAM. event=0xd3,umask=0x2,period=100007. Precise event; supports address when precise.
mem_load_l3_miss_retired.remote_fwd (cache) -- Retired loads whose data was forwarded from a remote cache. event=0xd3,umask=0x8,period=100007. Supports address when precise.
mem_load_l3_miss_retired.remote_hitm (cache) -- Retired loads whose data source was a remote HITM. event=0xd3,umask=0x4,period=100007. Precise event; supports address when precise.
mem_load_l3_miss_retired.remote_pmm (cache) -- Retired loads with remote Intel(R) Optane(TM) DC persistent memory as the data source, where the request missed L3 (AppDirect or Memory Mode) and DRAM cache (Memory Mode). event=0xd3,umask=0x10,period=100007. Precise event; supports address when precise.
mem_load_misc_retired.uc (cache) -- Retired instructions with at least one uncacheable load or lock. event=0xd4,umask=0x4,period=100007. Precise event; supports address when precise.
mem_load_retired.fb_hit (cache) -- Retired loads that missed L1 but hit an FB (Fill Buffer) due to a preceding miss to the same cache line with the data not ready. event=0xd1,umask=0x40,period=100007. Precise event; supports address when precise.
mem_load_retired.l1_hit (cache) -- Retired loads with at least one uop that hit in the L1 data cache; includes all SW prefetches and lock instructions regardless of data source. event=0xd1,umask=0x1,period=2000003. Precise event; supports address when precise.
mem_load_retired.l1_miss (cache) -- Retired loads with at least one uop that missed in the L1 cache. event=0xd1,umask=0x8,period=100003. Precise event; supports address when precise.
mem_load_retired.l2_hit (cache) -- Retired loads with L2 cache hits as data sources. event=0xd1,umask=0x2,period=100003. Precise event; supports address when precise.
mem_load_retired.l2_miss (cache) -- Retired loads that missed L2 cache. event=0xd1,umask=0x10,period=50021. Precise event; supports address when precise.
mem_load_retired.l3_hit (cache) -- Retired loads with at least one uop that hit in the L3 cache. event=0xd1,umask=0x4,period=50021. Precise event; supports address when precise.
mem_load_retired.l3_miss (cache) -- Retired loads with at least one uop that missed in the L3 cache. event=0xd1,umask=0x20,period=100007. Precise event; supports address when precise.
mem_load_retired.local_pmm (cache) -- Retired loads with local Intel(R) Optane(TM) DC persistent memory as the data source, where the request missed L3 (AppDirect or Memory Mode) and DRAM cache (Memory Mode). event=0xd1,umask=0x80,period=100003. Precise event; supports address when precise.
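A couple of derived ratios these core events support, as a hedged illustration. The counter values below are placeholders, and the latency formula is an approximation: l1d_pend_miss.pending accumulates outstanding fill-buffer occupancy per cycle, so dividing it by completed L1 misses yields an average outstanding duration, not an architecturally defined latency.

def l2_demand_miss_ratio(all_demand_miss: int, all_demand_references: int) -> float:
    """l2_rqsts.all_demand_miss / l2_rqsts.all_demand_references."""
    return all_demand_miss / all_demand_references

def avg_l1d_miss_outstanding_cycles(pending: int, l1_miss: int) -> float:
    """Approximate cycles an L1D miss stays outstanding:
    l1d_pend_miss.pending / mem_load_retired.l1_miss."""
    return pending / l1_miss

print(l2_demand_miss_ratio(1_200, 10_000))                 # -> 0.12
print(avg_l1d_miss_outstanding_cycles(900_000, 30_000))    # -> 30.0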
OCR.ALL_DATA_RD.L3_HIT.HIT_OTHER_CORE_NO_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x4003C049100ocr.all_data_rd.l3_hit.no_snoop_neededcacheOCR.ALL_DATA_RD.L3_HIT.NO_SNOOP_NEEDED OCR.ALL_DATA_RD.L3_HIT.NO_SNOOP_NEEDED OCR.ALL_DATA_RD.L3_HIT.NO_SNOOP_NEEDEDevent=0xb7,period=100003,umask=1,offcore_rsp=0x1003C049100ocr.all_data_rd.l3_hit.snoop_hit_with_fwdcacheOCR.ALL_DATA_RD.L3_HIT.SNOOP_HIT_WITH_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x8007C049100ocr.all_data_rd.l3_hit.snoop_misscacheOCR.ALL_DATA_RD.L3_HIT.SNOOP_MISS OCR.ALL_DATA_RD.L3_HIT.SNOOP_MISSevent=0xb7,period=100003,umask=1,offcore_rsp=0x2003C049100ocr.all_data_rd.l3_hit.snoop_nonecacheOCR.ALL_DATA_RD.L3_HIT.SNOOP_NONE OCR.ALL_DATA_RD.L3_HIT.SNOOP_NONEevent=0xb7,period=100003,umask=1,offcore_rsp=0x803C049100ocr.all_data_rd.l3_hit_e.any_snoopcacheOCR.ALL_DATA_RD.L3_HIT_E.ANY_SNOOP  OCR.ALL_DATA_RD.L3_HIT_E.ANY_SNOOPevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F8008049100ocr.all_data_rd.l3_hit_e.hitm_other_corecacheOCR.ALL_DATA_RD.L3_HIT_E.HITM_OTHER_CORE  OCR.ALL_DATA_RD.L3_HIT_E.HITM_OTHER_COREevent=0xb7,period=100003,umask=1,offcore_rsp=0x100008049100ocr.all_data_rd.l3_hit_e.hit_other_core_fwdcacheOCR.ALL_DATA_RD.L3_HIT_E.HIT_OTHER_CORE_FWD  OCR.ALL_DATA_RD.L3_HIT_E.HIT_OTHER_CORE_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x80008049100ocr.all_data_rd.l3_hit_e.hit_other_core_no_fwdcacheOCR.ALL_DATA_RD.L3_HIT_E.HIT_OTHER_CORE_NO_FWD  OCR.ALL_DATA_RD.L3_HIT_E.HIT_OTHER_CORE_NO_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x40008049100ocr.all_data_rd.l3_hit_e.no_snoop_neededcacheOCR.ALL_DATA_RD.L3_HIT_E.NO_SNOOP_NEEDED  OCR.ALL_DATA_RD.L3_HIT_E.NO_SNOOP_NEEDEDevent=0xb7,period=100003,umask=1,offcore_rsp=0x10008049100ocr.all_data_rd.l3_hit_e.snoop_misscacheOCR.ALL_DATA_RD.L3_HIT_E.SNOOP_MISSevent=0xb7,period=100003,umask=1,offcore_rsp=0x20008049100ocr.all_data_rd.l3_hit_e.snoop_nonecacheOCR.ALL_DATA_RD.L3_HIT_E.SNOOP_NONEevent=0xb7,period=100003,umask=1,offcore_rsp=0x8008049100ocr.all_data_rd.l3_hit_f.any_snoopcacheOCR.ALL_DATA_RD.L3_HIT_F.ANY_SNOOP  OCR.ALL_DATA_RD.L3_HIT_F.ANY_SNOOPevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F8020049100ocr.all_data_rd.l3_hit_f.hitm_other_corecacheOCR.ALL_DATA_RD.L3_HIT_F.HITM_OTHER_CORE  OCR.ALL_DATA_RD.L3_HIT_F.HITM_OTHER_COREevent=0xb7,period=100003,umask=1,offcore_rsp=0x100020049100ocr.all_data_rd.l3_hit_f.hit_other_core_fwdcacheOCR.ALL_DATA_RD.L3_HIT_F.HIT_OTHER_CORE_FWD  OCR.ALL_DATA_RD.L3_HIT_F.HIT_OTHER_CORE_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x80020049100ocr.all_data_rd.l3_hit_f.hit_other_core_no_fwdcacheOCR.ALL_DATA_RD.L3_HIT_F.HIT_OTHER_CORE_NO_FWD  OCR.ALL_DATA_RD.L3_HIT_F.HIT_OTHER_CORE_NO_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x40020049100ocr.all_data_rd.l3_hit_f.no_snoop_neededcacheOCR.ALL_DATA_RD.L3_HIT_F.NO_SNOOP_NEEDED  OCR.ALL_DATA_RD.L3_HIT_F.NO_SNOOP_NEEDEDevent=0xb7,period=100003,umask=1,offcore_rsp=0x10020049100ocr.all_data_rd.l3_hit_f.snoop_misscacheOCR.ALL_DATA_RD.L3_HIT_F.SNOOP_MISSevent=0xb7,period=100003,umask=1,offcore_rsp=0x20020049100ocr.all_data_rd.l3_hit_f.snoop_nonecacheOCR.ALL_DATA_RD.L3_HIT_F.SNOOP_NONEevent=0xb7,period=100003,umask=1,offcore_rsp=0x8020049100ocr.all_data_rd.l3_hit_m.any_snoopcacheOCR.ALL_DATA_RD.L3_HIT_M.ANY_SNOOP  OCR.ALL_DATA_RD.L3_HIT_M.ANY_SNOOPevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F8004049100ocr.all_data_rd.l3_hit_m.hitm_other_corecacheOCR.ALL_DATA_RD.L3_HIT_M.HITM_OTHER_CORE  
OCR.ALL_DATA_RD.L3_HIT_M.HITM_OTHER_COREevent=0xb7,period=100003,umask=1,offcore_rsp=0x100004049100ocr.all_data_rd.l3_hit_m.hit_other_core_fwdcacheOCR.ALL_DATA_RD.L3_HIT_M.HIT_OTHER_CORE_FWD  OCR.ALL_DATA_RD.L3_HIT_M.HIT_OTHER_CORE_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x80004049100ocr.all_data_rd.l3_hit_m.hit_other_core_no_fwdcacheOCR.ALL_DATA_RD.L3_HIT_M.HIT_OTHER_CORE_NO_FWD  OCR.ALL_DATA_RD.L3_HIT_M.HIT_OTHER_CORE_NO_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x40004049100ocr.all_data_rd.l3_hit_m.no_snoop_neededcacheOCR.ALL_DATA_RD.L3_HIT_M.NO_SNOOP_NEEDED  OCR.ALL_DATA_RD.L3_HIT_M.NO_SNOOP_NEEDEDevent=0xb7,period=100003,umask=1,offcore_rsp=0x10004049100ocr.all_data_rd.l3_hit_m.snoop_misscacheOCR.ALL_DATA_RD.L3_HIT_M.SNOOP_MISSevent=0xb7,period=100003,umask=1,offcore_rsp=0x20004049100ocr.all_data_rd.l3_hit_m.snoop_nonecacheOCR.ALL_DATA_RD.L3_HIT_M.SNOOP_NONEevent=0xb7,period=100003,umask=1,offcore_rsp=0x8004049100ocr.all_data_rd.l3_hit_s.any_snoopcacheOCR.ALL_DATA_RD.L3_HIT_S.ANY_SNOOP  OCR.ALL_DATA_RD.L3_HIT_S.ANY_SNOOPevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F8010049100ocr.all_data_rd.l3_hit_s.hitm_other_corecacheOCR.ALL_DATA_RD.L3_HIT_S.HITM_OTHER_CORE  OCR.ALL_DATA_RD.L3_HIT_S.HITM_OTHER_COREevent=0xb7,period=100003,umask=1,offcore_rsp=0x100010049100ocr.all_data_rd.l3_hit_s.hit_other_core_fwdcacheOCR.ALL_DATA_RD.L3_HIT_S.HIT_OTHER_CORE_FWD  OCR.ALL_DATA_RD.L3_HIT_S.HIT_OTHER_CORE_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x80010049100ocr.all_data_rd.l3_hit_s.hit_other_core_no_fwdcacheOCR.ALL_DATA_RD.L3_HIT_S.HIT_OTHER_CORE_NO_FWD  OCR.ALL_DATA_RD.L3_HIT_S.HIT_OTHER_CORE_NO_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x40010049100ocr.all_data_rd.l3_hit_s.no_snoop_neededcacheOCR.ALL_DATA_RD.L3_HIT_S.NO_SNOOP_NEEDED  OCR.ALL_DATA_RD.L3_HIT_S.NO_SNOOP_NEEDEDevent=0xb7,period=100003,umask=1,offcore_rsp=0x10010049100ocr.all_data_rd.l3_hit_s.snoop_misscacheOCR.ALL_DATA_RD.L3_HIT_S.SNOOP_MISSevent=0xb7,period=100003,umask=1,offcore_rsp=0x20010049100ocr.all_data_rd.l3_hit_s.snoop_nonecacheOCR.ALL_DATA_RD.L3_HIT_S.SNOOP_NONEevent=0xb7,period=100003,umask=1,offcore_rsp=0x8010049100ocr.all_pf_data_rd.l3_hit.any_snoopcacheOCR.ALL_PF_DATA_RD.L3_HIT.ANY_SNOOP OCR.ALL_PF_DATA_RD.L3_HIT.ANY_SNOOP OCR.ALL_PF_DATA_RD.L3_HIT.ANY_SNOOPevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F803C049000ocr.all_pf_data_rd.l3_hit.hitm_other_corecacheOCR.ALL_PF_DATA_RD.L3_HIT.HITM_OTHER_CORE OCR.ALL_PF_DATA_RD.L3_HIT.HITM_OTHER_CORE OCR.ALL_PF_DATA_RD.L3_HIT.HITM_OTHER_COREevent=0xb7,period=100003,umask=1,offcore_rsp=0x10003C049000ocr.all_pf_data_rd.l3_hit.hit_other_core_fwdcacheOCR.ALL_PF_DATA_RD.L3_HIT.HIT_OTHER_CORE_FWD OCR.ALL_PF_DATA_RD.L3_HIT.HIT_OTHER_CORE_FWD OCR.ALL_PF_DATA_RD.L3_HIT.HIT_OTHER_CORE_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x8003C049000ocr.all_pf_data_rd.l3_hit.hit_other_core_no_fwdcacheOCR.ALL_PF_DATA_RD.L3_HIT.HIT_OTHER_CORE_NO_FWD OCR.ALL_PF_DATA_RD.L3_HIT.HIT_OTHER_CORE_NO_FWD OCR.ALL_PF_DATA_RD.L3_HIT.HIT_OTHER_CORE_NO_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x4003C049000ocr.all_pf_data_rd.l3_hit.no_snoop_neededcacheOCR.ALL_PF_DATA_RD.L3_HIT.NO_SNOOP_NEEDED OCR.ALL_PF_DATA_RD.L3_HIT.NO_SNOOP_NEEDED OCR.ALL_PF_DATA_RD.L3_HIT.NO_SNOOP_NEEDEDevent=0xb7,period=100003,umask=1,offcore_rsp=0x1003C049000ocr.all_pf_data_rd.l3_hit.snoop_hit_with_fwdcacheOCR.ALL_PF_DATA_RD.L3_HIT.SNOOP_HIT_WITH_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x8007C049000ocr.all_pf_data_rd.l3_hit.snoop_misscacheOCR.ALL_PF_DATA_RD.L3_HIT.SNOOP_MISS 
OCR.ALL_PF_DATA_RD.L3_HIT.SNOOP_MISSevent=0xb7,period=100003,umask=1,offcore_rsp=0x2003C049000ocr.all_pf_data_rd.l3_hit.snoop_nonecacheOCR.ALL_PF_DATA_RD.L3_HIT.SNOOP_NONE OCR.ALL_PF_DATA_RD.L3_HIT.SNOOP_NONEevent=0xb7,period=100003,umask=1,offcore_rsp=0x803C049000ocr.all_pf_data_rd.l3_hit_e.any_snoopcacheOCR.ALL_PF_DATA_RD.L3_HIT_E.ANY_SNOOP  OCR.ALL_PF_DATA_RD.L3_HIT_E.ANY_SNOOPevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F8008049000ocr.all_pf_data_rd.l3_hit_e.hitm_other_corecacheOCR.ALL_PF_DATA_RD.L3_HIT_E.HITM_OTHER_CORE  OCR.ALL_PF_DATA_RD.L3_HIT_E.HITM_OTHER_COREevent=0xb7,period=100003,umask=1,offcore_rsp=0x100008049000ocr.all_pf_data_rd.l3_hit_e.hit_other_core_fwdcacheOCR.ALL_PF_DATA_RD.L3_HIT_E.HIT_OTHER_CORE_FWD  OCR.ALL_PF_DATA_RD.L3_HIT_E.HIT_OTHER_CORE_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x80008049000ocr.all_pf_data_rd.l3_hit_e.hit_other_core_no_fwdcacheOCR.ALL_PF_DATA_RD.L3_HIT_E.HIT_OTHER_CORE_NO_FWD  OCR.ALL_PF_DATA_RD.L3_HIT_E.HIT_OTHER_CORE_NO_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x40008049000ocr.all_pf_data_rd.l3_hit_e.no_snoop_neededcacheOCR.ALL_PF_DATA_RD.L3_HIT_E.NO_SNOOP_NEEDED  OCR.ALL_PF_DATA_RD.L3_HIT_E.NO_SNOOP_NEEDEDevent=0xb7,period=100003,umask=1,offcore_rsp=0x10008049000ocr.all_pf_data_rd.l3_hit_e.snoop_misscacheOCR.ALL_PF_DATA_RD.L3_HIT_E.SNOOP_MISSevent=0xb7,period=100003,umask=1,offcore_rsp=0x20008049000ocr.all_pf_data_rd.l3_hit_e.snoop_nonecacheOCR.ALL_PF_DATA_RD.L3_HIT_E.SNOOP_NONEevent=0xb7,period=100003,umask=1,offcore_rsp=0x8008049000ocr.all_pf_data_rd.l3_hit_f.any_snoopcacheOCR.ALL_PF_DATA_RD.L3_HIT_F.ANY_SNOOP  OCR.ALL_PF_DATA_RD.L3_HIT_F.ANY_SNOOPevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F8020049000ocr.all_pf_data_rd.l3_hit_f.hitm_other_corecacheOCR.ALL_PF_DATA_RD.L3_HIT_F.HITM_OTHER_CORE  OCR.ALL_PF_DATA_RD.L3_HIT_F.HITM_OTHER_COREevent=0xb7,period=100003,umask=1,offcore_rsp=0x100020049000ocr.all_pf_data_rd.l3_hit_f.hit_other_core_fwdcacheOCR.ALL_PF_DATA_RD.L3_HIT_F.HIT_OTHER_CORE_FWD  OCR.ALL_PF_DATA_RD.L3_HIT_F.HIT_OTHER_CORE_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x80020049000ocr.all_pf_data_rd.l3_hit_f.hit_other_core_no_fwdcacheOCR.ALL_PF_DATA_RD.L3_HIT_F.HIT_OTHER_CORE_NO_FWD  OCR.ALL_PF_DATA_RD.L3_HIT_F.HIT_OTHER_CORE_NO_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x40020049000ocr.all_pf_data_rd.l3_hit_f.no_snoop_neededcacheOCR.ALL_PF_DATA_RD.L3_HIT_F.NO_SNOOP_NEEDED  OCR.ALL_PF_DATA_RD.L3_HIT_F.NO_SNOOP_NEEDEDevent=0xb7,period=100003,umask=1,offcore_rsp=0x10020049000ocr.all_pf_data_rd.l3_hit_f.snoop_misscacheOCR.ALL_PF_DATA_RD.L3_HIT_F.SNOOP_MISSevent=0xb7,period=100003,umask=1,offcore_rsp=0x20020049000ocr.all_pf_data_rd.l3_hit_f.snoop_nonecacheOCR.ALL_PF_DATA_RD.L3_HIT_F.SNOOP_NONEevent=0xb7,period=100003,umask=1,offcore_rsp=0x8020049000ocr.all_pf_data_rd.l3_hit_m.any_snoopcacheOCR.ALL_PF_DATA_RD.L3_HIT_M.ANY_SNOOP  OCR.ALL_PF_DATA_RD.L3_HIT_M.ANY_SNOOPevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F8004049000ocr.all_pf_data_rd.l3_hit_m.hitm_other_corecacheOCR.ALL_PF_DATA_RD.L3_HIT_M.HITM_OTHER_CORE  OCR.ALL_PF_DATA_RD.L3_HIT_M.HITM_OTHER_COREevent=0xb7,period=100003,umask=1,offcore_rsp=0x100004049000ocr.all_pf_data_rd.l3_hit_m.hit_other_core_fwdcacheOCR.ALL_PF_DATA_RD.L3_HIT_M.HIT_OTHER_CORE_FWD  OCR.ALL_PF_DATA_RD.L3_HIT_M.HIT_OTHER_CORE_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x80004049000ocr.all_pf_data_rd.l3_hit_m.hit_other_core_no_fwdcacheOCR.ALL_PF_DATA_RD.L3_HIT_M.HIT_OTHER_CORE_NO_FWD  
OCR.ALL_PF_DATA_RD.L3_HIT_M.HIT_OTHER_CORE_NO_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x40004049000ocr.all_pf_data_rd.l3_hit_m.no_snoop_neededcacheOCR.ALL_PF_DATA_RD.L3_HIT_M.NO_SNOOP_NEEDED  OCR.ALL_PF_DATA_RD.L3_HIT_M.NO_SNOOP_NEEDEDevent=0xb7,period=100003,umask=1,offcore_rsp=0x10004049000ocr.all_pf_data_rd.l3_hit_m.snoop_misscacheOCR.ALL_PF_DATA_RD.L3_HIT_M.SNOOP_MISSevent=0xb7,period=100003,umask=1,offcore_rsp=0x20004049000ocr.all_pf_data_rd.l3_hit_m.snoop_nonecacheOCR.ALL_PF_DATA_RD.L3_HIT_M.SNOOP_NONEevent=0xb7,period=100003,umask=1,offcore_rsp=0x8004049000ocr.all_pf_data_rd.l3_hit_s.any_snoopcacheOCR.ALL_PF_DATA_RD.L3_HIT_S.ANY_SNOOP  OCR.ALL_PF_DATA_RD.L3_HIT_S.ANY_SNOOPevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F8010049000ocr.all_pf_data_rd.l3_hit_s.hitm_other_corecacheOCR.ALL_PF_DATA_RD.L3_HIT_S.HITM_OTHER_CORE  OCR.ALL_PF_DATA_RD.L3_HIT_S.HITM_OTHER_COREevent=0xb7,period=100003,umask=1,offcore_rsp=0x100010049000ocr.all_pf_data_rd.l3_hit_s.hit_other_core_fwdcacheOCR.ALL_PF_DATA_RD.L3_HIT_S.HIT_OTHER_CORE_FWD  OCR.ALL_PF_DATA_RD.L3_HIT_S.HIT_OTHER_CORE_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x80010049000ocr.all_pf_data_rd.l3_hit_s.hit_other_core_no_fwdcacheOCR.ALL_PF_DATA_RD.L3_HIT_S.HIT_OTHER_CORE_NO_FWD  OCR.ALL_PF_DATA_RD.L3_HIT_S.HIT_OTHER_CORE_NO_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x40010049000ocr.all_pf_data_rd.l3_hit_s.no_snoop_neededcacheOCR.ALL_PF_DATA_RD.L3_HIT_S.NO_SNOOP_NEEDED  OCR.ALL_PF_DATA_RD.L3_HIT_S.NO_SNOOP_NEEDEDevent=0xb7,period=100003,umask=1,offcore_rsp=0x10010049000ocr.all_pf_data_rd.l3_hit_s.snoop_misscacheOCR.ALL_PF_DATA_RD.L3_HIT_S.SNOOP_MISSevent=0xb7,period=100003,umask=1,offcore_rsp=0x20010049000ocr.all_pf_data_rd.l3_hit_s.snoop_nonecacheOCR.ALL_PF_DATA_RD.L3_HIT_S.SNOOP_NONEevent=0xb7,period=100003,umask=1,offcore_rsp=0x8010049000ocr.all_pf_rfo.l3_hit.any_snoopcacheOCR.ALL_PF_RFO.L3_HIT.ANY_SNOOP OCR.ALL_PF_RFO.L3_HIT.ANY_SNOOP OCR.ALL_PF_RFO.L3_HIT.ANY_SNOOPevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F803C012000ocr.all_pf_rfo.l3_hit.hitm_other_corecacheOCR.ALL_PF_RFO.L3_HIT.HITM_OTHER_CORE OCR.ALL_PF_RFO.L3_HIT.HITM_OTHER_CORE OCR.ALL_PF_RFO.L3_HIT.HITM_OTHER_COREevent=0xb7,period=100003,umask=1,offcore_rsp=0x10003C012000ocr.all_pf_rfo.l3_hit.hit_other_core_fwdcacheOCR.ALL_PF_RFO.L3_HIT.HIT_OTHER_CORE_FWD OCR.ALL_PF_RFO.L3_HIT.HIT_OTHER_CORE_FWD OCR.ALL_PF_RFO.L3_HIT.HIT_OTHER_CORE_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x8003C012000ocr.all_pf_rfo.l3_hit.hit_other_core_no_fwdcacheOCR.ALL_PF_RFO.L3_HIT.HIT_OTHER_CORE_NO_FWD OCR.ALL_PF_RFO.L3_HIT.HIT_OTHER_CORE_NO_FWD OCR.ALL_PF_RFO.L3_HIT.HIT_OTHER_CORE_NO_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x4003C012000ocr.all_pf_rfo.l3_hit.no_snoop_neededcacheOCR.ALL_PF_RFO.L3_HIT.NO_SNOOP_NEEDED OCR.ALL_PF_RFO.L3_HIT.NO_SNOOP_NEEDED OCR.ALL_PF_RFO.L3_HIT.NO_SNOOP_NEEDEDevent=0xb7,period=100003,umask=1,offcore_rsp=0x1003C012000ocr.all_pf_rfo.l3_hit.snoop_hit_with_fwdcacheOCR.ALL_PF_RFO.L3_HIT.SNOOP_HIT_WITH_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x8007C012000ocr.all_pf_rfo.l3_hit.snoop_misscacheOCR.ALL_PF_RFO.L3_HIT.SNOOP_MISS OCR.ALL_PF_RFO.L3_HIT.SNOOP_MISSevent=0xb7,period=100003,umask=1,offcore_rsp=0x2003C012000ocr.all_pf_rfo.l3_hit.snoop_nonecacheOCR.ALL_PF_RFO.L3_HIT.SNOOP_NONE OCR.ALL_PF_RFO.L3_HIT.SNOOP_NONEevent=0xb7,period=100003,umask=1,offcore_rsp=0x803C012000ocr.all_pf_rfo.l3_hit_e.any_snoopcacheOCR.ALL_PF_RFO.L3_HIT_E.ANY_SNOOP  
OCR.ALL_PF_RFO.L3_HIT_E.ANY_SNOOPevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F8008012000ocr.all_pf_rfo.l3_hit_e.hitm_other_corecacheOCR.ALL_PF_RFO.L3_HIT_E.HITM_OTHER_CORE  OCR.ALL_PF_RFO.L3_HIT_E.HITM_OTHER_COREevent=0xb7,period=100003,umask=1,offcore_rsp=0x100008012000ocr.all_pf_rfo.l3_hit_e.hit_other_core_fwdcacheOCR.ALL_PF_RFO.L3_HIT_E.HIT_OTHER_CORE_FWD  OCR.ALL_PF_RFO.L3_HIT_E.HIT_OTHER_CORE_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x80008012000ocr.all_pf_rfo.l3_hit_e.hit_other_core_no_fwdcacheOCR.ALL_PF_RFO.L3_HIT_E.HIT_OTHER_CORE_NO_FWD  OCR.ALL_PF_RFO.L3_HIT_E.HIT_OTHER_CORE_NO_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x40008012000ocr.all_pf_rfo.l3_hit_e.no_snoop_neededcacheOCR.ALL_PF_RFO.L3_HIT_E.NO_SNOOP_NEEDED  OCR.ALL_PF_RFO.L3_HIT_E.NO_SNOOP_NEEDEDevent=0xb7,period=100003,umask=1,offcore_rsp=0x10008012000ocr.all_pf_rfo.l3_hit_e.snoop_misscacheOCR.ALL_PF_RFO.L3_HIT_E.SNOOP_MISSevent=0xb7,period=100003,umask=1,offcore_rsp=0x20008012000ocr.all_pf_rfo.l3_hit_e.snoop_nonecacheOCR.ALL_PF_RFO.L3_HIT_E.SNOOP_NONEevent=0xb7,period=100003,umask=1,offcore_rsp=0x8008012000ocr.all_pf_rfo.l3_hit_f.any_snoopcacheOCR.ALL_PF_RFO.L3_HIT_F.ANY_SNOOP  OCR.ALL_PF_RFO.L3_HIT_F.ANY_SNOOPevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F8020012000ocr.all_pf_rfo.l3_hit_f.hitm_other_corecacheOCR.ALL_PF_RFO.L3_HIT_F.HITM_OTHER_CORE  OCR.ALL_PF_RFO.L3_HIT_F.HITM_OTHER_COREevent=0xb7,period=100003,umask=1,offcore_rsp=0x100020012000ocr.all_pf_rfo.l3_hit_f.hit_other_core_fwdcacheOCR.ALL_PF_RFO.L3_HIT_F.HIT_OTHER_CORE_FWD  OCR.ALL_PF_RFO.L3_HIT_F.HIT_OTHER_CORE_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x80020012000ocr.all_pf_rfo.l3_hit_f.hit_other_core_no_fwdcacheOCR.ALL_PF_RFO.L3_HIT_F.HIT_OTHER_CORE_NO_FWD  OCR.ALL_PF_RFO.L3_HIT_F.HIT_OTHER_CORE_NO_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x40020012000ocr.all_pf_rfo.l3_hit_f.no_snoop_neededcacheOCR.ALL_PF_RFO.L3_HIT_F.NO_SNOOP_NEEDED  OCR.ALL_PF_RFO.L3_HIT_F.NO_SNOOP_NEEDEDevent=0xb7,period=100003,umask=1,offcore_rsp=0x10020012000ocr.all_pf_rfo.l3_hit_f.snoop_misscacheOCR.ALL_PF_RFO.L3_HIT_F.SNOOP_MISSevent=0xb7,period=100003,umask=1,offcore_rsp=0x20020012000ocr.all_pf_rfo.l3_hit_f.snoop_nonecacheOCR.ALL_PF_RFO.L3_HIT_F.SNOOP_NONEevent=0xb7,period=100003,umask=1,offcore_rsp=0x8020012000ocr.all_pf_rfo.l3_hit_m.any_snoopcacheOCR.ALL_PF_RFO.L3_HIT_M.ANY_SNOOP  OCR.ALL_PF_RFO.L3_HIT_M.ANY_SNOOPevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F8004012000ocr.all_pf_rfo.l3_hit_m.hitm_other_corecacheOCR.ALL_PF_RFO.L3_HIT_M.HITM_OTHER_CORE  OCR.ALL_PF_RFO.L3_HIT_M.HITM_OTHER_COREevent=0xb7,period=100003,umask=1,offcore_rsp=0x100004012000ocr.all_pf_rfo.l3_hit_m.hit_other_core_fwdcacheOCR.ALL_PF_RFO.L3_HIT_M.HIT_OTHER_CORE_FWD  OCR.ALL_PF_RFO.L3_HIT_M.HIT_OTHER_CORE_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x80004012000ocr.all_pf_rfo.l3_hit_m.hit_other_core_no_fwdcacheOCR.ALL_PF_RFO.L3_HIT_M.HIT_OTHER_CORE_NO_FWD  OCR.ALL_PF_RFO.L3_HIT_M.HIT_OTHER_CORE_NO_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x40004012000ocr.all_pf_rfo.l3_hit_m.no_snoop_neededcacheOCR.ALL_PF_RFO.L3_HIT_M.NO_SNOOP_NEEDED  OCR.ALL_PF_RFO.L3_HIT_M.NO_SNOOP_NEEDEDevent=0xb7,period=100003,umask=1,offcore_rsp=0x10004012000ocr.all_pf_rfo.l3_hit_m.snoop_misscacheOCR.ALL_PF_RFO.L3_HIT_M.SNOOP_MISSevent=0xb7,period=100003,umask=1,offcore_rsp=0x20004012000ocr.all_pf_rfo.l3_hit_m.snoop_nonecacheOCR.ALL_PF_RFO.L3_HIT_M.SNOOP_NONEevent=0xb7,period=100003,umask=1,offcore_rsp=0x8004012000ocr.all_pf_rfo.l3_hit_s.any_snoopcacheOCR.ALL_PF_RFO.L3_HIT_S.ANY_SNOOP  
OCR.ALL_PF_RFO.L3_HIT_S.ANY_SNOOPevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F8010012000ocr.all_pf_rfo.l3_hit_s.hitm_other_corecacheOCR.ALL_PF_RFO.L3_HIT_S.HITM_OTHER_CORE  OCR.ALL_PF_RFO.L3_HIT_S.HITM_OTHER_COREevent=0xb7,period=100003,umask=1,offcore_rsp=0x100010012000ocr.all_pf_rfo.l3_hit_s.hit_other_core_fwdcacheOCR.ALL_PF_RFO.L3_HIT_S.HIT_OTHER_CORE_FWD  OCR.ALL_PF_RFO.L3_HIT_S.HIT_OTHER_CORE_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x80010012000ocr.all_pf_rfo.l3_hit_s.hit_other_core_no_fwdcacheOCR.ALL_PF_RFO.L3_HIT_S.HIT_OTHER_CORE_NO_FWD  OCR.ALL_PF_RFO.L3_HIT_S.HIT_OTHER_CORE_NO_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x40010012000ocr.all_pf_rfo.l3_hit_s.no_snoop_neededcacheOCR.ALL_PF_RFO.L3_HIT_S.NO_SNOOP_NEEDED  OCR.ALL_PF_RFO.L3_HIT_S.NO_SNOOP_NEEDEDevent=0xb7,period=100003,umask=1,offcore_rsp=0x10010012000ocr.all_pf_rfo.l3_hit_s.snoop_misscacheOCR.ALL_PF_RFO.L3_HIT_S.SNOOP_MISSevent=0xb7,period=100003,umask=1,offcore_rsp=0x20010012000ocr.all_pf_rfo.l3_hit_s.snoop_nonecacheOCR.ALL_PF_RFO.L3_HIT_S.SNOOP_NONEevent=0xb7,period=100003,umask=1,offcore_rsp=0x8010012000ocr.all_reads.l3_hit.any_snoopcacheOCR.ALL_READS.L3_HIT.ANY_SNOOP OCR.ALL_READS.L3_HIT.ANY_SNOOP OCR.ALL_READS.L3_HIT.ANY_SNOOPevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F803C07F700ocr.all_reads.l3_hit.hitm_other_corecacheOCR.ALL_READS.L3_HIT.HITM_OTHER_CORE OCR.ALL_READS.L3_HIT.HITM_OTHER_CORE OCR.ALL_READS.L3_HIT.HITM_OTHER_COREevent=0xb7,period=100003,umask=1,offcore_rsp=0x10003C07F700ocr.all_reads.l3_hit.hit_other_core_fwdcacheOCR.ALL_READS.L3_HIT.HIT_OTHER_CORE_FWD OCR.ALL_READS.L3_HIT.HIT_OTHER_CORE_FWD OCR.ALL_READS.L3_HIT.HIT_OTHER_CORE_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x8003C07F700ocr.all_reads.l3_hit.hit_other_core_no_fwdcacheOCR.ALL_READS.L3_HIT.HIT_OTHER_CORE_NO_FWD OCR.ALL_READS.L3_HIT.HIT_OTHER_CORE_NO_FWD OCR.ALL_READS.L3_HIT.HIT_OTHER_CORE_NO_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x4003C07F700ocr.all_reads.l3_hit.no_snoop_neededcacheOCR.ALL_READS.L3_HIT.NO_SNOOP_NEEDED OCR.ALL_READS.L3_HIT.NO_SNOOP_NEEDED OCR.ALL_READS.L3_HIT.NO_SNOOP_NEEDEDevent=0xb7,period=100003,umask=1,offcore_rsp=0x1003C07F700ocr.all_reads.l3_hit.snoop_hit_with_fwdcacheOCR.ALL_READS.L3_HIT.SNOOP_HIT_WITH_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x8007C07F700ocr.all_reads.l3_hit.snoop_misscacheOCR.ALL_READS.L3_HIT.SNOOP_MISS OCR.ALL_READS.L3_HIT.SNOOP_MISSevent=0xb7,period=100003,umask=1,offcore_rsp=0x2003C07F700ocr.all_reads.l3_hit.snoop_nonecacheOCR.ALL_READS.L3_HIT.SNOOP_NONE OCR.ALL_READS.L3_HIT.SNOOP_NONEevent=0xb7,period=100003,umask=1,offcore_rsp=0x803C07F700ocr.all_reads.l3_hit_e.any_snoopcacheOCR.ALL_READS.L3_HIT_E.ANY_SNOOP  OCR.ALL_READS.L3_HIT_E.ANY_SNOOPevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F800807F700ocr.all_reads.l3_hit_e.hitm_other_corecacheOCR.ALL_READS.L3_HIT_E.HITM_OTHER_CORE  OCR.ALL_READS.L3_HIT_E.HITM_OTHER_COREevent=0xb7,period=100003,umask=1,offcore_rsp=0x10000807F700ocr.all_reads.l3_hit_e.hit_other_core_fwdcacheOCR.ALL_READS.L3_HIT_E.HIT_OTHER_CORE_FWD  OCR.ALL_READS.L3_HIT_E.HIT_OTHER_CORE_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x8000807F700ocr.all_reads.l3_hit_e.hit_other_core_no_fwdcacheOCR.ALL_READS.L3_HIT_E.HIT_OTHER_CORE_NO_FWD  OCR.ALL_READS.L3_HIT_E.HIT_OTHER_CORE_NO_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x4000807F700ocr.all_reads.l3_hit_e.no_snoop_neededcacheOCR.ALL_READS.L3_HIT_E.NO_SNOOP_NEEDED  
OCR.ALL_READS.L3_HIT_E.NO_SNOOP_NEEDEDevent=0xb7,period=100003,umask=1,offcore_rsp=0x1000807F700ocr.all_reads.l3_hit_e.snoop_misscacheOCR.ALL_READS.L3_HIT_E.SNOOP_MISSevent=0xb7,period=100003,umask=1,offcore_rsp=0x2000807F700ocr.all_reads.l3_hit_e.snoop_nonecacheOCR.ALL_READS.L3_HIT_E.SNOOP_NONEevent=0xb7,period=100003,umask=1,offcore_rsp=0x800807F700ocr.all_reads.l3_hit_f.any_snoopcacheOCR.ALL_READS.L3_HIT_F.ANY_SNOOP  OCR.ALL_READS.L3_HIT_F.ANY_SNOOPevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F802007F700ocr.all_reads.l3_hit_f.hitm_other_corecacheOCR.ALL_READS.L3_HIT_F.HITM_OTHER_CORE  OCR.ALL_READS.L3_HIT_F.HITM_OTHER_COREevent=0xb7,period=100003,umask=1,offcore_rsp=0x10002007F700ocr.all_reads.l3_hit_f.hit_other_core_fwdcacheOCR.ALL_READS.L3_HIT_F.HIT_OTHER_CORE_FWD  OCR.ALL_READS.L3_HIT_F.HIT_OTHER_CORE_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x8002007F700ocr.all_reads.l3_hit_f.hit_other_core_no_fwdcacheOCR.ALL_READS.L3_HIT_F.HIT_OTHER_CORE_NO_FWD  OCR.ALL_READS.L3_HIT_F.HIT_OTHER_CORE_NO_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x4002007F700ocr.all_reads.l3_hit_f.no_snoop_neededcacheOCR.ALL_READS.L3_HIT_F.NO_SNOOP_NEEDED  OCR.ALL_READS.L3_HIT_F.NO_SNOOP_NEEDEDevent=0xb7,period=100003,umask=1,offcore_rsp=0x1002007F700ocr.all_reads.l3_hit_f.snoop_misscacheOCR.ALL_READS.L3_HIT_F.SNOOP_MISSevent=0xb7,period=100003,umask=1,offcore_rsp=0x2002007F700ocr.all_reads.l3_hit_f.snoop_nonecacheOCR.ALL_READS.L3_HIT_F.SNOOP_NONEevent=0xb7,period=100003,umask=1,offcore_rsp=0x802007F700ocr.all_reads.l3_hit_m.any_snoopcacheOCR.ALL_READS.L3_HIT_M.ANY_SNOOP  OCR.ALL_READS.L3_HIT_M.ANY_SNOOPevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F800407F700ocr.all_reads.l3_hit_m.hitm_other_corecacheOCR.ALL_READS.L3_HIT_M.HITM_OTHER_CORE  OCR.ALL_READS.L3_HIT_M.HITM_OTHER_COREevent=0xb7,period=100003,umask=1,offcore_rsp=0x10000407F700ocr.all_reads.l3_hit_m.hit_other_core_fwdcacheOCR.ALL_READS.L3_HIT_M.HIT_OTHER_CORE_FWD  OCR.ALL_READS.L3_HIT_M.HIT_OTHER_CORE_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x8000407F700ocr.all_reads.l3_hit_m.hit_other_core_no_fwdcacheOCR.ALL_READS.L3_HIT_M.HIT_OTHER_CORE_NO_FWD  OCR.ALL_READS.L3_HIT_M.HIT_OTHER_CORE_NO_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x4000407F700ocr.all_reads.l3_hit_m.no_snoop_neededcacheOCR.ALL_READS.L3_HIT_M.NO_SNOOP_NEEDED  OCR.ALL_READS.L3_HIT_M.NO_SNOOP_NEEDEDevent=0xb7,period=100003,umask=1,offcore_rsp=0x1000407F700ocr.all_reads.l3_hit_m.snoop_misscacheOCR.ALL_READS.L3_HIT_M.SNOOP_MISSevent=0xb7,period=100003,umask=1,offcore_rsp=0x2000407F700ocr.all_reads.l3_hit_m.snoop_nonecacheOCR.ALL_READS.L3_HIT_M.SNOOP_NONEevent=0xb7,period=100003,umask=1,offcore_rsp=0x800407F700ocr.all_reads.l3_hit_s.any_snoopcacheOCR.ALL_READS.L3_HIT_S.ANY_SNOOP  OCR.ALL_READS.L3_HIT_S.ANY_SNOOPevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F801007F700ocr.all_reads.l3_hit_s.hitm_other_corecacheOCR.ALL_READS.L3_HIT_S.HITM_OTHER_CORE  OCR.ALL_READS.L3_HIT_S.HITM_OTHER_COREevent=0xb7,period=100003,umask=1,offcore_rsp=0x10001007F700ocr.all_reads.l3_hit_s.hit_other_core_fwdcacheOCR.ALL_READS.L3_HIT_S.HIT_OTHER_CORE_FWD  OCR.ALL_READS.L3_HIT_S.HIT_OTHER_CORE_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x8001007F700ocr.all_reads.l3_hit_s.hit_other_core_no_fwdcacheOCR.ALL_READS.L3_HIT_S.HIT_OTHER_CORE_NO_FWD  OCR.ALL_READS.L3_HIT_S.HIT_OTHER_CORE_NO_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x4001007F700ocr.all_reads.l3_hit_s.no_snoop_neededcacheOCR.ALL_READS.L3_HIT_S.NO_SNOOP_NEEDED  
OCR.ALL_READS.L3_HIT_S.NO_SNOOP_NEEDEDevent=0xb7,period=100003,umask=1,offcore_rsp=0x1001007F700ocr.all_reads.l3_hit_s.snoop_misscacheOCR.ALL_READS.L3_HIT_S.SNOOP_MISSevent=0xb7,period=100003,umask=1,offcore_rsp=0x2001007F700ocr.all_reads.l3_hit_s.snoop_nonecacheOCR.ALL_READS.L3_HIT_S.SNOOP_NONEevent=0xb7,period=100003,umask=1,offcore_rsp=0x801007F700ocr.all_rfo.l3_hit.any_snoopcacheOCR.ALL_RFO.L3_HIT.ANY_SNOOP OCR.ALL_RFO.L3_HIT.ANY_SNOOP OCR.ALL_RFO.L3_HIT.ANY_SNOOPevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F803C012200ocr.all_rfo.l3_hit.hitm_other_corecacheOCR.ALL_RFO.L3_HIT.HITM_OTHER_CORE OCR.ALL_RFO.L3_HIT.HITM_OTHER_CORE OCR.ALL_RFO.L3_HIT.HITM_OTHER_COREevent=0xb7,period=100003,umask=1,offcore_rsp=0x10003C012200ocr.all_rfo.l3_hit.hit_other_core_fwdcacheOCR.ALL_RFO.L3_HIT.HIT_OTHER_CORE_FWD OCR.ALL_RFO.L3_HIT.HIT_OTHER_CORE_FWD OCR.ALL_RFO.L3_HIT.HIT_OTHER_CORE_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x8003C012200ocr.all_rfo.l3_hit.hit_other_core_no_fwdcacheOCR.ALL_RFO.L3_HIT.HIT_OTHER_CORE_NO_FWD OCR.ALL_RFO.L3_HIT.HIT_OTHER_CORE_NO_FWD OCR.ALL_RFO.L3_HIT.HIT_OTHER_CORE_NO_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x4003C012200ocr.all_rfo.l3_hit.no_snoop_neededcacheOCR.ALL_RFO.L3_HIT.NO_SNOOP_NEEDED OCR.ALL_RFO.L3_HIT.NO_SNOOP_NEEDED OCR.ALL_RFO.L3_HIT.NO_SNOOP_NEEDEDevent=0xb7,period=100003,umask=1,offcore_rsp=0x1003C012200ocr.all_rfo.l3_hit.snoop_hit_with_fwdcacheOCR.ALL_RFO.L3_HIT.SNOOP_HIT_WITH_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x8007C012200ocr.all_rfo.l3_hit.snoop_misscacheOCR.ALL_RFO.L3_HIT.SNOOP_MISS OCR.ALL_RFO.L3_HIT.SNOOP_MISSevent=0xb7,period=100003,umask=1,offcore_rsp=0x2003C012200ocr.all_rfo.l3_hit.snoop_nonecacheOCR.ALL_RFO.L3_HIT.SNOOP_NONE OCR.ALL_RFO.L3_HIT.SNOOP_NONEevent=0xb7,period=100003,umask=1,offcore_rsp=0x803C012200ocr.all_rfo.l3_hit_e.any_snoopcacheOCR.ALL_RFO.L3_HIT_E.ANY_SNOOP  OCR.ALL_RFO.L3_HIT_E.ANY_SNOOPevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F8008012200ocr.all_rfo.l3_hit_e.hitm_other_corecacheOCR.ALL_RFO.L3_HIT_E.HITM_OTHER_CORE  OCR.ALL_RFO.L3_HIT_E.HITM_OTHER_COREevent=0xb7,period=100003,umask=1,offcore_rsp=0x100008012200ocr.all_rfo.l3_hit_e.hit_other_core_fwdcacheOCR.ALL_RFO.L3_HIT_E.HIT_OTHER_CORE_FWD  OCR.ALL_RFO.L3_HIT_E.HIT_OTHER_CORE_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x80008012200ocr.all_rfo.l3_hit_e.hit_other_core_no_fwdcacheOCR.ALL_RFO.L3_HIT_E.HIT_OTHER_CORE_NO_FWD  OCR.ALL_RFO.L3_HIT_E.HIT_OTHER_CORE_NO_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x40008012200ocr.all_rfo.l3_hit_e.no_snoop_neededcacheOCR.ALL_RFO.L3_HIT_E.NO_SNOOP_NEEDED  OCR.ALL_RFO.L3_HIT_E.NO_SNOOP_NEEDEDevent=0xb7,period=100003,umask=1,offcore_rsp=0x10008012200ocr.all_rfo.l3_hit_e.snoop_misscacheOCR.ALL_RFO.L3_HIT_E.SNOOP_MISSevent=0xb7,period=100003,umask=1,offcore_rsp=0x20008012200ocr.all_rfo.l3_hit_e.snoop_nonecacheOCR.ALL_RFO.L3_HIT_E.SNOOP_NONEevent=0xb7,period=100003,umask=1,offcore_rsp=0x8008012200ocr.all_rfo.l3_hit_f.any_snoopcacheOCR.ALL_RFO.L3_HIT_F.ANY_SNOOP  OCR.ALL_RFO.L3_HIT_F.ANY_SNOOPevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F8020012200ocr.all_rfo.l3_hit_f.hitm_other_corecacheOCR.ALL_RFO.L3_HIT_F.HITM_OTHER_CORE  OCR.ALL_RFO.L3_HIT_F.HITM_OTHER_COREevent=0xb7,period=100003,umask=1,offcore_rsp=0x100020012200ocr.all_rfo.l3_hit_f.hit_other_core_fwdcacheOCR.ALL_RFO.L3_HIT_F.HIT_OTHER_CORE_FWD  OCR.ALL_RFO.L3_HIT_F.HIT_OTHER_CORE_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x80020012200ocr.all_rfo.l3_hit_f.hit_other_core_no_fwdcacheOCR.ALL_RFO.L3_HIT_F.HIT_OTHER_CORE_NO_FWD  
OCR.ALL_RFO.L3_HIT_F.HIT_OTHER_CORE_NO_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x40020012200ocr.all_rfo.l3_hit_f.no_snoop_neededcacheOCR.ALL_RFO.L3_HIT_F.NO_SNOOP_NEEDED  OCR.ALL_RFO.L3_HIT_F.NO_SNOOP_NEEDEDevent=0xb7,period=100003,umask=1,offcore_rsp=0x10020012200ocr.all_rfo.l3_hit_f.snoop_misscacheOCR.ALL_RFO.L3_HIT_F.SNOOP_MISSevent=0xb7,period=100003,umask=1,offcore_rsp=0x20020012200ocr.all_rfo.l3_hit_f.snoop_nonecacheOCR.ALL_RFO.L3_HIT_F.SNOOP_NONEevent=0xb7,period=100003,umask=1,offcore_rsp=0x8020012200ocr.all_rfo.l3_hit_m.any_snoopcacheOCR.ALL_RFO.L3_HIT_M.ANY_SNOOP  OCR.ALL_RFO.L3_HIT_M.ANY_SNOOPevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F8004012200ocr.all_rfo.l3_hit_m.hitm_other_corecacheOCR.ALL_RFO.L3_HIT_M.HITM_OTHER_CORE  OCR.ALL_RFO.L3_HIT_M.HITM_OTHER_COREevent=0xb7,period=100003,umask=1,offcore_rsp=0x100004012200ocr.all_rfo.l3_hit_m.hit_other_core_fwdcacheOCR.ALL_RFO.L3_HIT_M.HIT_OTHER_CORE_FWD  OCR.ALL_RFO.L3_HIT_M.HIT_OTHER_CORE_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x80004012200ocr.all_rfo.l3_hit_m.hit_other_core_no_fwdcacheOCR.ALL_RFO.L3_HIT_M.HIT_OTHER_CORE_NO_FWD  OCR.ALL_RFO.L3_HIT_M.HIT_OTHER_CORE_NO_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x40004012200ocr.all_rfo.l3_hit_m.no_snoop_neededcacheOCR.ALL_RFO.L3_HIT_M.NO_SNOOP_NEEDED  OCR.ALL_RFO.L3_HIT_M.NO_SNOOP_NEEDEDevent=0xb7,period=100003,umask=1,offcore_rsp=0x10004012200ocr.all_rfo.l3_hit_m.snoop_misscacheOCR.ALL_RFO.L3_HIT_M.SNOOP_MISSevent=0xb7,period=100003,umask=1,offcore_rsp=0x20004012200ocr.all_rfo.l3_hit_m.snoop_nonecacheOCR.ALL_RFO.L3_HIT_M.SNOOP_NONEevent=0xb7,period=100003,umask=1,offcore_rsp=0x8004012200ocr.all_rfo.l3_hit_s.any_snoopcacheOCR.ALL_RFO.L3_HIT_S.ANY_SNOOP  OCR.ALL_RFO.L3_HIT_S.ANY_SNOOPevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F8010012200ocr.all_rfo.l3_hit_s.hitm_other_corecacheOCR.ALL_RFO.L3_HIT_S.HITM_OTHER_CORE  OCR.ALL_RFO.L3_HIT_S.HITM_OTHER_COREevent=0xb7,period=100003,umask=1,offcore_rsp=0x100010012200ocr.all_rfo.l3_hit_s.hit_other_core_fwdcacheOCR.ALL_RFO.L3_HIT_S.HIT_OTHER_CORE_FWD  OCR.ALL_RFO.L3_HIT_S.HIT_OTHER_CORE_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x80010012200ocr.all_rfo.l3_hit_s.hit_other_core_no_fwdcacheOCR.ALL_RFO.L3_HIT_S.HIT_OTHER_CORE_NO_FWD  OCR.ALL_RFO.L3_HIT_S.HIT_OTHER_CORE_NO_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x40010012200ocr.all_rfo.l3_hit_s.no_snoop_neededcacheOCR.ALL_RFO.L3_HIT_S.NO_SNOOP_NEEDED  OCR.ALL_RFO.L3_HIT_S.NO_SNOOP_NEEDEDevent=0xb7,period=100003,umask=1,offcore_rsp=0x10010012200ocr.all_rfo.l3_hit_s.snoop_misscacheOCR.ALL_RFO.L3_HIT_S.SNOOP_MISSevent=0xb7,period=100003,umask=1,offcore_rsp=0x20010012200ocr.all_rfo.l3_hit_s.snoop_nonecacheOCR.ALL_RFO.L3_HIT_S.SNOOP_NONEevent=0xb7,period=100003,umask=1,offcore_rsp=0x8010012200ocr.demand_code_rd.l3_hit.any_snoopcacheCounts all demand code reads OCR.DEMAND_CODE_RD.L3_HIT.ANY_SNOOP OCR.DEMAND_CODE_RD.L3_HIT.ANY_SNOOPevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F803C000400ocr.demand_code_rd.l3_hit.hitm_other_corecacheCounts all demand code reads OCR.DEMAND_CODE_RD.L3_HIT.HITM_OTHER_CORE OCR.DEMAND_CODE_RD.L3_HIT.HITM_OTHER_COREevent=0xb7,period=100003,umask=1,offcore_rsp=0x10003C000400ocr.demand_code_rd.l3_hit.hit_other_core_fwdcacheCounts all demand code reads OCR.DEMAND_CODE_RD.L3_HIT.HIT_OTHER_CORE_FWD OCR.DEMAND_CODE_RD.L3_HIT.HIT_OTHER_CORE_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x8003C000400ocr.demand_code_rd.l3_hit.hit_other_core_no_fwdcacheCounts all demand code reads OCR.DEMAND_CODE_RD.L3_HIT.HIT_OTHER_CORE_NO_FWD 
OCR.DEMAND_CODE_RD.L3_HIT.HIT_OTHER_CORE_NO_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x4003C000400ocr.demand_code_rd.l3_hit.no_snoop_neededcacheCounts all demand code reads OCR.DEMAND_CODE_RD.L3_HIT.NO_SNOOP_NEEDED OCR.DEMAND_CODE_RD.L3_HIT.NO_SNOOP_NEEDEDevent=0xb7,period=100003,umask=1,offcore_rsp=0x1003C000400ocr.demand_code_rd.l3_hit.snoop_hit_with_fwdcacheCounts all demand code readsevent=0xb7,period=100003,umask=1,offcore_rsp=0x8007C000400ocr.demand_code_rd.l3_hit.snoop_misscacheCounts all demand code reads OCR.DEMAND_CODE_RD.L3_HIT.SNOOP_MISSevent=0xb7,period=100003,umask=1,offcore_rsp=0x2003C000400ocr.demand_code_rd.l3_hit.snoop_nonecacheCounts all demand code reads OCR.DEMAND_CODE_RD.L3_HIT.SNOOP_NONEevent=0xb7,period=100003,umask=1,offcore_rsp=0x803C000400ocr.demand_code_rd.l3_hit_e.any_snoopcacheCounts all demand code reads  OCR.DEMAND_CODE_RD.L3_HIT_E.ANY_SNOOPevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F8008000400ocr.demand_code_rd.l3_hit_e.hitm_other_corecacheCounts all demand code reads  OCR.DEMAND_CODE_RD.L3_HIT_E.HITM_OTHER_COREevent=0xb7,period=100003,umask=1,offcore_rsp=0x100008000400ocr.demand_code_rd.l3_hit_e.hit_other_core_fwdcacheCounts all demand code reads  OCR.DEMAND_CODE_RD.L3_HIT_E.HIT_OTHER_CORE_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x80008000400ocr.demand_code_rd.l3_hit_e.hit_other_core_no_fwdcacheCounts all demand code reads  OCR.DEMAND_CODE_RD.L3_HIT_E.HIT_OTHER_CORE_NO_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x40008000400ocr.demand_code_rd.l3_hit_e.no_snoop_neededcacheCounts all demand code reads  OCR.DEMAND_CODE_RD.L3_HIT_E.NO_SNOOP_NEEDEDevent=0xb7,period=100003,umask=1,offcore_rsp=0x10008000400ocr.demand_code_rd.l3_hit_e.snoop_misscacheCounts all demand code readsevent=0xb7,period=100003,umask=1,offcore_rsp=0x20008000400ocr.demand_code_rd.l3_hit_e.snoop_nonecacheCounts all demand code readsevent=0xb7,period=100003,umask=1,offcore_rsp=0x8008000400ocr.demand_code_rd.l3_hit_f.any_snoopcacheCounts all demand code reads  OCR.DEMAND_CODE_RD.L3_HIT_F.ANY_SNOOPevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F8020000400ocr.demand_code_rd.l3_hit_f.hitm_other_corecacheCounts all demand code reads  OCR.DEMAND_CODE_RD.L3_HIT_F.HITM_OTHER_COREevent=0xb7,period=100003,umask=1,offcore_rsp=0x100020000400ocr.demand_code_rd.l3_hit_f.hit_other_core_fwdcacheCounts all demand code reads  OCR.DEMAND_CODE_RD.L3_HIT_F.HIT_OTHER_CORE_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x80020000400ocr.demand_code_rd.l3_hit_f.hit_other_core_no_fwdcacheCounts all demand code reads  OCR.DEMAND_CODE_RD.L3_HIT_F.HIT_OTHER_CORE_NO_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x40020000400ocr.demand_code_rd.l3_hit_f.no_snoop_neededcacheCounts all demand code reads  OCR.DEMAND_CODE_RD.L3_HIT_F.NO_SNOOP_NEEDEDevent=0xb7,period=100003,umask=1,offcore_rsp=0x10020000400ocr.demand_code_rd.l3_hit_f.snoop_misscacheCounts all demand code readsevent=0xb7,period=100003,umask=1,offcore_rsp=0x20020000400ocr.demand_code_rd.l3_hit_f.snoop_nonecacheCounts all demand code readsevent=0xb7,period=100003,umask=1,offcore_rsp=0x8020000400ocr.demand_code_rd.l3_hit_m.any_snoopcacheCounts all demand code reads  OCR.DEMAND_CODE_RD.L3_HIT_M.ANY_SNOOPevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F8004000400ocr.demand_code_rd.l3_hit_m.hitm_other_corecacheCounts all demand code reads  OCR.DEMAND_CODE_RD.L3_HIT_M.HITM_OTHER_COREevent=0xb7,period=100003,umask=1,offcore_rsp=0x100004000400ocr.demand_code_rd.l3_hit_m.hit_other_core_fwdcacheCounts all demand code reads  
OCR.DEMAND_DATA_RD.* -- Counts demand data reads (e.g. .L3_HIT.ANY_SNOOP: offcore_rsp=0x3F803C0001).
OCR.DEMAND_RFO.* -- Counts all demand data writes (RFOs) (e.g. .L3_HIT.ANY_SNOOP: offcore_rsp=0x3F803C0002).
OCR.OTHER.* -- Counts any other requests (e.g. .L3_HIT.ANY_SNOOP: offcore_rsp=0x3F803C8000).
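The embedded records themselves are simple key=value strings, so when working with a dump like this one they can be pulled apart with a regular expression. A small sketch (parse_encoding is a hypothetical helper, not part of the perf module's API):

    import re

    # Split one embedded event-encoding string into integer fields.
    def parse_encoding(enc: str) -> dict:
        return {key: int(val, 0)   # int(x, 0) honors the 0x prefix
                for key, val in re.findall(r"(\w+)=(0x[0-9A-Fa-f]+|\d+)", enc)}

    fields = parse_encoding("event=0xb7,period=100003,umask=1,offcore_rsp=0x3F803C0004")
    print(hex(fields["offcore_rsp"]))   # -> 0x3f803c0004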
OCR.PF_L1D_AND_SW.* -- Counts L1 data cache hardware prefetch requests and software prefetch requests (e.g. .L3_HIT.ANY_SNOOP: offcore_rsp=0x3F803C0400).
OCR.PF_L2_DATA_RD.* -- Counts prefetch (that bring data to L2) data reads (e.g. .L3_HIT.ANY_SNOOP: offcore_rsp=0x3F803C0010).
OCR.PF_L2_RFO.* -- Counts all prefetch (that bring data to L2) RFOs (e.g. .L3_HIT.ANY_SNOOP: offcore_rsp=0x3F803C0020).
OCR.PF_L3_DATA_RD.* -- Counts all prefetch (that bring data to LLC only) data reads (e.g. .L3_HIT.ANY_SNOOP: offcore_rsp=0x3F803C0080).
OCR.PF_L3_RFO.* -- Counts all prefetch (that bring data to LLC only) RFOs (e.g. .L3_HIT.ANY_SNOOP: offcore_rsp=0x3F803C0100).

[Plain offcore-request events follow: perf name (topic): encoding -- summary. Detailed description.]

offcore_requests.all_data_rd (cache): event=0xb0,period=100003,umask=0x8 -- Demand and prefetch data reads. Counts the demand and prefetch data reads. All Core Data Reads include cacheable 'Demands' and L2 prefetchers (not L3 prefetchers); counting also covers reads due to page walks resulting from any request type.
offcore_requests.all_requests (cache): event=0xb0,period=100003,umask=0x80 -- Any memory transaction that reached the SQ. Counts memory transactions that reached the super queue, including requests initiated by the core, all L3 prefetches, page walks, etc.
offcore_requests.demand_code_rd (cache): event=0xb0,period=100003,umask=0x2 -- Cacheable and non-cacheable code read requests. Counts both cacheable and non-cacheable code read requests.
offcore_requests.demand_data_rd (cache): event=0xb0,period=100003,umask=0x1 -- Demand Data Read requests sent to uncore. Counts the Demand Data Read requests sent to uncore; use it in conjunction with OFFCORE_REQUESTS_OUTSTANDING to determine average latency in the uncore.
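To actually collect these counters one would typically hand the encodings to perf stat. A sketch of driving that from Python (assumes Linux perf is installed and the CPU's core PMU accepts the offcore_rsp format term; the name= term only labels the output):

    import subprocess

    # Measure demand data reads and their L3-hit/any-snoop OCR breakdown for a
    # workload, using the raw encodings listed above.
    events = ",".join([
        "cpu/event=0xb0,umask=0x1,name=demand_data_rd/",
        "cpu/event=0xb7,umask=0x1,offcore_rsp=0x3F803C0001,name=ddr_l3_hit_any_snoop/",
    ])
    subprocess.run(["perf", "stat", "-e", events, "--", "sleep", "1"], check=True)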
offcore_requests.demand_rfo (cache): event=0xb0,period=100003,umask=0x4 -- Demand RFO requests including regular RFOs, locks, ItoM. Counts the demand RFO (read for ownership) requests, including regular RFOs, locks and ItoM.
offcore_requests_buffer.sq_full (cache): event=0xb2,period=2000003,umask=0x1 -- Offcore requests buffer cannot take more entries for this thread core. Counts the number of cases when the offcore requests buffer cannot take more entries for the core. This can happen when the super queue does not contain eligible entries or when the L1D writeback-pending FIFO is full (the FIFO has six entries).
offcore_requests_outstanding.all_data_rd (cache): event=0x60,period=2000003,umask=0x8 -- Offcore outstanding cacheable Core Data Read transactions in the SuperQueue (SQ), the queue to uncore. Counts the number of such transactions in the super queue every cycle; a transaction is in the offcore-outstanding state from the L2 miss until completion is sent to the requestor (SQ de-allocation). See the corresponding umask under OFFCORE_REQUESTS.
offcore_requests_outstanding.cycles_with_data_rd (cache): event=0x60,cmask=1,period=2000003,umask=0x8 -- Cycles when offcore outstanding cacheable Core Data Read transactions are present in the SQ.
offcore_requests_outstanding.cycles_with_demand_code_rd (cache): event=0x60,cmask=1,period=2000003,umask=0x2 -- Cycles with offcore outstanding Code Read transactions in the SQ.
offcore_requests_outstanding.cycles_with_demand_data_rd (cache): event=0x60,cmask=1,period=2000003,umask=0x1 -- Cycles when offcore outstanding Demand Data Read transactions are present in the SQ.
offcore_requests_outstanding.cycles_with_demand_rfo (cache): event=0x60,cmask=1,period=2000003,umask=0x4 -- Cycles with offcore outstanding demand RFO transactions in the SQ. (The cycles_with_* variants apply cmask=1 to the matching outstanding count, i.e. they count cycles in which at least one such transaction is in flight.)
offcore_requests_outstanding.demand_code_rd (cache): event=0x60,period=2000003,umask=0x2 -- Offcore outstanding Code Read transactions in the SQ, every cycle. Counts the number of offcore outstanding Code Read transactions in the super queue every cycle, from the L2 miss until completion is sent to the requestor (SQ de-allocation). See the corresponding umask under OFFCORE_REQUESTS.
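The note on offcore_requests.demand_data_rd above gives the standard latency recipe: the OUTSTANDING event accumulates in-flight transactions once per cycle, so dividing it by the request count yields the mean number of cycles each request stayed outstanding. A worked sketch with placeholder counter values (in practice read them from perf output):

    # Average uncore latency of demand data reads.
    outstanding = 1_250_000  # offcore_requests_outstanding.demand_data_rd (event=0x60,umask=0x1)
    requests = 25_000        # offcore_requests.demand_data_rd (event=0xb0,umask=0x1)

    avg_latency_cycles = outstanding / requests
    print(f"average demand data read latency: {avg_latency_cycles:.1f} cycles")  # 50.0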
offcore_requests_outstanding.demand_code_rd [cache]  event=0x60,period=2000003,umask=0x2
  Offcore outstanding Code Read transactions in the SuperQueue (SQ), queue
  to uncore, every cycle.
  Counts the number of offcore outstanding Code Read transactions in the
  super queue every cycle. The 'Offcore outstanding' state of the
  transaction lasts from the L2 miss until the transaction completion is
  sent to the requestor (SQ deallocation). See the corresponding umask under
  OFFCORE_REQUESTS.

offcore_requests_outstanding.demand_data_rd [cache]  event=0x60,period=2000003,umask=0x1
  Offcore outstanding Demand Data Read transactions in the uncore queue.
  Counts the number of offcore outstanding Demand Data Read transactions in
  the super queue (SQ) every cycle. A transaction is considered to be in the
  Offcore outstanding state between L2 miss and transaction completion sent
  to requestor. See the corresponding umask under OFFCORE_REQUESTS.
  Note: a prefetch promoted to Demand is counted from the promotion point.

offcore_requests_outstanding.demand_data_rd_ge_6 [cache]  event=0x60,cmask=6,period=2000003,umask=0x1
  Cycles with at least 6 offcore outstanding Demand Data Read transactions
  in the uncore queue.

offcore_requests_outstanding.demand_rfo [cache]  event=0x60,period=2000003,umask=0x4
  Offcore outstanding demand RFO transactions in the SuperQueue (SQ), queue
  to uncore, every cycle.
  Counts the number of offcore outstanding RFO (store) transactions in the
  super queue (SQ) every cycle. A transaction is considered to be in the
  Offcore outstanding state between L2 miss and transaction completion sent
  to requestor (SQ de-allocation). See the corresponding umask under
  OFFCORE_REQUESTS.
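Everything from here to the end of the section is a block of deprecated
offcore_response.* aliases, summarized in the tables below. The replacement
name is always mechanical (the same name upper-cased), so old event lists
can be migrated with a one-liner; a small sketch (the helper itself is
illustrative, the rule is read directly off the entries):

    def modern_name(deprecated: str) -> str:
        # Each deprecated alias below refers to the same-named OCR.* event,
        # with everything after the offcore_response. prefix upper-cased.
        prefix = "offcore_response."
        if not deprecated.startswith(prefix):
            raise ValueError(f"not a deprecated offcore_response event: {deprecated}")
        return "OCR." + deprecated[len(prefix):].upper()

    print(modern_name("offcore_response.all_rfo.l3_hit_m.snoop_miss"))
    # -> OCR.ALL_RFO.L3_HIT_M.SNOOP_MISS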
offcore_response.* — deprecated aliases [cache]

The remainder of the section is a uniform block of deprecated
offcore_response.* events. Every entry follows the same template: the event
offcore_response.<request>.<supplier>.<snoop> carries the description "This
event is deprecated. Refer to new event OCR.<REQUEST>.<SUPPLIER>.<SNOOP>"
(its own name upper-cased) and the encoding
event=0xb7,period=100003,umask=1, with an offcore_rsp value equal to the
request mask plus the supplier/snoop selector tabulated below.

Request masks:
  all_data_rd      0x491
  all_pf_data_rd   0x490
  all_pf_rfo       0x120
  all_reads        0x7F7
  all_rfo          0x122
  demand_code_rd   0x004
  demand_data_rd   0x001

Supplier/snoop selectors (SS is the supplier byte: l3_hit = 3C,
l3_hit_e = 08, l3_hit_f = 20, l3_hit_m = 04, l3_hit_s = 10,
pmm_hit_local_pmm = 40, supplier_none = 02):
  any_response            0x10000       (no supplier/snoop qualifier)
  any_snoop               0x3F80SS0000
  hitm_other_core         0x1000SS0000
  hit_other_core_fwd      0x0800SS0000
  hit_other_core_no_fwd   0x0400SS0000
  no_snoop_needed         0x0100SS0000
  snoop_miss              0x0200SS0000
  snoop_none              0x0080SS0000
  snoop_hit_with_fwd      0x08007C0000  (l3_hit only; supplier byte 7C, not 3C)
  snoop_not_needed        0x0100400000  (pmm_hit_local_pmm only)

For example, offcore_response.all_data_rd.l3_hit.any_snoop is encoded as
event=0xb7,period=100003,umask=1,offcore_rsp=0x3F803C0491
(0x3F803C0000 + 0x491) and is deprecated in favor of
OCR.ALL_DATA_RD.L3_HIT.ANY_SNOOP.

Each request class lists, in this order: any_response; the eight l3_hit
snoops (any_snoop, hitm_other_core, hit_other_core_fwd,
hit_other_core_no_fwd, no_snoop_needed, snoop_hit_with_fwd, snoop_miss,
snoop_none); the same snoops minus snoop_hit_with_fwd for each of l3_hit_e,
l3_hit_f, l3_hit_m and l3_hit_s; pmm_hit_local_pmm with any_snoop,
snoop_none and snoop_not_needed; and the seven supplier_none snoops. The
request classes appear in the order of the mask table above. The final
class, demand_data_rd, is truncated: its pmm_hit_local_pmm.snoop_none entry
breaks off mid-record at the end of this section.
Refer to new event OCR.DEMAND_DATA_RD.L3_HIT_F.SNOOP_MISSevent=0xb7,period=100003,umask=1,offcore_rsp=0x20020000110offcore_response.demand_data_rd.l3_hit_f.snoop_nonecacheThis event is deprecated. Refer to new event OCR.DEMAND_DATA_RD.L3_HIT_F.SNOOP_NONEevent=0xb7,period=100003,umask=1,offcore_rsp=0x8020000110offcore_response.demand_data_rd.l3_hit_m.any_snoopcacheThis event is deprecated. Refer to new event OCR.DEMAND_DATA_RD.L3_HIT_M.ANY_SNOOPevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F8004000110offcore_response.demand_data_rd.l3_hit_m.hitm_other_corecacheThis event is deprecated. Refer to new event OCR.DEMAND_DATA_RD.L3_HIT_M.HITM_OTHER_COREevent=0xb7,period=100003,umask=1,offcore_rsp=0x100004000110offcore_response.demand_data_rd.l3_hit_m.hit_other_core_fwdcacheThis event is deprecated. Refer to new event OCR.DEMAND_DATA_RD.L3_HIT_M.HIT_OTHER_CORE_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x80004000110offcore_response.demand_data_rd.l3_hit_m.hit_other_core_no_fwdcacheThis event is deprecated. Refer to new event OCR.DEMAND_DATA_RD.L3_HIT_M.HIT_OTHER_CORE_NO_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x40004000110offcore_response.demand_data_rd.l3_hit_m.no_snoop_neededcacheThis event is deprecated. Refer to new event OCR.DEMAND_DATA_RD.L3_HIT_M.NO_SNOOP_NEEDEDevent=0xb7,period=100003,umask=1,offcore_rsp=0x10004000110offcore_response.demand_data_rd.l3_hit_m.snoop_misscacheThis event is deprecated. Refer to new event OCR.DEMAND_DATA_RD.L3_HIT_M.SNOOP_MISSevent=0xb7,period=100003,umask=1,offcore_rsp=0x20004000110offcore_response.demand_data_rd.l3_hit_m.snoop_nonecacheThis event is deprecated. Refer to new event OCR.DEMAND_DATA_RD.L3_HIT_M.SNOOP_NONEevent=0xb7,period=100003,umask=1,offcore_rsp=0x8004000110offcore_response.demand_data_rd.l3_hit_s.any_snoopcacheThis event is deprecated. Refer to new event OCR.DEMAND_DATA_RD.L3_HIT_S.ANY_SNOOPevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F8010000110offcore_response.demand_data_rd.l3_hit_s.hitm_other_corecacheThis event is deprecated. Refer to new event OCR.DEMAND_DATA_RD.L3_HIT_S.HITM_OTHER_COREevent=0xb7,period=100003,umask=1,offcore_rsp=0x100010000110offcore_response.demand_data_rd.l3_hit_s.hit_other_core_fwdcacheThis event is deprecated. Refer to new event OCR.DEMAND_DATA_RD.L3_HIT_S.HIT_OTHER_CORE_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x80010000110offcore_response.demand_data_rd.l3_hit_s.hit_other_core_no_fwdcacheThis event is deprecated. Refer to new event OCR.DEMAND_DATA_RD.L3_HIT_S.HIT_OTHER_CORE_NO_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x40010000110offcore_response.demand_data_rd.l3_hit_s.no_snoop_neededcacheThis event is deprecated. Refer to new event OCR.DEMAND_DATA_RD.L3_HIT_S.NO_SNOOP_NEEDEDevent=0xb7,period=100003,umask=1,offcore_rsp=0x10010000110offcore_response.demand_data_rd.l3_hit_s.snoop_misscacheThis event is deprecated. Refer to new event OCR.DEMAND_DATA_RD.L3_HIT_S.SNOOP_MISSevent=0xb7,period=100003,umask=1,offcore_rsp=0x20010000110offcore_response.demand_data_rd.l3_hit_s.snoop_nonecacheThis event is deprecated. Refer to new event OCR.DEMAND_DATA_RD.L3_HIT_S.SNOOP_NONEevent=0xb7,period=100003,umask=1,offcore_rsp=0x8010000110offcore_response.demand_data_rd.pmm_hit_local_pmm.any_snoopcacheThis event is deprecated. Refer to new event OCR.DEMAND_DATA_RD.PMM_HIT_LOCAL_PMM.ANY_SNOOPevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F8040000110offcore_response.demand_data_rd.pmm_hit_local_pmm.snoop_nonecacheThis event is deprecated. 
Refer to new event OCR.DEMAND_DATA_RD.PMM_HIT_LOCAL_PMM.SNOOP_NONEevent=0xb7,period=100003,umask=1,offcore_rsp=0x8040000110offcore_response.demand_data_rd.pmm_hit_local_pmm.snoop_not_neededcacheThis event is deprecated. Refer to new event OCR.DEMAND_DATA_RD.PMM_HIT_LOCAL_PMM.SNOOP_NOT_NEEDEDevent=0xb7,period=100003,umask=1,offcore_rsp=0x10040000110offcore_response.demand_data_rd.supplier_none.any_snoopcacheThis event is deprecated. Refer to new event OCR.DEMAND_DATA_RD.SUPPLIER_NONE.ANY_SNOOPevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F8002000110offcore_response.demand_data_rd.supplier_none.hitm_other_corecacheThis event is deprecated. Refer to new event OCR.DEMAND_DATA_RD.SUPPLIER_NONE.HITM_OTHER_COREevent=0xb7,period=100003,umask=1,offcore_rsp=0x100002000110offcore_response.demand_data_rd.supplier_none.hit_other_core_fwdcacheThis event is deprecated. Refer to new event OCR.DEMAND_DATA_RD.SUPPLIER_NONE.HIT_OTHER_CORE_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x80002000110offcore_response.demand_data_rd.supplier_none.hit_other_core_no_fwdcacheThis event is deprecated. Refer to new event OCR.DEMAND_DATA_RD.SUPPLIER_NONE.HIT_OTHER_CORE_NO_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x40002000110offcore_response.demand_data_rd.supplier_none.no_snoop_neededcacheThis event is deprecated. Refer to new event OCR.DEMAND_DATA_RD.SUPPLIER_NONE.NO_SNOOP_NEEDEDevent=0xb7,period=100003,umask=1,offcore_rsp=0x10002000110offcore_response.demand_data_rd.supplier_none.snoop_misscacheThis event is deprecated. Refer to new event OCR.DEMAND_DATA_RD.SUPPLIER_NONE.SNOOP_MISSevent=0xb7,period=100003,umask=1,offcore_rsp=0x20002000110offcore_response.demand_data_rd.supplier_none.snoop_nonecacheThis event is deprecated. Refer to new event OCR.DEMAND_DATA_RD.SUPPLIER_NONE.SNOOP_NONEevent=0xb7,period=100003,umask=1,offcore_rsp=0x8002000110offcore_response.demand_rfo.any_responsecacheThis event is deprecated. Refer to new event OCR.DEMAND_RFO.ANY_RESPONSEevent=0xb7,period=100003,umask=1,offcore_rsp=0x1000210offcore_response.demand_rfo.l3_hit.any_snoopcacheThis event is deprecated. Refer to new event OCR.DEMAND_RFO.L3_HIT.ANY_SNOOPevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F803C000210offcore_response.demand_rfo.l3_hit.hitm_other_corecacheThis event is deprecated. Refer to new event OCR.DEMAND_RFO.L3_HIT.HITM_OTHER_COREevent=0xb7,period=100003,umask=1,offcore_rsp=0x10003C000210offcore_response.demand_rfo.l3_hit.hit_other_core_fwdcacheThis event is deprecated. Refer to new event OCR.DEMAND_RFO.L3_HIT.HIT_OTHER_CORE_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x8003C000210offcore_response.demand_rfo.l3_hit.hit_other_core_no_fwdcacheThis event is deprecated. Refer to new event OCR.DEMAND_RFO.L3_HIT.HIT_OTHER_CORE_NO_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x4003C000210offcore_response.demand_rfo.l3_hit.no_snoop_neededcacheThis event is deprecated. Refer to new event OCR.DEMAND_RFO.L3_HIT.NO_SNOOP_NEEDEDevent=0xb7,period=100003,umask=1,offcore_rsp=0x1003C000210offcore_response.demand_rfo.l3_hit.snoop_hit_with_fwdcacheThis event is deprecated. Refer to new event OCR.DEMAND_RFO.L3_HIT.SNOOP_HIT_WITH_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x8007C000210offcore_response.demand_rfo.l3_hit.snoop_misscacheThis event is deprecated. Refer to new event OCR.DEMAND_RFO.L3_HIT.SNOOP_MISSevent=0xb7,period=100003,umask=1,offcore_rsp=0x2003C000210offcore_response.demand_rfo.l3_hit.snoop_nonecacheThis event is deprecated. 
Refer to new event OCR.DEMAND_RFO.L3_HIT.SNOOP_NONEevent=0xb7,period=100003,umask=1,offcore_rsp=0x803C000210offcore_response.demand_rfo.l3_hit_e.any_snoopcacheThis event is deprecated. Refer to new event OCR.DEMAND_RFO.L3_HIT_E.ANY_SNOOPevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F8008000210offcore_response.demand_rfo.l3_hit_e.hitm_other_corecacheThis event is deprecated. Refer to new event OCR.DEMAND_RFO.L3_HIT_E.HITM_OTHER_COREevent=0xb7,period=100003,umask=1,offcore_rsp=0x100008000210offcore_response.demand_rfo.l3_hit_e.hit_other_core_fwdcacheThis event is deprecated. Refer to new event OCR.DEMAND_RFO.L3_HIT_E.HIT_OTHER_CORE_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x80008000210offcore_response.demand_rfo.l3_hit_e.hit_other_core_no_fwdcacheThis event is deprecated. Refer to new event OCR.DEMAND_RFO.L3_HIT_E.HIT_OTHER_CORE_NO_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x40008000210offcore_response.demand_rfo.l3_hit_e.no_snoop_neededcacheThis event is deprecated. Refer to new event OCR.DEMAND_RFO.L3_HIT_E.NO_SNOOP_NEEDEDevent=0xb7,period=100003,umask=1,offcore_rsp=0x10008000210offcore_response.demand_rfo.l3_hit_e.snoop_misscacheThis event is deprecated. Refer to new event OCR.DEMAND_RFO.L3_HIT_E.SNOOP_MISSevent=0xb7,period=100003,umask=1,offcore_rsp=0x20008000210offcore_response.demand_rfo.l3_hit_e.snoop_nonecacheThis event is deprecated. Refer to new event OCR.DEMAND_RFO.L3_HIT_E.SNOOP_NONEevent=0xb7,period=100003,umask=1,offcore_rsp=0x8008000210offcore_response.demand_rfo.l3_hit_f.any_snoopcacheThis event is deprecated. Refer to new event OCR.DEMAND_RFO.L3_HIT_F.ANY_SNOOPevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F8020000210offcore_response.demand_rfo.l3_hit_f.hitm_other_corecacheThis event is deprecated. Refer to new event OCR.DEMAND_RFO.L3_HIT_F.HITM_OTHER_COREevent=0xb7,period=100003,umask=1,offcore_rsp=0x100020000210offcore_response.demand_rfo.l3_hit_f.hit_other_core_fwdcacheThis event is deprecated. Refer to new event OCR.DEMAND_RFO.L3_HIT_F.HIT_OTHER_CORE_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x80020000210offcore_response.demand_rfo.l3_hit_f.hit_other_core_no_fwdcacheThis event is deprecated. Refer to new event OCR.DEMAND_RFO.L3_HIT_F.HIT_OTHER_CORE_NO_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x40020000210offcore_response.demand_rfo.l3_hit_f.no_snoop_neededcacheThis event is deprecated. Refer to new event OCR.DEMAND_RFO.L3_HIT_F.NO_SNOOP_NEEDEDevent=0xb7,period=100003,umask=1,offcore_rsp=0x10020000210offcore_response.demand_rfo.l3_hit_f.snoop_misscacheThis event is deprecated. Refer to new event OCR.DEMAND_RFO.L3_HIT_F.SNOOP_MISSevent=0xb7,period=100003,umask=1,offcore_rsp=0x20020000210offcore_response.demand_rfo.l3_hit_f.snoop_nonecacheThis event is deprecated. Refer to new event OCR.DEMAND_RFO.L3_HIT_F.SNOOP_NONEevent=0xb7,period=100003,umask=1,offcore_rsp=0x8020000210offcore_response.demand_rfo.l3_hit_m.any_snoopcacheThis event is deprecated. Refer to new event OCR.DEMAND_RFO.L3_HIT_M.ANY_SNOOPevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F8004000210offcore_response.demand_rfo.l3_hit_m.hitm_other_corecacheThis event is deprecated. Refer to new event OCR.DEMAND_RFO.L3_HIT_M.HITM_OTHER_COREevent=0xb7,period=100003,umask=1,offcore_rsp=0x100004000210offcore_response.demand_rfo.l3_hit_m.hit_other_core_fwdcacheThis event is deprecated. Refer to new event OCR.DEMAND_RFO.L3_HIT_M.HIT_OTHER_CORE_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x80004000210offcore_response.demand_rfo.l3_hit_m.hit_other_core_no_fwdcacheThis event is deprecated. 
Refer to new event OCR.DEMAND_RFO.L3_HIT_M.HIT_OTHER_CORE_NO_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x40004000210offcore_response.demand_rfo.l3_hit_m.no_snoop_neededcacheThis event is deprecated. Refer to new event OCR.DEMAND_RFO.L3_HIT_M.NO_SNOOP_NEEDEDevent=0xb7,period=100003,umask=1,offcore_rsp=0x10004000210offcore_response.demand_rfo.l3_hit_m.snoop_misscacheThis event is deprecated. Refer to new event OCR.DEMAND_RFO.L3_HIT_M.SNOOP_MISSevent=0xb7,period=100003,umask=1,offcore_rsp=0x20004000210offcore_response.demand_rfo.l3_hit_m.snoop_nonecacheThis event is deprecated. Refer to new event OCR.DEMAND_RFO.L3_HIT_M.SNOOP_NONEevent=0xb7,period=100003,umask=1,offcore_rsp=0x8004000210offcore_response.demand_rfo.l3_hit_s.any_snoopcacheThis event is deprecated. Refer to new event OCR.DEMAND_RFO.L3_HIT_S.ANY_SNOOPevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F8010000210offcore_response.demand_rfo.l3_hit_s.hitm_other_corecacheThis event is deprecated. Refer to new event OCR.DEMAND_RFO.L3_HIT_S.HITM_OTHER_COREevent=0xb7,period=100003,umask=1,offcore_rsp=0x100010000210offcore_response.demand_rfo.l3_hit_s.hit_other_core_fwdcacheThis event is deprecated. Refer to new event OCR.DEMAND_RFO.L3_HIT_S.HIT_OTHER_CORE_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x80010000210offcore_response.demand_rfo.l3_hit_s.hit_other_core_no_fwdcacheThis event is deprecated. Refer to new event OCR.DEMAND_RFO.L3_HIT_S.HIT_OTHER_CORE_NO_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x40010000210offcore_response.demand_rfo.l3_hit_s.no_snoop_neededcacheThis event is deprecated. Refer to new event OCR.DEMAND_RFO.L3_HIT_S.NO_SNOOP_NEEDEDevent=0xb7,period=100003,umask=1,offcore_rsp=0x10010000210offcore_response.demand_rfo.l3_hit_s.snoop_misscacheThis event is deprecated. Refer to new event OCR.DEMAND_RFO.L3_HIT_S.SNOOP_MISSevent=0xb7,period=100003,umask=1,offcore_rsp=0x20010000210offcore_response.demand_rfo.l3_hit_s.snoop_nonecacheThis event is deprecated. Refer to new event OCR.DEMAND_RFO.L3_HIT_S.SNOOP_NONEevent=0xb7,period=100003,umask=1,offcore_rsp=0x8010000210offcore_response.demand_rfo.pmm_hit_local_pmm.any_snoopcacheThis event is deprecated. Refer to new event OCR.DEMAND_RFO.PMM_HIT_LOCAL_PMM.ANY_SNOOPevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F8040000210offcore_response.demand_rfo.pmm_hit_local_pmm.snoop_nonecacheThis event is deprecated. Refer to new event OCR.DEMAND_RFO.PMM_HIT_LOCAL_PMM.SNOOP_NONEevent=0xb7,period=100003,umask=1,offcore_rsp=0x8040000210offcore_response.demand_rfo.pmm_hit_local_pmm.snoop_not_neededcacheThis event is deprecated. Refer to new event OCR.DEMAND_RFO.PMM_HIT_LOCAL_PMM.SNOOP_NOT_NEEDEDevent=0xb7,period=100003,umask=1,offcore_rsp=0x10040000210offcore_response.demand_rfo.supplier_none.any_snoopcacheThis event is deprecated. Refer to new event OCR.DEMAND_RFO.SUPPLIER_NONE.ANY_SNOOPevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F8002000210offcore_response.demand_rfo.supplier_none.hitm_other_corecacheThis event is deprecated. Refer to new event OCR.DEMAND_RFO.SUPPLIER_NONE.HITM_OTHER_COREevent=0xb7,period=100003,umask=1,offcore_rsp=0x100002000210offcore_response.demand_rfo.supplier_none.hit_other_core_fwdcacheThis event is deprecated. Refer to new event OCR.DEMAND_RFO.SUPPLIER_NONE.HIT_OTHER_CORE_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x80002000210offcore_response.demand_rfo.supplier_none.hit_other_core_no_fwdcacheThis event is deprecated. 
Refer to new event OCR.DEMAND_RFO.SUPPLIER_NONE.HIT_OTHER_CORE_NO_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x40002000210offcore_response.demand_rfo.supplier_none.no_snoop_neededcacheThis event is deprecated. Refer to new event OCR.DEMAND_RFO.SUPPLIER_NONE.NO_SNOOP_NEEDEDevent=0xb7,period=100003,umask=1,offcore_rsp=0x10002000210offcore_response.demand_rfo.supplier_none.snoop_misscacheThis event is deprecated. Refer to new event OCR.DEMAND_RFO.SUPPLIER_NONE.SNOOP_MISSevent=0xb7,period=100003,umask=1,offcore_rsp=0x20002000210offcore_response.demand_rfo.supplier_none.snoop_nonecacheThis event is deprecated. Refer to new event OCR.DEMAND_RFO.SUPPLIER_NONE.SNOOP_NONEevent=0xb7,period=100003,umask=1,offcore_rsp=0x8002000210offcore_response.other.any_responsecacheThis event is deprecated. Refer to new event OCR.OTHER.ANY_RESPONSEevent=0xb7,period=100003,umask=1,offcore_rsp=0x1800010offcore_response.other.l3_hit.any_snoopcacheThis event is deprecated. Refer to new event OCR.OTHER.L3_HIT.ANY_SNOOPevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F803C800010offcore_response.other.l3_hit.hitm_other_corecacheThis event is deprecated. Refer to new event OCR.OTHER.L3_HIT.HITM_OTHER_COREevent=0xb7,period=100003,umask=1,offcore_rsp=0x10003C800010offcore_response.other.l3_hit.hit_other_core_fwdcacheThis event is deprecated. Refer to new event OCR.OTHER.L3_HIT.HIT_OTHER_CORE_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x8003C800010offcore_response.other.l3_hit.hit_other_core_no_fwdcacheThis event is deprecated. Refer to new event OCR.OTHER.L3_HIT.HIT_OTHER_CORE_NO_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x4003C800010offcore_response.other.l3_hit.no_snoop_neededcacheThis event is deprecated. Refer to new event OCR.OTHER.L3_HIT.NO_SNOOP_NEEDEDevent=0xb7,period=100003,umask=1,offcore_rsp=0x1003C800010offcore_response.other.l3_hit.snoop_hit_with_fwdcacheThis event is deprecated. Refer to new event OCR.OTHER.L3_HIT.SNOOP_HIT_WITH_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x8007C800010offcore_response.other.l3_hit.snoop_misscacheThis event is deprecated. Refer to new event OCR.OTHER.L3_HIT.SNOOP_MISSevent=0xb7,period=100003,umask=1,offcore_rsp=0x2003C800010offcore_response.other.l3_hit.snoop_nonecacheThis event is deprecated. Refer to new event OCR.OTHER.L3_HIT.SNOOP_NONEevent=0xb7,period=100003,umask=1,offcore_rsp=0x803C800010offcore_response.other.l3_hit_e.any_snoopcacheThis event is deprecated. Refer to new event OCR.OTHER.L3_HIT_E.ANY_SNOOPevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F8008800010offcore_response.other.l3_hit_e.hitm_other_corecacheThis event is deprecated. Refer to new event OCR.OTHER.L3_HIT_E.HITM_OTHER_COREevent=0xb7,period=100003,umask=1,offcore_rsp=0x100008800010offcore_response.other.l3_hit_e.hit_other_core_fwdcacheThis event is deprecated. Refer to new event OCR.OTHER.L3_HIT_E.HIT_OTHER_CORE_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x80008800010offcore_response.other.l3_hit_e.hit_other_core_no_fwdcacheThis event is deprecated. Refer to new event OCR.OTHER.L3_HIT_E.HIT_OTHER_CORE_NO_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x40008800010offcore_response.other.l3_hit_e.no_snoop_neededcacheThis event is deprecated. Refer to new event OCR.OTHER.L3_HIT_E.NO_SNOOP_NEEDEDevent=0xb7,period=100003,umask=1,offcore_rsp=0x10008800010offcore_response.other.l3_hit_e.snoop_misscacheThis event is deprecated. 
Refer to new event OCR.OTHER.L3_HIT_E.SNOOP_MISSevent=0xb7,period=100003,umask=1,offcore_rsp=0x20008800010offcore_response.other.l3_hit_e.snoop_nonecacheThis event is deprecated. Refer to new event OCR.OTHER.L3_HIT_E.SNOOP_NONEevent=0xb7,period=100003,umask=1,offcore_rsp=0x8008800010offcore_response.other.l3_hit_f.any_snoopcacheThis event is deprecated. Refer to new event OCR.OTHER.L3_HIT_F.ANY_SNOOPevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F8020800010offcore_response.other.l3_hit_f.hitm_other_corecacheThis event is deprecated. Refer to new event OCR.OTHER.L3_HIT_F.HITM_OTHER_COREevent=0xb7,period=100003,umask=1,offcore_rsp=0x100020800010offcore_response.other.l3_hit_f.hit_other_core_fwdcacheThis event is deprecated. Refer to new event OCR.OTHER.L3_HIT_F.HIT_OTHER_CORE_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x80020800010offcore_response.other.l3_hit_f.hit_other_core_no_fwdcacheThis event is deprecated. Refer to new event OCR.OTHER.L3_HIT_F.HIT_OTHER_CORE_NO_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x40020800010offcore_response.other.l3_hit_f.no_snoop_neededcacheThis event is deprecated. Refer to new event OCR.OTHER.L3_HIT_F.NO_SNOOP_NEEDEDevent=0xb7,period=100003,umask=1,offcore_rsp=0x10020800010offcore_response.other.l3_hit_f.snoop_misscacheThis event is deprecated. Refer to new event OCR.OTHER.L3_HIT_F.SNOOP_MISSevent=0xb7,period=100003,umask=1,offcore_rsp=0x20020800010offcore_response.other.l3_hit_f.snoop_nonecacheThis event is deprecated. Refer to new event OCR.OTHER.L3_HIT_F.SNOOP_NONEevent=0xb7,period=100003,umask=1,offcore_rsp=0x8020800010offcore_response.other.l3_hit_m.any_snoopcacheThis event is deprecated. Refer to new event OCR.OTHER.L3_HIT_M.ANY_SNOOPevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F8004800010offcore_response.other.l3_hit_m.hitm_other_corecacheThis event is deprecated. Refer to new event OCR.OTHER.L3_HIT_M.HITM_OTHER_COREevent=0xb7,period=100003,umask=1,offcore_rsp=0x100004800010offcore_response.other.l3_hit_m.hit_other_core_fwdcacheThis event is deprecated. Refer to new event OCR.OTHER.L3_HIT_M.HIT_OTHER_CORE_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x80004800010offcore_response.other.l3_hit_m.hit_other_core_no_fwdcacheThis event is deprecated. Refer to new event OCR.OTHER.L3_HIT_M.HIT_OTHER_CORE_NO_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x40004800010offcore_response.other.l3_hit_m.no_snoop_neededcacheThis event is deprecated. Refer to new event OCR.OTHER.L3_HIT_M.NO_SNOOP_NEEDEDevent=0xb7,period=100003,umask=1,offcore_rsp=0x10004800010offcore_response.other.l3_hit_m.snoop_misscacheThis event is deprecated. Refer to new event OCR.OTHER.L3_HIT_M.SNOOP_MISSevent=0xb7,period=100003,umask=1,offcore_rsp=0x20004800010offcore_response.other.l3_hit_m.snoop_nonecacheThis event is deprecated. Refer to new event OCR.OTHER.L3_HIT_M.SNOOP_NONEevent=0xb7,period=100003,umask=1,offcore_rsp=0x8004800010offcore_response.other.l3_hit_s.any_snoopcacheThis event is deprecated. Refer to new event OCR.OTHER.L3_HIT_S.ANY_SNOOPevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F8010800010offcore_response.other.l3_hit_s.hitm_other_corecacheThis event is deprecated. Refer to new event OCR.OTHER.L3_HIT_S.HITM_OTHER_COREevent=0xb7,period=100003,umask=1,offcore_rsp=0x100010800010offcore_response.other.l3_hit_s.hit_other_core_fwdcacheThis event is deprecated. Refer to new event OCR.OTHER.L3_HIT_S.HIT_OTHER_CORE_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x80010800010offcore_response.other.l3_hit_s.hit_other_core_no_fwdcacheThis event is deprecated. 
Refer to new event OCR.OTHER.L3_HIT_S.HIT_OTHER_CORE_NO_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x40010800010offcore_response.other.l3_hit_s.no_snoop_neededcacheThis event is deprecated. Refer to new event OCR.OTHER.L3_HIT_S.NO_SNOOP_NEEDEDevent=0xb7,period=100003,umask=1,offcore_rsp=0x10010800010offcore_response.other.l3_hit_s.snoop_misscacheThis event is deprecated. Refer to new event OCR.OTHER.L3_HIT_S.SNOOP_MISSevent=0xb7,period=100003,umask=1,offcore_rsp=0x20010800010offcore_response.other.l3_hit_s.snoop_nonecacheThis event is deprecated. Refer to new event OCR.OTHER.L3_HIT_S.SNOOP_NONEevent=0xb7,period=100003,umask=1,offcore_rsp=0x8010800010offcore_response.other.pmm_hit_local_pmm.any_snoopcacheThis event is deprecated. Refer to new event OCR.OTHER.PMM_HIT_LOCAL_PMM.ANY_SNOOPevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F8040800010offcore_response.other.pmm_hit_local_pmm.snoop_nonecacheThis event is deprecated. Refer to new event OCR.OTHER.PMM_HIT_LOCAL_PMM.SNOOP_NONEevent=0xb7,period=100003,umask=1,offcore_rsp=0x8040800010offcore_response.other.pmm_hit_local_pmm.snoop_not_neededcacheThis event is deprecated. Refer to new event OCR.OTHER.PMM_HIT_LOCAL_PMM.SNOOP_NOT_NEEDEDevent=0xb7,period=100003,umask=1,offcore_rsp=0x10040800010offcore_response.other.supplier_none.any_snoopcacheThis event is deprecated. Refer to new event OCR.OTHER.SUPPLIER_NONE.ANY_SNOOPevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F8002800010offcore_response.other.supplier_none.hitm_other_corecacheThis event is deprecated. Refer to new event OCR.OTHER.SUPPLIER_NONE.HITM_OTHER_COREevent=0xb7,period=100003,umask=1,offcore_rsp=0x100002800010offcore_response.other.supplier_none.hit_other_core_fwdcacheThis event is deprecated. Refer to new event OCR.OTHER.SUPPLIER_NONE.HIT_OTHER_CORE_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x80002800010offcore_response.other.supplier_none.hit_other_core_no_fwdcacheThis event is deprecated. Refer to new event OCR.OTHER.SUPPLIER_NONE.HIT_OTHER_CORE_NO_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x40002800010offcore_response.other.supplier_none.no_snoop_neededcacheThis event is deprecated. Refer to new event OCR.OTHER.SUPPLIER_NONE.NO_SNOOP_NEEDEDevent=0xb7,period=100003,umask=1,offcore_rsp=0x10002800010offcore_response.other.supplier_none.snoop_misscacheThis event is deprecated. Refer to new event OCR.OTHER.SUPPLIER_NONE.SNOOP_MISSevent=0xb7,period=100003,umask=1,offcore_rsp=0x20002800010offcore_response.other.supplier_none.snoop_nonecacheThis event is deprecated. Refer to new event OCR.OTHER.SUPPLIER_NONE.SNOOP_NONEevent=0xb7,period=100003,umask=1,offcore_rsp=0x8002800010offcore_response.pf_l1d_and_sw.any_responsecacheThis event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.ANY_RESPONSEevent=0xb7,period=100003,umask=1,offcore_rsp=0x1040010offcore_response.pf_l1d_and_sw.l3_hit.any_snoopcacheThis event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.L3_HIT.ANY_SNOOPevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F803C040010offcore_response.pf_l1d_and_sw.l3_hit.hitm_other_corecacheThis event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.L3_HIT.HITM_OTHER_COREevent=0xb7,period=100003,umask=1,offcore_rsp=0x10003C040010offcore_response.pf_l1d_and_sw.l3_hit.hit_other_core_fwdcacheThis event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.L3_HIT.HIT_OTHER_CORE_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x8003C040010offcore_response.pf_l1d_and_sw.l3_hit.hit_other_core_no_fwdcacheThis event is deprecated. 
Refer to new event OCR.PF_L1D_AND_SW.L3_HIT.HIT_OTHER_CORE_NO_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x4003C040010offcore_response.pf_l1d_and_sw.l3_hit.no_snoop_neededcacheThis event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.L3_HIT.NO_SNOOP_NEEDEDevent=0xb7,period=100003,umask=1,offcore_rsp=0x1003C040010offcore_response.pf_l1d_and_sw.l3_hit.snoop_hit_with_fwdcacheThis event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.L3_HIT.SNOOP_HIT_WITH_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x8007C040010offcore_response.pf_l1d_and_sw.l3_hit.snoop_misscacheThis event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.L3_HIT.SNOOP_MISSevent=0xb7,period=100003,umask=1,offcore_rsp=0x2003C040010offcore_response.pf_l1d_and_sw.l3_hit.snoop_nonecacheThis event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.L3_HIT.SNOOP_NONEevent=0xb7,period=100003,umask=1,offcore_rsp=0x803C040010offcore_response.pf_l1d_and_sw.l3_hit_e.any_snoopcacheThis event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.L3_HIT_E.ANY_SNOOPevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F8008040010offcore_response.pf_l1d_and_sw.l3_hit_e.hitm_other_corecacheThis event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.L3_HIT_E.HITM_OTHER_COREevent=0xb7,period=100003,umask=1,offcore_rsp=0x100008040010offcore_response.pf_l1d_and_sw.l3_hit_e.hit_other_core_fwdcacheThis event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.L3_HIT_E.HIT_OTHER_CORE_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x80008040010offcore_response.pf_l1d_and_sw.l3_hit_e.hit_other_core_no_fwdcacheThis event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.L3_HIT_E.HIT_OTHER_CORE_NO_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x40008040010offcore_response.pf_l1d_and_sw.l3_hit_e.no_snoop_neededcacheThis event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.L3_HIT_E.NO_SNOOP_NEEDEDevent=0xb7,period=100003,umask=1,offcore_rsp=0x10008040010offcore_response.pf_l1d_and_sw.l3_hit_e.snoop_misscacheThis event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.L3_HIT_E.SNOOP_MISSevent=0xb7,period=100003,umask=1,offcore_rsp=0x20008040010offcore_response.pf_l1d_and_sw.l3_hit_e.snoop_nonecacheThis event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.L3_HIT_E.SNOOP_NONEevent=0xb7,period=100003,umask=1,offcore_rsp=0x8008040010offcore_response.pf_l1d_and_sw.l3_hit_f.any_snoopcacheThis event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.L3_HIT_F.ANY_SNOOPevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F8020040010offcore_response.pf_l1d_and_sw.l3_hit_f.hitm_other_corecacheThis event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.L3_HIT_F.HITM_OTHER_COREevent=0xb7,period=100003,umask=1,offcore_rsp=0x100020040010offcore_response.pf_l1d_and_sw.l3_hit_f.hit_other_core_fwdcacheThis event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.L3_HIT_F.HIT_OTHER_CORE_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x80020040010offcore_response.pf_l1d_and_sw.l3_hit_f.hit_other_core_no_fwdcacheThis event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.L3_HIT_F.HIT_OTHER_CORE_NO_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x40020040010offcore_response.pf_l1d_and_sw.l3_hit_f.no_snoop_neededcacheThis event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.L3_HIT_F.NO_SNOOP_NEEDEDevent=0xb7,period=100003,umask=1,offcore_rsp=0x10020040010offcore_response.pf_l1d_and_sw.l3_hit_f.snoop_misscacheThis event is deprecated. 
Refer to new event OCR.PF_L1D_AND_SW.L3_HIT_F.SNOOP_MISSevent=0xb7,period=100003,umask=1,offcore_rsp=0x20020040010offcore_response.pf_l1d_and_sw.l3_hit_f.snoop_nonecacheThis event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.L3_HIT_F.SNOOP_NONEevent=0xb7,period=100003,umask=1,offcore_rsp=0x8020040010offcore_response.pf_l1d_and_sw.l3_hit_m.any_snoopcacheThis event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.L3_HIT_M.ANY_SNOOPevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F8004040010offcore_response.pf_l1d_and_sw.l3_hit_m.hitm_other_corecacheThis event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.L3_HIT_M.HITM_OTHER_COREevent=0xb7,period=100003,umask=1,offcore_rsp=0x100004040010offcore_response.pf_l1d_and_sw.l3_hit_m.hit_other_core_fwdcacheThis event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.L3_HIT_M.HIT_OTHER_CORE_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x80004040010offcore_response.pf_l1d_and_sw.l3_hit_m.hit_other_core_no_fwdcacheThis event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.L3_HIT_M.HIT_OTHER_CORE_NO_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x40004040010offcore_response.pf_l1d_and_sw.l3_hit_m.no_snoop_neededcacheThis event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.L3_HIT_M.NO_SNOOP_NEEDEDevent=0xb7,period=100003,umask=1,offcore_rsp=0x10004040010offcore_response.pf_l1d_and_sw.l3_hit_m.snoop_misscacheThis event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.L3_HIT_M.SNOOP_MISSevent=0xb7,period=100003,umask=1,offcore_rsp=0x20004040010offcore_response.pf_l1d_and_sw.l3_hit_m.snoop_nonecacheThis event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.L3_HIT_M.SNOOP_NONEevent=0xb7,period=100003,umask=1,offcore_rsp=0x8004040010offcore_response.pf_l1d_and_sw.l3_hit_s.any_snoopcacheThis event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.L3_HIT_S.ANY_SNOOPevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F8010040010offcore_response.pf_l1d_and_sw.l3_hit_s.hitm_other_corecacheThis event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.L3_HIT_S.HITM_OTHER_COREevent=0xb7,period=100003,umask=1,offcore_rsp=0x100010040010offcore_response.pf_l1d_and_sw.l3_hit_s.hit_other_core_fwdcacheThis event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.L3_HIT_S.HIT_OTHER_CORE_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x80010040010offcore_response.pf_l1d_and_sw.l3_hit_s.hit_other_core_no_fwdcacheThis event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.L3_HIT_S.HIT_OTHER_CORE_NO_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x40010040010offcore_response.pf_l1d_and_sw.l3_hit_s.no_snoop_neededcacheThis event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.L3_HIT_S.NO_SNOOP_NEEDEDevent=0xb7,period=100003,umask=1,offcore_rsp=0x10010040010offcore_response.pf_l1d_and_sw.l3_hit_s.snoop_misscacheThis event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.L3_HIT_S.SNOOP_MISSevent=0xb7,period=100003,umask=1,offcore_rsp=0x20010040010offcore_response.pf_l1d_and_sw.l3_hit_s.snoop_nonecacheThis event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.L3_HIT_S.SNOOP_NONEevent=0xb7,period=100003,umask=1,offcore_rsp=0x8010040010offcore_response.pf_l1d_and_sw.pmm_hit_local_pmm.any_snoopcacheThis event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.PMM_HIT_LOCAL_PMM.ANY_SNOOPevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F8040040010offcore_response.pf_l1d_and_sw.pmm_hit_local_pmm.snoop_nonecacheThis event is deprecated. 
Refer to new event OCR.PF_L1D_AND_SW.PMM_HIT_LOCAL_PMM.SNOOP_NONEevent=0xb7,period=100003,umask=1,offcore_rsp=0x8040040010offcore_response.pf_l1d_and_sw.pmm_hit_local_pmm.snoop_not_neededcacheThis event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.PMM_HIT_LOCAL_PMM.SNOOP_NOT_NEEDEDevent=0xb7,period=100003,umask=1,offcore_rsp=0x10040040010offcore_response.pf_l1d_and_sw.supplier_none.any_snoopcacheThis event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.SUPPLIER_NONE.ANY_SNOOPevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F8002040010offcore_response.pf_l1d_and_sw.supplier_none.hitm_other_corecacheThis event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.SUPPLIER_NONE.HITM_OTHER_COREevent=0xb7,period=100003,umask=1,offcore_rsp=0x100002040010offcore_response.pf_l1d_and_sw.supplier_none.hit_other_core_fwdcacheThis event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.SUPPLIER_NONE.HIT_OTHER_CORE_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x80002040010offcore_response.pf_l1d_and_sw.supplier_none.hit_other_core_no_fwdcacheThis event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.SUPPLIER_NONE.HIT_OTHER_CORE_NO_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x40002040010offcore_response.pf_l1d_and_sw.supplier_none.no_snoop_neededcacheThis event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.SUPPLIER_NONE.NO_SNOOP_NEEDEDevent=0xb7,period=100003,umask=1,offcore_rsp=0x10002040010offcore_response.pf_l1d_and_sw.supplier_none.snoop_misscacheThis event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.SUPPLIER_NONE.SNOOP_MISSevent=0xb7,period=100003,umask=1,offcore_rsp=0x20002040010offcore_response.pf_l1d_and_sw.supplier_none.snoop_nonecacheThis event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.SUPPLIER_NONE.SNOOP_NONEevent=0xb7,period=100003,umask=1,offcore_rsp=0x8002040010offcore_response.pf_l2_data_rd.any_responsecacheThis event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.ANY_RESPONSEevent=0xb7,period=100003,umask=1,offcore_rsp=0x1001010offcore_response.pf_l2_data_rd.l3_hit.any_snoopcacheThis event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.L3_HIT.ANY_SNOOPevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F803C001010offcore_response.pf_l2_data_rd.l3_hit.hitm_other_corecacheThis event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.L3_HIT.HITM_OTHER_COREevent=0xb7,period=100003,umask=1,offcore_rsp=0x10003C001010offcore_response.pf_l2_data_rd.l3_hit.hit_other_core_fwdcacheThis event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.L3_HIT.HIT_OTHER_CORE_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x8003C001010offcore_response.pf_l2_data_rd.l3_hit.hit_other_core_no_fwdcacheThis event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.L3_HIT.HIT_OTHER_CORE_NO_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x4003C001010offcore_response.pf_l2_data_rd.l3_hit.no_snoop_neededcacheThis event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.L3_HIT.NO_SNOOP_NEEDEDevent=0xb7,period=100003,umask=1,offcore_rsp=0x1003C001010offcore_response.pf_l2_data_rd.l3_hit.snoop_hit_with_fwdcacheThis event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.L3_HIT.SNOOP_HIT_WITH_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x8007C001010offcore_response.pf_l2_data_rd.l3_hit.snoop_misscacheThis event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.L3_HIT.SNOOP_MISSevent=0xb7,period=100003,umask=1,offcore_rsp=0x2003C001010offcore_response.pf_l2_data_rd.l3_hit.snoop_nonecacheThis event is deprecated. 
Refer to new event OCR.PF_L2_DATA_RD.L3_HIT.SNOOP_NONEevent=0xb7,period=100003,umask=1,offcore_rsp=0x803C001010offcore_response.pf_l2_data_rd.l3_hit_e.any_snoopcacheThis event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.L3_HIT_E.ANY_SNOOPevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F8008001010offcore_response.pf_l2_data_rd.l3_hit_e.hitm_other_corecacheThis event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.L3_HIT_E.HITM_OTHER_COREevent=0xb7,period=100003,umask=1,offcore_rsp=0x100008001010offcore_response.pf_l2_data_rd.l3_hit_e.hit_other_core_fwdcacheThis event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.L3_HIT_E.HIT_OTHER_CORE_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x80008001010offcore_response.pf_l2_data_rd.l3_hit_e.hit_other_core_no_fwdcacheThis event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.L3_HIT_E.HIT_OTHER_CORE_NO_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x40008001010offcore_response.pf_l2_data_rd.l3_hit_e.no_snoop_neededcacheThis event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.L3_HIT_E.NO_SNOOP_NEEDEDevent=0xb7,period=100003,umask=1,offcore_rsp=0x10008001010offcore_response.pf_l2_data_rd.l3_hit_e.snoop_misscacheThis event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.L3_HIT_E.SNOOP_MISSevent=0xb7,period=100003,umask=1,offcore_rsp=0x20008001010offcore_response.pf_l2_data_rd.l3_hit_e.snoop_nonecacheThis event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.L3_HIT_E.SNOOP_NONEevent=0xb7,period=100003,umask=1,offcore_rsp=0x8008001010offcore_response.pf_l2_data_rd.l3_hit_f.any_snoopcacheThis event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.L3_HIT_F.ANY_SNOOPevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F8020001010offcore_response.pf_l2_data_rd.l3_hit_f.hitm_other_corecacheThis event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.L3_HIT_F.HITM_OTHER_COREevent=0xb7,period=100003,umask=1,offcore_rsp=0x100020001010offcore_response.pf_l2_data_rd.l3_hit_f.hit_other_core_fwdcacheThis event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.L3_HIT_F.HIT_OTHER_CORE_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x80020001010offcore_response.pf_l2_data_rd.l3_hit_f.hit_other_core_no_fwdcacheThis event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.L3_HIT_F.HIT_OTHER_CORE_NO_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x40020001010offcore_response.pf_l2_data_rd.l3_hit_f.no_snoop_neededcacheThis event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.L3_HIT_F.NO_SNOOP_NEEDEDevent=0xb7,period=100003,umask=1,offcore_rsp=0x10020001010offcore_response.pf_l2_data_rd.l3_hit_f.snoop_misscacheThis event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.L3_HIT_F.SNOOP_MISSevent=0xb7,period=100003,umask=1,offcore_rsp=0x20020001010offcore_response.pf_l2_data_rd.l3_hit_f.snoop_nonecacheThis event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.L3_HIT_F.SNOOP_NONEevent=0xb7,period=100003,umask=1,offcore_rsp=0x8020001010offcore_response.pf_l2_data_rd.l3_hit_m.any_snoopcacheThis event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.L3_HIT_M.ANY_SNOOPevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F8004001010offcore_response.pf_l2_data_rd.l3_hit_m.hitm_other_corecacheThis event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.L3_HIT_M.HITM_OTHER_COREevent=0xb7,period=100003,umask=1,offcore_rsp=0x100004001010offcore_response.pf_l2_data_rd.l3_hit_m.hit_other_core_fwdcacheThis event is deprecated. 
Refer to new event OCR.PF_L2_DATA_RD.L3_HIT_M.HIT_OTHER_CORE_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x80004001010offcore_response.pf_l2_data_rd.l3_hit_m.hit_other_core_no_fwdcacheThis event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.L3_HIT_M.HIT_OTHER_CORE_NO_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x40004001010offcore_response.pf_l2_data_rd.l3_hit_m.no_snoop_neededcacheThis event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.L3_HIT_M.NO_SNOOP_NEEDEDevent=0xb7,period=100003,umask=1,offcore_rsp=0x10004001010offcore_response.pf_l2_data_rd.l3_hit_m.snoop_misscacheThis event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.L3_HIT_M.SNOOP_MISSevent=0xb7,period=100003,umask=1,offcore_rsp=0x20004001010offcore_response.pf_l2_data_rd.l3_hit_m.snoop_nonecacheThis event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.L3_HIT_M.SNOOP_NONEevent=0xb7,period=100003,umask=1,offcore_rsp=0x8004001010offcore_response.pf_l2_data_rd.l3_hit_s.any_snoopcacheThis event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.L3_HIT_S.ANY_SNOOPevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F8010001010offcore_response.pf_l2_data_rd.l3_hit_s.hitm_other_corecacheThis event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.L3_HIT_S.HITM_OTHER_COREevent=0xb7,period=100003,umask=1,offcore_rsp=0x100010001010offcore_response.pf_l2_data_rd.l3_hit_s.hit_other_core_fwdcacheThis event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.L3_HIT_S.HIT_OTHER_CORE_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x80010001010offcore_response.pf_l2_data_rd.l3_hit_s.hit_other_core_no_fwdcacheThis event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.L3_HIT_S.HIT_OTHER_CORE_NO_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x40010001010offcore_response.pf_l2_data_rd.l3_hit_s.no_snoop_neededcacheThis event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.L3_HIT_S.NO_SNOOP_NEEDEDevent=0xb7,period=100003,umask=1,offcore_rsp=0x10010001010offcore_response.pf_l2_data_rd.l3_hit_s.snoop_misscacheThis event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.L3_HIT_S.SNOOP_MISSevent=0xb7,period=100003,umask=1,offcore_rsp=0x20010001010offcore_response.pf_l2_data_rd.l3_hit_s.snoop_nonecacheThis event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.L3_HIT_S.SNOOP_NONEevent=0xb7,period=100003,umask=1,offcore_rsp=0x8010001010offcore_response.pf_l2_data_rd.pmm_hit_local_pmm.any_snoopcacheThis event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.PMM_HIT_LOCAL_PMM.ANY_SNOOPevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F8040001010offcore_response.pf_l2_data_rd.pmm_hit_local_pmm.snoop_nonecacheThis event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.PMM_HIT_LOCAL_PMM.SNOOP_NONEevent=0xb7,period=100003,umask=1,offcore_rsp=0x8040001010offcore_response.pf_l2_data_rd.pmm_hit_local_pmm.snoop_not_neededcacheThis event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.PMM_HIT_LOCAL_PMM.SNOOP_NOT_NEEDEDevent=0xb7,period=100003,umask=1,offcore_rsp=0x10040001010offcore_response.pf_l2_data_rd.supplier_none.any_snoopcacheThis event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.SUPPLIER_NONE.ANY_SNOOPevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F8002001010offcore_response.pf_l2_data_rd.supplier_none.hitm_other_corecacheThis event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.SUPPLIER_NONE.HITM_OTHER_COREevent=0xb7,period=100003,umask=1,offcore_rsp=0x100002001010offcore_response.pf_l2_data_rd.supplier_none.hit_other_core_fwdcacheThis event is deprecated. 
Refer to new event OCR.PF_L2_DATA_RD.SUPPLIER_NONE.HIT_OTHER_CORE_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x80002001010offcore_response.pf_l2_data_rd.supplier_none.hit_other_core_no_fwdcacheThis event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.SUPPLIER_NONE.HIT_OTHER_CORE_NO_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x40002001010offcore_response.pf_l2_data_rd.supplier_none.no_snoop_neededcacheThis event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.SUPPLIER_NONE.NO_SNOOP_NEEDEDevent=0xb7,period=100003,umask=1,offcore_rsp=0x10002001010offcore_response.pf_l2_data_rd.supplier_none.snoop_misscacheThis event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.SUPPLIER_NONE.SNOOP_MISSevent=0xb7,period=100003,umask=1,offcore_rsp=0x20002001010offcore_response.pf_l2_data_rd.supplier_none.snoop_nonecacheThis event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.SUPPLIER_NONE.SNOOP_NONEevent=0xb7,period=100003,umask=1,offcore_rsp=0x8002001010offcore_response.pf_l2_rfo.any_responsecacheThis event is deprecated. Refer to new event OCR.PF_L2_RFO.ANY_RESPONSEevent=0xb7,period=100003,umask=1,offcore_rsp=0x1002010offcore_response.pf_l2_rfo.l3_hit.any_snoopcacheThis event is deprecated. Refer to new event OCR.PF_L2_RFO.L3_HIT.ANY_SNOOPevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F803C002010offcore_response.pf_l2_rfo.l3_hit.hitm_other_corecacheThis event is deprecated. Refer to new event OCR.PF_L2_RFO.L3_HIT.HITM_OTHER_COREevent=0xb7,period=100003,umask=1,offcore_rsp=0x10003C002010offcore_response.pf_l2_rfo.l3_hit.hit_other_core_fwdcacheThis event is deprecated. Refer to new event OCR.PF_L2_RFO.L3_HIT.HIT_OTHER_CORE_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x8003C002010offcore_response.pf_l2_rfo.l3_hit.hit_other_core_no_fwdcacheThis event is deprecated. Refer to new event OCR.PF_L2_RFO.L3_HIT.HIT_OTHER_CORE_NO_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x4003C002010offcore_response.pf_l2_rfo.l3_hit.no_snoop_neededcacheThis event is deprecated. Refer to new event OCR.PF_L2_RFO.L3_HIT.NO_SNOOP_NEEDEDevent=0xb7,period=100003,umask=1,offcore_rsp=0x1003C002010offcore_response.pf_l2_rfo.l3_hit.snoop_hit_with_fwdcacheThis event is deprecated. Refer to new event OCR.PF_L2_RFO.L3_HIT.SNOOP_HIT_WITH_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x8007C002010offcore_response.pf_l2_rfo.l3_hit.snoop_misscacheThis event is deprecated. Refer to new event OCR.PF_L2_RFO.L3_HIT.SNOOP_MISSevent=0xb7,period=100003,umask=1,offcore_rsp=0x2003C002010offcore_response.pf_l2_rfo.l3_hit.snoop_nonecacheThis event is deprecated. Refer to new event OCR.PF_L2_RFO.L3_HIT.SNOOP_NONEevent=0xb7,period=100003,umask=1,offcore_rsp=0x803C002010offcore_response.pf_l2_rfo.l3_hit_e.any_snoopcacheThis event is deprecated. Refer to new event OCR.PF_L2_RFO.L3_HIT_E.ANY_SNOOPevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F8008002010offcore_response.pf_l2_rfo.l3_hit_e.hitm_other_corecacheThis event is deprecated. Refer to new event OCR.PF_L2_RFO.L3_HIT_E.HITM_OTHER_COREevent=0xb7,period=100003,umask=1,offcore_rsp=0x100008002010offcore_response.pf_l2_rfo.l3_hit_e.hit_other_core_fwdcacheThis event is deprecated. Refer to new event OCR.PF_L2_RFO.L3_HIT_E.HIT_OTHER_CORE_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x80008002010offcore_response.pf_l2_rfo.l3_hit_e.hit_other_core_no_fwdcacheThis event is deprecated. 
Refer to new event OCR.PF_L2_RFO.L3_HIT_E.HIT_OTHER_CORE_NO_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x40008002010offcore_response.pf_l2_rfo.l3_hit_e.no_snoop_neededcacheThis event is deprecated. Refer to new event OCR.PF_L2_RFO.L3_HIT_E.NO_SNOOP_NEEDEDevent=0xb7,period=100003,umask=1,offcore_rsp=0x10008002010offcore_response.pf_l2_rfo.l3_hit_e.snoop_misscacheThis event is deprecated. Refer to new event OCR.PF_L2_RFO.L3_HIT_E.SNOOP_MISSevent=0xb7,period=100003,umask=1,offcore_rsp=0x20008002010offcore_response.pf_l2_rfo.l3_hit_e.snoop_nonecacheThis event is deprecated. Refer to new event OCR.PF_L2_RFO.L3_HIT_E.SNOOP_NONEevent=0xb7,period=100003,umask=1,offcore_rsp=0x8008002010offcore_response.pf_l2_rfo.l3_hit_f.any_snoopcacheThis event is deprecated. Refer to new event OCR.PF_L2_RFO.L3_HIT_F.ANY_SNOOPevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F8020002010offcore_response.pf_l2_rfo.l3_hit_f.hitm_other_corecacheThis event is deprecated. Refer to new event OCR.PF_L2_RFO.L3_HIT_F.HITM_OTHER_COREevent=0xb7,period=100003,umask=1,offcore_rsp=0x100020002010offcore_response.pf_l2_rfo.l3_hit_f.hit_other_core_fwdcacheThis event is deprecated. Refer to new event OCR.PF_L2_RFO.L3_HIT_F.HIT_OTHER_CORE_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x80020002010offcore_response.pf_l2_rfo.l3_hit_f.hit_other_core_no_fwdcacheThis event is deprecated. Refer to new event OCR.PF_L2_RFO.L3_HIT_F.HIT_OTHER_CORE_NO_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x40020002010offcore_response.pf_l2_rfo.l3_hit_f.no_snoop_neededcacheThis event is deprecated. Refer to new event OCR.PF_L2_RFO.L3_HIT_F.NO_SNOOP_NEEDEDevent=0xb7,period=100003,umask=1,offcore_rsp=0x10020002010offcore_response.pf_l2_rfo.l3_hit_f.snoop_misscacheThis event is deprecated. Refer to new event OCR.PF_L2_RFO.L3_HIT_F.SNOOP_MISSevent=0xb7,period=100003,umask=1,offcore_rsp=0x20020002010offcore_response.pf_l2_rfo.l3_hit_f.snoop_nonecacheThis event is deprecated. Refer to new event OCR.PF_L2_RFO.L3_HIT_F.SNOOP_NONEevent=0xb7,period=100003,umask=1,offcore_rsp=0x8020002010offcore_response.pf_l2_rfo.l3_hit_m.any_snoopcacheThis event is deprecated. Refer to new event OCR.PF_L2_RFO.L3_HIT_M.ANY_SNOOPevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F8004002010offcore_response.pf_l2_rfo.l3_hit_m.hitm_other_corecacheThis event is deprecated. Refer to new event OCR.PF_L2_RFO.L3_HIT_M.HITM_OTHER_COREevent=0xb7,period=100003,umask=1,offcore_rsp=0x100004002010offcore_response.pf_l2_rfo.l3_hit_m.hit_other_core_fwdcacheThis event is deprecated. Refer to new event OCR.PF_L2_RFO.L3_HIT_M.HIT_OTHER_CORE_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x80004002010offcore_response.pf_l2_rfo.l3_hit_m.hit_other_core_no_fwdcacheThis event is deprecated. Refer to new event OCR.PF_L2_RFO.L3_HIT_M.HIT_OTHER_CORE_NO_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x40004002010offcore_response.pf_l2_rfo.l3_hit_m.no_snoop_neededcacheThis event is deprecated. Refer to new event OCR.PF_L2_RFO.L3_HIT_M.NO_SNOOP_NEEDEDevent=0xb7,period=100003,umask=1,offcore_rsp=0x10004002010offcore_response.pf_l2_rfo.l3_hit_m.snoop_misscacheThis event is deprecated. Refer to new event OCR.PF_L2_RFO.L3_HIT_M.SNOOP_MISSevent=0xb7,period=100003,umask=1,offcore_rsp=0x20004002010offcore_response.pf_l2_rfo.l3_hit_m.snoop_nonecacheThis event is deprecated. Refer to new event OCR.PF_L2_RFO.L3_HIT_M.SNOOP_NONEevent=0xb7,period=100003,umask=1,offcore_rsp=0x8004002010offcore_response.pf_l2_rfo.l3_hit_s.any_snoopcacheThis event is deprecated. 
Refer to new event OCR.PF_L2_RFO.L3_HIT_S.ANY_SNOOPevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F8010002010offcore_response.pf_l2_rfo.l3_hit_s.hitm_other_corecacheThis event is deprecated. Refer to new event OCR.PF_L2_RFO.L3_HIT_S.HITM_OTHER_COREevent=0xb7,period=100003,umask=1,offcore_rsp=0x100010002010offcore_response.pf_l2_rfo.l3_hit_s.hit_other_core_fwdcacheThis event is deprecated. Refer to new event OCR.PF_L2_RFO.L3_HIT_S.HIT_OTHER_CORE_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x80010002010offcore_response.pf_l2_rfo.l3_hit_s.hit_other_core_no_fwdcacheThis event is deprecated. Refer to new event OCR.PF_L2_RFO.L3_HIT_S.HIT_OTHER_CORE_NO_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x40010002010offcore_response.pf_l2_rfo.l3_hit_s.no_snoop_neededcacheThis event is deprecated. Refer to new event OCR.PF_L2_RFO.L3_HIT_S.NO_SNOOP_NEEDEDevent=0xb7,period=100003,umask=1,offcore_rsp=0x10010002010offcore_response.pf_l2_rfo.l3_hit_s.snoop_misscacheThis event is deprecated. Refer to new event OCR.PF_L2_RFO.L3_HIT_S.SNOOP_MISSevent=0xb7,period=100003,umask=1,offcore_rsp=0x20010002010offcore_response.pf_l2_rfo.l3_hit_s.snoop_nonecacheThis event is deprecated. Refer to new event OCR.PF_L2_RFO.L3_HIT_S.SNOOP_NONEevent=0xb7,period=100003,umask=1,offcore_rsp=0x8010002010offcore_response.pf_l2_rfo.pmm_hit_local_pmm.any_snoopcacheThis event is deprecated. Refer to new event OCR.PF_L2_RFO.PMM_HIT_LOCAL_PMM.ANY_SNOOPevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F8040002010offcore_response.pf_l2_rfo.pmm_hit_local_pmm.snoop_nonecacheThis event is deprecated. Refer to new event OCR.PF_L2_RFO.PMM_HIT_LOCAL_PMM.SNOOP_NONEevent=0xb7,period=100003,umask=1,offcore_rsp=0x8040002010offcore_response.pf_l2_rfo.pmm_hit_local_pmm.snoop_not_neededcacheThis event is deprecated. Refer to new event OCR.PF_L2_RFO.PMM_HIT_LOCAL_PMM.SNOOP_NOT_NEEDEDevent=0xb7,period=100003,umask=1,offcore_rsp=0x10040002010offcore_response.pf_l2_rfo.supplier_none.any_snoopcacheThis event is deprecated. Refer to new event OCR.PF_L2_RFO.SUPPLIER_NONE.ANY_SNOOPevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F8002002010offcore_response.pf_l2_rfo.supplier_none.hitm_other_corecacheThis event is deprecated. Refer to new event OCR.PF_L2_RFO.SUPPLIER_NONE.HITM_OTHER_COREevent=0xb7,period=100003,umask=1,offcore_rsp=0x100002002010offcore_response.pf_l2_rfo.supplier_none.hit_other_core_fwdcacheThis event is deprecated. Refer to new event OCR.PF_L2_RFO.SUPPLIER_NONE.HIT_OTHER_CORE_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x80002002010offcore_response.pf_l2_rfo.supplier_none.hit_other_core_no_fwdcacheThis event is deprecated. Refer to new event OCR.PF_L2_RFO.SUPPLIER_NONE.HIT_OTHER_CORE_NO_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x40002002010offcore_response.pf_l2_rfo.supplier_none.no_snoop_neededcacheThis event is deprecated. Refer to new event OCR.PF_L2_RFO.SUPPLIER_NONE.NO_SNOOP_NEEDEDevent=0xb7,period=100003,umask=1,offcore_rsp=0x10002002010offcore_response.pf_l2_rfo.supplier_none.snoop_misscacheThis event is deprecated. Refer to new event OCR.PF_L2_RFO.SUPPLIER_NONE.SNOOP_MISSevent=0xb7,period=100003,umask=1,offcore_rsp=0x20002002010offcore_response.pf_l2_rfo.supplier_none.snoop_nonecacheThis event is deprecated. Refer to new event OCR.PF_L2_RFO.SUPPLIER_NONE.SNOOP_NONEevent=0xb7,period=100003,umask=1,offcore_rsp=0x8002002010offcore_response.pf_l3_data_rd.any_responsecacheThis event is deprecated. 
Refer to new event OCR.PF_L3_DATA_RD.ANY_RESPONSEevent=0xb7,period=100003,umask=1,offcore_rsp=0x1008010offcore_response.pf_l3_data_rd.l3_hit.any_snoopcacheThis event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_HIT.ANY_SNOOPevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F803C008010offcore_response.pf_l3_data_rd.l3_hit.hitm_other_corecacheThis event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_HIT.HITM_OTHER_COREevent=0xb7,period=100003,umask=1,offcore_rsp=0x10003C008010offcore_response.pf_l3_data_rd.l3_hit.hit_other_core_fwdcacheThis event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_HIT.HIT_OTHER_CORE_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x8003C008010offcore_response.pf_l3_data_rd.l3_hit.hit_other_core_no_fwdcacheThis event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_HIT.HIT_OTHER_CORE_NO_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x4003C008010offcore_response.pf_l3_data_rd.l3_hit.no_snoop_neededcacheThis event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_HIT.NO_SNOOP_NEEDEDevent=0xb7,period=100003,umask=1,offcore_rsp=0x1003C008010offcore_response.pf_l3_data_rd.l3_hit.snoop_hit_with_fwdcacheThis event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_HIT.SNOOP_HIT_WITH_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x8007C008010offcore_response.pf_l3_data_rd.l3_hit.snoop_misscacheThis event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_HIT.SNOOP_MISSevent=0xb7,period=100003,umask=1,offcore_rsp=0x2003C008010offcore_response.pf_l3_data_rd.l3_hit.snoop_nonecacheThis event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_HIT.SNOOP_NONEevent=0xb7,period=100003,umask=1,offcore_rsp=0x803C008010offcore_response.pf_l3_data_rd.l3_hit_e.any_snoopcacheThis event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_HIT_E.ANY_SNOOPevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F8008008010offcore_response.pf_l3_data_rd.l3_hit_e.hitm_other_corecacheThis event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_HIT_E.HITM_OTHER_COREevent=0xb7,period=100003,umask=1,offcore_rsp=0x100008008010offcore_response.pf_l3_data_rd.l3_hit_e.hit_other_core_fwdcacheThis event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_HIT_E.HIT_OTHER_CORE_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x80008008010offcore_response.pf_l3_data_rd.l3_hit_e.hit_other_core_no_fwdcacheThis event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_HIT_E.HIT_OTHER_CORE_NO_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x40008008010offcore_response.pf_l3_data_rd.l3_hit_e.no_snoop_neededcacheThis event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_HIT_E.NO_SNOOP_NEEDEDevent=0xb7,period=100003,umask=1,offcore_rsp=0x10008008010offcore_response.pf_l3_data_rd.l3_hit_e.snoop_misscacheThis event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_HIT_E.SNOOP_MISSevent=0xb7,period=100003,umask=1,offcore_rsp=0x20008008010offcore_response.pf_l3_data_rd.l3_hit_e.snoop_nonecacheThis event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_HIT_E.SNOOP_NONEevent=0xb7,period=100003,umask=1,offcore_rsp=0x8008008010offcore_response.pf_l3_data_rd.l3_hit_f.any_snoopcacheThis event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_HIT_F.ANY_SNOOPevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F8020008010offcore_response.pf_l3_data_rd.l3_hit_f.hitm_other_corecacheThis event is deprecated. 
Refer to new event OCR.PF_L3_DATA_RD.L3_HIT_F.HITM_OTHER_COREevent=0xb7,period=100003,umask=1,offcore_rsp=0x100020008010offcore_response.pf_l3_data_rd.l3_hit_f.hit_other_core_fwdcacheThis event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_HIT_F.HIT_OTHER_CORE_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x80020008010offcore_response.pf_l3_data_rd.l3_hit_f.hit_other_core_no_fwdcacheThis event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_HIT_F.HIT_OTHER_CORE_NO_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x40020008010offcore_response.pf_l3_data_rd.l3_hit_f.no_snoop_neededcacheThis event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_HIT_F.NO_SNOOP_NEEDEDevent=0xb7,period=100003,umask=1,offcore_rsp=0x10020008010offcore_response.pf_l3_data_rd.l3_hit_f.snoop_misscacheThis event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_HIT_F.SNOOP_MISSevent=0xb7,period=100003,umask=1,offcore_rsp=0x20020008010offcore_response.pf_l3_data_rd.l3_hit_f.snoop_nonecacheThis event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_HIT_F.SNOOP_NONEevent=0xb7,period=100003,umask=1,offcore_rsp=0x8020008010offcore_response.pf_l3_data_rd.l3_hit_m.any_snoopcacheThis event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_HIT_M.ANY_SNOOPevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F8004008010offcore_response.pf_l3_data_rd.l3_hit_m.hitm_other_corecacheThis event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_HIT_M.HITM_OTHER_COREevent=0xb7,period=100003,umask=1,offcore_rsp=0x100004008010offcore_response.pf_l3_data_rd.l3_hit_m.hit_other_core_fwdcacheThis event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_HIT_M.HIT_OTHER_CORE_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x80004008010offcore_response.pf_l3_data_rd.l3_hit_m.hit_other_core_no_fwdcacheThis event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_HIT_M.HIT_OTHER_CORE_NO_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x40004008010offcore_response.pf_l3_data_rd.l3_hit_m.no_snoop_neededcacheThis event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_HIT_M.NO_SNOOP_NEEDEDevent=0xb7,period=100003,umask=1,offcore_rsp=0x10004008010offcore_response.pf_l3_data_rd.l3_hit_m.snoop_misscacheThis event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_HIT_M.SNOOP_MISSevent=0xb7,period=100003,umask=1,offcore_rsp=0x20004008010offcore_response.pf_l3_data_rd.l3_hit_m.snoop_nonecacheThis event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_HIT_M.SNOOP_NONEevent=0xb7,period=100003,umask=1,offcore_rsp=0x8004008010offcore_response.pf_l3_data_rd.l3_hit_s.any_snoopcacheThis event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_HIT_S.ANY_SNOOPevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F8010008010offcore_response.pf_l3_data_rd.l3_hit_s.hitm_other_corecacheThis event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_HIT_S.HITM_OTHER_COREevent=0xb7,period=100003,umask=1,offcore_rsp=0x100010008010offcore_response.pf_l3_data_rd.l3_hit_s.hit_other_core_fwdcacheThis event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_HIT_S.HIT_OTHER_CORE_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x80010008010offcore_response.pf_l3_data_rd.l3_hit_s.hit_other_core_no_fwdcacheThis event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_HIT_S.HIT_OTHER_CORE_NO_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x40010008010offcore_response.pf_l3_data_rd.l3_hit_s.no_snoop_neededcacheThis event is deprecated. 
Refer to new event OCR.PF_L3_DATA_RD.L3_HIT_S.NO_SNOOP_NEEDEDevent=0xb7,period=100003,umask=1,offcore_rsp=0x10010008010offcore_response.pf_l3_data_rd.l3_hit_s.snoop_misscacheThis event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_HIT_S.SNOOP_MISSevent=0xb7,period=100003,umask=1,offcore_rsp=0x20010008010offcore_response.pf_l3_data_rd.l3_hit_s.snoop_nonecacheThis event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_HIT_S.SNOOP_NONEevent=0xb7,period=100003,umask=1,offcore_rsp=0x8010008010offcore_response.pf_l3_data_rd.pmm_hit_local_pmm.any_snoopcacheThis event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.PMM_HIT_LOCAL_PMM.ANY_SNOOPevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F8040008010offcore_response.pf_l3_data_rd.pmm_hit_local_pmm.snoop_nonecacheThis event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.PMM_HIT_LOCAL_PMM.SNOOP_NONEevent=0xb7,period=100003,umask=1,offcore_rsp=0x8040008010offcore_response.pf_l3_data_rd.pmm_hit_local_pmm.snoop_not_neededcacheThis event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.PMM_HIT_LOCAL_PMM.SNOOP_NOT_NEEDEDevent=0xb7,period=100003,umask=1,offcore_rsp=0x10040008010offcore_response.pf_l3_data_rd.supplier_none.any_snoopcacheThis event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.SUPPLIER_NONE.ANY_SNOOPevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F8002008010offcore_response.pf_l3_data_rd.supplier_none.hitm_other_corecacheThis event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.SUPPLIER_NONE.HITM_OTHER_COREevent=0xb7,period=100003,umask=1,offcore_rsp=0x100002008010offcore_response.pf_l3_data_rd.supplier_none.hit_other_core_fwdcacheThis event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.SUPPLIER_NONE.HIT_OTHER_CORE_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x80002008010offcore_response.pf_l3_data_rd.supplier_none.hit_other_core_no_fwdcacheThis event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.SUPPLIER_NONE.HIT_OTHER_CORE_NO_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x40002008010offcore_response.pf_l3_data_rd.supplier_none.no_snoop_neededcacheThis event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.SUPPLIER_NONE.NO_SNOOP_NEEDEDevent=0xb7,period=100003,umask=1,offcore_rsp=0x10002008010offcore_response.pf_l3_data_rd.supplier_none.snoop_misscacheThis event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.SUPPLIER_NONE.SNOOP_MISSevent=0xb7,period=100003,umask=1,offcore_rsp=0x20002008010offcore_response.pf_l3_data_rd.supplier_none.snoop_nonecacheThis event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.SUPPLIER_NONE.SNOOP_NONEevent=0xb7,period=100003,umask=1,offcore_rsp=0x8002008010offcore_response.pf_l3_rfo.any_responsecacheThis event is deprecated. Refer to new event OCR.PF_L3_RFO.ANY_RESPONSEevent=0xb7,period=100003,umask=1,offcore_rsp=0x1010010offcore_response.pf_l3_rfo.l3_hit.any_snoopcacheThis event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_HIT.ANY_SNOOPevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F803C010010offcore_response.pf_l3_rfo.l3_hit.hitm_other_corecacheThis event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_HIT.HITM_OTHER_COREevent=0xb7,period=100003,umask=1,offcore_rsp=0x10003C010010offcore_response.pf_l3_rfo.l3_hit.hit_other_core_fwdcacheThis event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_HIT.HIT_OTHER_CORE_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x8003C010010offcore_response.pf_l3_rfo.l3_hit.hit_other_core_no_fwdcacheThis event is deprecated. 
Refer to new event OCR.PF_L3_RFO.L3_HIT.HIT_OTHER_CORE_NO_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x4003C010010offcore_response.pf_l3_rfo.l3_hit.no_snoop_neededcacheThis event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_HIT.NO_SNOOP_NEEDEDevent=0xb7,period=100003,umask=1,offcore_rsp=0x1003C010010offcore_response.pf_l3_rfo.l3_hit.snoop_hit_with_fwdcacheThis event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_HIT.SNOOP_HIT_WITH_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x8007C010010offcore_response.pf_l3_rfo.l3_hit.snoop_misscacheThis event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_HIT.SNOOP_MISSevent=0xb7,period=100003,umask=1,offcore_rsp=0x2003C010010offcore_response.pf_l3_rfo.l3_hit.snoop_nonecacheThis event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_HIT.SNOOP_NONEevent=0xb7,period=100003,umask=1,offcore_rsp=0x803C010010offcore_response.pf_l3_rfo.l3_hit_e.any_snoopcacheThis event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_HIT_E.ANY_SNOOPevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F8008010010offcore_response.pf_l3_rfo.l3_hit_e.hitm_other_corecacheThis event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_HIT_E.HITM_OTHER_COREevent=0xb7,period=100003,umask=1,offcore_rsp=0x100008010010offcore_response.pf_l3_rfo.l3_hit_e.hit_other_core_fwdcacheThis event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_HIT_E.HIT_OTHER_CORE_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x80008010010offcore_response.pf_l3_rfo.l3_hit_e.hit_other_core_no_fwdcacheThis event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_HIT_E.HIT_OTHER_CORE_NO_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x40008010010offcore_response.pf_l3_rfo.l3_hit_e.no_snoop_neededcacheThis event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_HIT_E.NO_SNOOP_NEEDEDevent=0xb7,period=100003,umask=1,offcore_rsp=0x10008010010offcore_response.pf_l3_rfo.l3_hit_e.snoop_misscacheThis event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_HIT_E.SNOOP_MISSevent=0xb7,period=100003,umask=1,offcore_rsp=0x20008010010offcore_response.pf_l3_rfo.l3_hit_e.snoop_nonecacheThis event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_HIT_E.SNOOP_NONEevent=0xb7,period=100003,umask=1,offcore_rsp=0x8008010010offcore_response.pf_l3_rfo.l3_hit_f.any_snoopcacheThis event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_HIT_F.ANY_SNOOPevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F8020010010offcore_response.pf_l3_rfo.l3_hit_f.hitm_other_corecacheThis event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_HIT_F.HITM_OTHER_COREevent=0xb7,period=100003,umask=1,offcore_rsp=0x100020010010offcore_response.pf_l3_rfo.l3_hit_f.hit_other_core_fwdcacheThis event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_HIT_F.HIT_OTHER_CORE_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x80020010010offcore_response.pf_l3_rfo.l3_hit_f.hit_other_core_no_fwdcacheThis event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_HIT_F.HIT_OTHER_CORE_NO_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x40020010010offcore_response.pf_l3_rfo.l3_hit_f.no_snoop_neededcacheThis event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_HIT_F.NO_SNOOP_NEEDEDevent=0xb7,period=100003,umask=1,offcore_rsp=0x10020010010offcore_response.pf_l3_rfo.l3_hit_f.snoop_misscacheThis event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_HIT_F.SNOOP_MISSevent=0xb7,period=100003,umask=1,offcore_rsp=0x20020010010offcore_response.pf_l3_rfo.l3_hit_f.snoop_nonecacheThis event is deprecated. 
Refer to new event OCR.PF_L3_RFO.L3_HIT_F.SNOOP_NONEevent=0xb7,period=100003,umask=1,offcore_rsp=0x8020010010offcore_response.pf_l3_rfo.l3_hit_m.any_snoopcacheThis event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_HIT_M.ANY_SNOOPevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F8004010010offcore_response.pf_l3_rfo.l3_hit_m.hitm_other_corecacheThis event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_HIT_M.HITM_OTHER_COREevent=0xb7,period=100003,umask=1,offcore_rsp=0x100004010010offcore_response.pf_l3_rfo.l3_hit_m.hit_other_core_fwdcacheThis event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_HIT_M.HIT_OTHER_CORE_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x80004010010offcore_response.pf_l3_rfo.l3_hit_m.hit_other_core_no_fwdcacheThis event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_HIT_M.HIT_OTHER_CORE_NO_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x40004010010offcore_response.pf_l3_rfo.l3_hit_m.no_snoop_neededcacheThis event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_HIT_M.NO_SNOOP_NEEDEDevent=0xb7,period=100003,umask=1,offcore_rsp=0x10004010010offcore_response.pf_l3_rfo.l3_hit_m.snoop_misscacheThis event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_HIT_M.SNOOP_MISSevent=0xb7,period=100003,umask=1,offcore_rsp=0x20004010010offcore_response.pf_l3_rfo.l3_hit_m.snoop_nonecacheThis event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_HIT_M.SNOOP_NONEevent=0xb7,period=100003,umask=1,offcore_rsp=0x8004010010offcore_response.pf_l3_rfo.l3_hit_s.any_snoopcacheThis event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_HIT_S.ANY_SNOOPevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F8010010010offcore_response.pf_l3_rfo.l3_hit_s.hitm_other_corecacheThis event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_HIT_S.HITM_OTHER_COREevent=0xb7,period=100003,umask=1,offcore_rsp=0x100010010010offcore_response.pf_l3_rfo.l3_hit_s.hit_other_core_fwdcacheThis event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_HIT_S.HIT_OTHER_CORE_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x80010010010offcore_response.pf_l3_rfo.l3_hit_s.hit_other_core_no_fwdcacheThis event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_HIT_S.HIT_OTHER_CORE_NO_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x40010010010offcore_response.pf_l3_rfo.l3_hit_s.no_snoop_neededcacheThis event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_HIT_S.NO_SNOOP_NEEDEDevent=0xb7,period=100003,umask=1,offcore_rsp=0x10010010010offcore_response.pf_l3_rfo.l3_hit_s.snoop_misscacheThis event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_HIT_S.SNOOP_MISSevent=0xb7,period=100003,umask=1,offcore_rsp=0x20010010010offcore_response.pf_l3_rfo.l3_hit_s.snoop_nonecacheThis event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_HIT_S.SNOOP_NONEevent=0xb7,period=100003,umask=1,offcore_rsp=0x8010010010offcore_response.pf_l3_rfo.pmm_hit_local_pmm.any_snoopcacheThis event is deprecated. Refer to new event OCR.PF_L3_RFO.PMM_HIT_LOCAL_PMM.ANY_SNOOPevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F8040010010offcore_response.pf_l3_rfo.pmm_hit_local_pmm.snoop_nonecacheThis event is deprecated. Refer to new event OCR.PF_L3_RFO.PMM_HIT_LOCAL_PMM.SNOOP_NONEevent=0xb7,period=100003,umask=1,offcore_rsp=0x8040010010offcore_response.pf_l3_rfo.pmm_hit_local_pmm.snoop_not_neededcacheThis event is deprecated. Refer to new event OCR.PF_L3_RFO.PMM_HIT_LOCAL_PMM.SNOOP_NOT_NEEDEDevent=0xb7,period=100003,umask=1,offcore_rsp=0x10040010010offcore_response.pf_l3_rfo.supplier_none.any_snoopcacheThis event is deprecated. 
Refer to new event OCR.PF_L3_RFO.SUPPLIER_NONE.ANY_SNOOPevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F8002010010offcore_response.pf_l3_rfo.supplier_none.hitm_other_corecacheThis event is deprecated. Refer to new event OCR.PF_L3_RFO.SUPPLIER_NONE.HITM_OTHER_COREevent=0xb7,period=100003,umask=1,offcore_rsp=0x100002010010offcore_response.pf_l3_rfo.supplier_none.hit_other_core_fwdcacheThis event is deprecated. Refer to new event OCR.PF_L3_RFO.SUPPLIER_NONE.HIT_OTHER_CORE_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x80002010010offcore_response.pf_l3_rfo.supplier_none.hit_other_core_no_fwdcacheThis event is deprecated. Refer to new event OCR.PF_L3_RFO.SUPPLIER_NONE.HIT_OTHER_CORE_NO_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x40002010010offcore_response.pf_l3_rfo.supplier_none.no_snoop_neededcacheThis event is deprecated. Refer to new event OCR.PF_L3_RFO.SUPPLIER_NONE.NO_SNOOP_NEEDEDevent=0xb7,period=100003,umask=1,offcore_rsp=0x10002010010offcore_response.pf_l3_rfo.supplier_none.snoop_misscacheThis event is deprecated. Refer to new event OCR.PF_L3_RFO.SUPPLIER_NONE.SNOOP_MISSevent=0xb7,period=100003,umask=1,offcore_rsp=0x20002010010offcore_response.pf_l3_rfo.supplier_none.snoop_nonecacheThis event is deprecated. Refer to new event OCR.PF_L3_RFO.SUPPLIER_NONE.SNOOP_NONEevent=0xb7,period=100003,umask=1,offcore_rsp=0x8002010010sq_misc.split_lockcacheNumber of cache line split locks sent to uncoreevent=0xf4,period=100003,umask=0x1000Counts the number of cache line split locks sent to the uncoresw_prefetch_access.anycacheCounts the number of PREFETCHNTA, PREFETCHW, PREFETCHT0, PREFETCHT1 or PREFETCHT2 instructions executedevent=0x32,period=2000003,umask=0xf00sw_prefetch_access.ntacacheNumber of PREFETCHNTA instructions executedevent=0x32,period=2000003,umask=100sw_prefetch_access.prefetchwcacheNumber of PREFETCHW instructions executedevent=0x32,period=2000003,umask=800sw_prefetch_access.t0cacheNumber of PREFETCHT0 instructions executedevent=0x32,period=2000003,umask=200sw_prefetch_access.t1_t2cacheNumber of PREFETCHT1 or PREFETCHT2 instructions executedevent=0x32,period=2000003,umask=400fp_arith_inst_retired.128b_packed_doublefloating pointCounts once for most SIMD 128-bit packed computational double precision floating-point instructions retired. Counts twice for DPP and FM(N)ADD/SUB instructions retiredevent=0xc7,period=2000003,umask=400Counts once for most SIMD 128-bit packed computational double precision floating-point instructions retired; some instructions will count twice as noted below.  Each count represents 2 computation operations, one for each element.  Applies to packed double precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT DPP FM(N)ADD/SUB.  DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these eventsfp_arith_inst_retired.128b_packed_singlefloating pointCounts once for most SIMD 128-bit packed computational single precision floating-point instructions retired. Counts twice for DPP and FM(N)ADD/SUB instructions retiredevent=0xc7,period=2000003,umask=800Counts once for most SIMD 128-bit packed computational single precision floating-point instructions retired; some instructions will count twice as noted below.  Each count represents 4 computation operations, one for each element.  Applies to packed single precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT RSQRT RCP DPP FM(N)ADD/SUB.  
DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these eventsfp_arith_inst_retired.256b_packed_doublefloating pointCounts once for most SIMD 256-bit packed computational double precision floating-point instructions retired. Counts twice for DPP and FM(N)ADD/SUB instructions retiredevent=0xc7,period=2000003,umask=0x1000Counts once for most SIMD 256-bit packed computational double precision floating-point instructions retired; some instructions will count twice as noted below.  Each count represents 4 computation operations, one for each element.  Applies to packed double precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT FM(N)ADD/SUB.  FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these eventsfp_arith_inst_retired.256b_packed_singlefloating pointCounts once for most SIMD 256-bit packed computational single precision floating-point instructions retired. Counts twice for DPP and FM(N)ADD/SUB instructions retiredevent=0xc7,period=2000003,umask=0x2000Counts once for most SIMD 256-bit packed computational single precision floating-point instructions retired; some instructions will count twice as noted below.  Each count represents 8 computation operations, one for each element.  Applies to packed single precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT RSQRT RCP DPP FM(N)ADD/SUB.  DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these eventsfp_arith_inst_retired.4_flopsfloating pointNumber of SSE/AVX computational 128-bit packed single and 256-bit packed double precision FP instructions retired; some instructions will count twice as noted below.  Each count represents 2 and/or 4 computation operations, 1 for each element.  Applies to SSE* and AVX* packed single precision and packed double precision FP instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX RCP14 RSQRT14 SQRT DPP FM(N)ADD/SUB.  DPP and FM(N)ADD/SUB count twice as they perform 2 calculations per elementevent=0xc7,period=1000003,umask=0x1800Number of SSE/AVX computational 128-bit packed single precision and 256-bit packed double precision  floating-point instructions retired; some instructions will count twice as noted below.  Each count represents 2 and/or 4 computation operations, one for each element.  Applies to SSE* and AVX* packed single precision floating-point and packed double precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX RCP14 RSQRT14 SQRT DPP FM(N)ADD/SUB.  DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these eventsfp_arith_inst_retired.512b_packed_doublefloating pointNumber of SSE/AVX computational 512-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below.  Each count represents 8 computation operations, one for each element.  Applies to SSE* and AVX* packed double precision floating-point instructions: ADD SUB MUL DIV MIN MAX RCP14 RSQRT14 SQRT DPP FM(N)ADD/SUB.  
DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per elementevent=0xc7,period=2000003,umask=0x4000Number of SSE/AVX computational 512-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below.  Each count represents 8 computation operations, one for each element.  Applies to SSE* and AVX* packed double precision floating-point instructions: ADD SUB MUL DIV MIN MAX RCP14 RSQRT14 SQRT DPP FM(N)ADD/SUB.  DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element.  The DAZ and FTZ flags in the MXCSR register need to be set when using these eventsfp_arith_inst_retired.512b_packed_singlefloating pointNumber of SSE/AVX computational 512-bit packed single precision floating-point instructions retired; some instructions will count twice as noted below.  Each count represents 16 computation operations, one for each element.  Applies to SSE* and AVX* packed single precision floating-point instructions: ADD SUB MUL DIV MIN MAX RCP14 RSQRT14 SQRT DPP FM(N)ADD/SUB.  DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per elementevent=0xc7,period=2000003,umask=0x8000Number of SSE/AVX computational 512-bit packed single precision floating-point instructions retired; some instructions will count twice as noted below.  Each count represents 16 computation operations, one for each element.  Applies to SSE* and AVX* packed single precision floating-point instructions: ADD SUB MUL DIV MIN MAX RCP14 RSQRT14 SQRT DPP FM(N)ADD/SUB.  DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these eventsfp_arith_inst_retired.8_flopsfloating pointNumber of SSE/AVX computational 256-bit packed single precision and 512-bit packed double precision  FP instructions retired; some instructions will count twice as noted below.  Each count represents 8 computation operations, 1 for each element.  Applies to SSE* and AVX* packed single precision and double precision FP instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT RSQRT RSQRT14 RCP RCP14 DPP FM(N)ADD/SUB.  DPP and FM(N)ADD/SUB count twice as they perform 2 calculations per elementevent=0xc7,period=1000003,umask=0x1800Number of SSE/AVX computational 256-bit packed single precision and 512-bit packed double precision  floating-point instructions retired; some instructions will count twice as noted below.  Each count represents 8 computation operations, one for each element.  Applies to SSE* and AVX* packed single precision and double precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT RSQRT RSQRT14 RCP RCP14 DPP FM(N)ADD/SUB.  DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these eventsfp_arith_inst_retired.scalarfloating pointCounts once for most SIMD scalar computational floating-point instructions retired. Counts twice for DPP and FM(N)ADD/SUB instructions retiredevent=0xc7,period=2000003,umask=300Counts once for most SIMD scalar computational single precision and double precision floating-point instructions retired; some instructions will count twice as noted below.  Each count represents 1 computational operation. Applies to SIMD scalar single precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT RCP FM(N)ADD/SUB.  
FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these eventsfp_arith_inst_retired.scalar_doublefloating pointCounts once for most SIMD scalar computational double precision floating-point instructions retired. Counts twice for DPP and FM(N)ADD/SUB instructions retiredevent=0xc7,period=2000003,umask=100Counts once for most SIMD scalar computational double precision floating-point instructions retired; some instructions will count twice as noted below.  Each count represents 1 computational operation. Applies to SIMD scalar double precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT FM(N)ADD/SUB.  FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these eventsfp_arith_inst_retired.scalar_singlefloating pointCounts once for most SIMD scalar computational single precision floating-point instructions retired. Counts twice for DPP and FM(N)ADD/SUB instructions retiredevent=0xc7,period=2000003,umask=200Counts once for most SIMD scalar computational single precision floating-point instructions retired; some instructions will count twice as noted below.  Each count represents 1 computational operation. Applies to SIMD scalar single precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT RCP FM(N)ADD/SUB.  FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these eventsfp_arith_inst_retired2.128bit_packed_bf16floating pointIntel AVX-512 computational 128-bit packed BFloat16 instructions retiredevent=0xcf,period=2000003,umask=0x2000Counts once for each Intel AVX-512 computational 128-bit packed BFloat16 floating-point instruction retired. Applies to the XMM based VDPBF16PS instruction.  Each count represents 16 computation operations. This event is only supported on products formerly named Cooper Lake and is not supported on products formerly named Cascade Lakefp_arith_inst_retired2.256bit_packed_bf16floating pointIntel AVX-512 computational 256-bit packed BFloat16 instructions retiredevent=0xcf,period=2000003,umask=0x4000Counts once for each Intel AVX-512 computational 256-bit packed BFloat16 floating-point instruction retired. Applies to the YMM based VDPBF16PS instruction.  Each count represents 32 computation operations. This event is only supported on products formerly named Cooper Lake and is not supported on products formerly named Cascade Lakefp_arith_inst_retired2.512bit_packed_bf16floating pointIntel AVX-512 computational 512-bit packed BFloat16 instructions retiredevent=0xcf,period=2000003,umask=0x8000Counts once for each Intel AVX-512 computational 512-bit packed BFloat16 floating-point instruction retired. Applies to the ZMM based VDPBF16PS instruction.  Each count represents 64 computation operations. This event is only supported on products formerly named Cooper Lake and is not supported on products formerly named Cascade Lakefp_assist.anyfloating pointCycles with any input/output SSE or FP assistevent=0xca,cmask=1,period=100003,umask=0x1e00Counts cycles with any input and output SSE or x87 FP assist. 
If an input and output assist are detected on the same cycle the event increments by 1baclears.anyfrontendCounts the total number of times the front end is resteered, mainly when the BPU cannot provide a correct prediction and this is corrected by other branch handling mechanisms at the front endevent=0xe6,period=100003,umask=100Counts the number of times the front-end is resteered when it finds a branch instruction in a fetch line. This occurs the first time a branch instruction is fetched or when the branch is not tracked by the BPU (Branch Prediction Unit) anymoredecode.lcpfrontendStalls caused by changing prefix length of the instruction. [This event is alias to ILD_STALL.LCP]event=0x87,period=2000003,umask=100Counts cycles in which the Instruction Length Decoder (ILD) stalled due to dynamically changing prefix length of the decoded instruction (by operand size prefix instruction 0x66, address size prefix instruction 0x67 or REX.W for Intel64). Count is proportional to the number of prefixes in a 16B-line. This may result in a three-cycle penalty for each LCP (Length changing prefix) in a 16-byte chunk. [This event is alias to ILD_STALL.LCP]dsb2mite_switches.countfrontendDecode Stream Buffer (DSB)-to-MITE switchesevent=0xab,period=2000003,umask=100This event counts the number of Decode Stream Buffer (DSB)-to-MITE switches including all misses because of missing Decode Stream Buffer (DSB) cache and u-arch forced misses. Note: Invoking MITE requires a delay of two or three cyclesdsb2mite_switches.penalty_cyclesfrontendDecode Stream Buffer (DSB)-to-MITE switch true penalty cyclesevent=0xab,period=2000003,umask=200Counts Decode Stream Buffer (DSB)-to-MITE switch true penalty cycles. These cycles do not include uops routed through because of the switch itself, for example, when Instruction Decode Queue (IDQ) pre-allocation is unavailable, or Instruction Decode Queue (IDQ) is full. DSB-to-MITE switch true penalty cycles happen after the merge mux (MM) receives Decode Stream Buffer (DSB) Sync-indication until receiving the first MITE uop. MM is placed before Instruction Decode Queue (IDQ) to merge uops being fed from the MITE and Decode Stream Buffer (DSB) paths. Decode Stream Buffer (DSB) inserts the Sync-indication whenever a Decode Stream Buffer (DSB)-to-MITE switch occurs. Penalty: A Decode Stream Buffer (DSB) hit followed by a Decode Stream Buffer (DSB) miss can cost up to six cycles in which no uops are delivered to the IDQ. 
Most often, such switches from the Decode Stream Buffer (DSB) to the legacy pipeline cost 0-2 cyclesfrontend_retired.l1i_missfrontendRetired instructions that experienced Instruction L1 Cache true miss (Precise event)event=0xc6,period=100007,umask=1,frontend=0x1200frontend_retired.l2_missfrontendRetired instructions that experienced Instruction L2 Cache true miss (Precise event)event=0xc6,period=100007,umask=1,frontend=0x1300frontend_retired.latency_ge_1frontendRetired instructions after front-end starvation of at least 1 cycle (Must be precise)event=0xc6,period=100007,umask=1,frontend=0x40010600Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of at least 1 cycle which was not interrupted by a back-end stall (Must be precise)frontend_retired.latency_ge_128frontendRetired instructions that are fetched after an interval where the front-end delivered no uops for a period of 128 cycles which was not interrupted by a back-end stall (Precise event)event=0xc6,period=100007,umask=1,frontend=0x40800600frontend_retired.latency_ge_16frontendRetired instructions that are fetched after an interval where the front-end delivered no uops for a period of 16 cycles which was not interrupted by a back-end stall (Precise event)event=0xc6,period=100007,umask=1,frontend=0x40100600Counts retired instructions that are delivered to the back-end after a front-end stall of at least 16 cycles. During this period the front-end delivered no uops (Precise event)frontend_retired.latency_ge_2frontendRetired instructions that are fetched after an interval where the front-end delivered no uops for a period of 2 cycles which was not interrupted by a back-end stall (Precise event)event=0xc6,period=100007,umask=1,frontend=0x40020600frontend_retired.latency_ge_256frontendRetired instructions that are fetched after an interval where the front-end delivered no uops for a period of 256 cycles which was not interrupted by a back-end stall (Precise event)event=0xc6,period=100007,umask=1,frontend=0x41000600frontend_retired.latency_ge_2_bubbles_ge_2frontendRetired instructions that are fetched after an interval where the front-end had at least 2 bubble-slots for a period of 2 cycles which was not interrupted by a back-end stall (Precise event)event=0xc6,period=100007,umask=1,frontend=0x20020600frontend_retired.latency_ge_2_bubbles_ge_3frontendRetired instructions that are fetched after an interval where the front-end had at least 3 bubble-slots for a period of 2 cycles which was not interrupted by a back-end stall (Precise event)event=0xc6,period=100007,umask=1,frontend=0x30020600frontend_retired.latency_ge_32frontendRetired instructions that are fetched after an interval where the front-end delivered no uops for a period of 32 cycles which was not interrupted by a back-end stall (Precise event)event=0xc6,period=100007,umask=1,frontend=0x40200600Counts retired instructions that are delivered to the back-end after a front-end stall of at least 32 cycles. 
During this period the front-end delivered no uops (Precise event)frontend_retired.latency_ge_4frontendRetired instructions that are fetched after an interval where the front-end delivered no uops for a period of 4 cycles which was not interrupted by a back-end stall (Precise event)event=0xc6,period=100007,umask=1,frontend=0x40040600frontend_retired.latency_ge_512frontendRetired instructions that are fetched after an interval where the front-end delivered no uops for a period of 512 cycles which was not interrupted by a back-end stall (Precise event)event=0xc6,period=100007,umask=1,frontend=0x42000600frontend_retired.latency_ge_64frontendRetired instructions that are fetched after an interval where the front-end delivered no uops for a period of 64 cycles which was not interrupted by a back-end stall (Precise event)event=0xc6,period=100007,umask=1,frontend=0x40400600frontend_retired.latency_ge_8frontendRetired instructions that are fetched after an interval where the front-end delivered no uops for a period of 8 cycles which was not interrupted by a back-end stall (Precise event)event=0xc6,period=100007,umask=1,frontend=0x40080600Counts retired instructions that are delivered to the back-end after a front-end stall of at least 8 cycles. During this period the front-end delivered no uops (Precise event)icache_16b.ifdata_stallfrontendCycles where a code fetch is stalled due to L1 instruction cache missevent=0x80,period=2000003,umask=400Cycles where a code line fetch is stalled due to an L1 instruction cache miss. The legacy decode pipeline works at a 16 Byte granularityicache_64b.iftag_hitfrontendInstruction fetch tag lookups that hit in the instruction cache (L1I). Counts at 64-byte cache-line granularityevent=0x83,period=200003,umask=100icache_64b.iftag_missfrontendInstruction fetch tag lookups that miss in the instruction cache (L1I). Counts at 64-byte cache-line granularityevent=0x83,period=200003,umask=200icache_64b.iftag_stallfrontendCycles where a code fetch is stalled due to L1 instruction cache tag miss. [This event is alias to ICACHE_TAG.STALLS]event=0x83,period=200003,umask=400icache_tag.stallsfrontendCycles where a code fetch is stalled due to L1 instruction cache tag miss. [This event is alias to ICACHE_64B.IFTAG_STALL]event=0x83,period=200003,umask=400idq.all_dsb_cycles_4_uopsfrontendCycles Decode Stream Buffer (DSB) is delivering 4 or more Uops [This event is alias to IDQ.DSB_CYCLES_OK]event=0x79,cmask=4,period=2000003,umask=0x1800Counts the number of cycles 4 or more uops were delivered to Instruction Decode Queue (IDQ) from the Decode Stream Buffer (DSB) path. Count includes uops that may 'bypass' the IDQ. [This event is alias to IDQ.DSB_CYCLES_OK]idq.all_dsb_cycles_any_uopsfrontendCycles Decode Stream Buffer (DSB) is delivering any Uop [This event is alias to IDQ.DSB_CYCLES_ANY]event=0x79,cmask=1,period=2000003,umask=0x1800Counts the number of cycles uops were delivered to Instruction Decode Queue (IDQ) from the Decode Stream Buffer (DSB) path. Count includes uops that may 'bypass' the IDQ. [This event is alias to IDQ.DSB_CYCLES_ANY]idq.all_mite_cycles_4_uopsfrontendCycles MITE is delivering 4 Uopsevent=0x79,cmask=4,period=2000003,umask=0x2400Counts the number of cycles 4 uops were delivered to the Instruction Decode Queue (IDQ) from the MITE (legacy decode pipeline) path. Counting includes uops that may 'bypass' the IDQ. 
During these cycles uops are not being delivered from the Decode Stream Buffer (DSB)idq.all_mite_cycles_any_uopsfrontendCycles MITE is delivering any Uopevent=0x79,cmask=1,period=2000003,umask=0x2400Counts the number of cycles uops were delivered to the Instruction Decode Queue (IDQ) from the MITE (legacy decode pipeline) path. Counting includes uops that may 'bypass' the IDQ. During these cycles uops are not being delivered from the Decode Stream Buffer (DSB)idq.dsb_cyclesfrontendCycles when uops are being delivered to Instruction Decode Queue (IDQ) from Decode Stream Buffer (DSB) pathevent=0x79,cmask=1,period=2000003,umask=800Counts cycles during which uops are being delivered to Instruction Decode Queue (IDQ) from the Decode Stream Buffer (DSB) path. Counting includes uops that may 'bypass' the IDQidq.dsb_cycles_anyfrontendCycles Decode Stream Buffer (DSB) is delivering any Uop [This event is alias to IDQ.ALL_DSB_CYCLES_ANY_UOPS]event=0x79,cmask=1,period=2000003,umask=0x1800Counts the number of cycles uops were delivered to Instruction Decode Queue (IDQ) from the Decode Stream Buffer (DSB) path. Count includes uops that may 'bypass' the IDQ. [This event is alias to IDQ.ALL_DSB_CYCLES_ANY_UOPS]idq.dsb_cycles_okfrontendCycles Decode Stream Buffer (DSB) is delivering 4 or more Uops [This event is alias to IDQ.ALL_DSB_CYCLES_4_UOPS]event=0x79,cmask=4,period=2000003,umask=0x1800Counts the number of cycles 4 or more uops were delivered to Instruction Decode Queue (IDQ) from the Decode Stream Buffer (DSB) path. Count includes uops that may 'bypass' the IDQ. [This event is alias to IDQ.ALL_DSB_CYCLES_4_UOPS]idq.dsb_uopsfrontendUops delivered to Instruction Decode Queue (IDQ) from the Decode Stream Buffer (DSB) pathevent=0x79,period=2000003,umask=800Counts the number of uops delivered to Instruction Decode Queue (IDQ) from the Decode Stream Buffer (DSB) path. Counting includes uops that may 'bypass' the IDQidq.mite_cyclesfrontendCycles when uops are being delivered to Instruction Decode Queue (IDQ) from MITE pathevent=0x79,cmask=1,period=2000003,umask=400Counts cycles during which uops are being delivered to Instruction Decode Queue (IDQ) from the MITE path. Counting includes uops that may 'bypass' the IDQidq.mite_uopsfrontendUops delivered to Instruction Decode Queue (IDQ) from MITE pathevent=0x79,period=2000003,umask=400Counts the number of uops delivered to Instruction Decode Queue (IDQ) from the MITE path. Counting includes uops that may 'bypass' the IDQ. This also means that uops are not being delivered from the Decode Stream Buffer (DSB)idq.ms_cyclesfrontendCycles when uops are being delivered to Instruction Decode Queue (IDQ) while Microcode Sequencer (MS) is busyevent=0x79,cmask=1,period=2000003,umask=0x3000Counts cycles during which uops are being delivered to Instruction Decode Queue (IDQ) while the Microcode Sequencer (MS) is busy. Counting includes uops that may 'bypass' the IDQ. Uops may be initiated by Decode Stream Buffer (DSB) or MITEidq.ms_dsb_cyclesfrontendCycles when uops initiated by Decode Stream Buffer (DSB) are being delivered to Instruction Decode Queue (IDQ) while Microcode Sequencer (MS) is busyevent=0x79,cmask=1,period=2000003,umask=0x1000Counts cycles during which uops initiated by Decode Stream Buffer (DSB) are being delivered to Instruction Decode Queue (IDQ) while the Microcode Sequencer (MS) is busy. 
Counting includes uops that may 'bypass' the IDQidq.ms_mite_uopsfrontendUops initiated by MITE and delivered to Instruction Decode Queue (IDQ) while Microcode Sequencer (MS) is busyevent=0x79,period=2000003,umask=0x2000Counts the number of uops initiated by MITE and delivered to Instruction Decode Queue (IDQ) while the Microcode Sequencer (MS) is busy. Counting includes uops that may 'bypass' the IDQidq.ms_switchesfrontendNumber of switches from DSB (Decode Stream Buffer) or MITE (legacy decode pipeline) to the Microcode Sequencerevent=0x79,cmask=1,edge=1,period=2000003,umask=0x3000Number of switches from DSB (Decode Stream Buffer) or MITE (legacy decode pipeline) to the Microcode Sequenceridq.ms_uopsfrontendUops delivered to Instruction Decode Queue (IDQ) while Microcode Sequencer (MS) is busyevent=0x79,period=2000003,umask=0x3000Counts the total number of uops delivered by the Microcode Sequencer (MS). Any instruction over 4 uops will be delivered by the MS. Some instructions such as transcendentals may additionally generate uops from the MSidq_uops_not_delivered.corefrontendUops not delivered to Resource Allocation Table (RAT) per thread when backend of the machine is not stalledevent=0x9c,period=2000003,umask=100Counts the number of uops not delivered to Resource Allocation Table (RAT) per thread, adding 4 - x when Resource Allocation Table (RAT) is not stalled and Instruction Decode Queue (IDQ) delivers x uops to Resource Allocation Table (RAT) (where x belongs to {0,1,2,3}). Counting does not cover cases when: a. IDQ-Resource Allocation Table (RAT) pipe serves the other thread. b. Resource Allocation Table (RAT) is stalled for the thread (including uop drops and clear BE conditions).  c. Instruction Decode Queue (IDQ) delivers four uopsidq_uops_not_delivered.cycles_0_uops_deliv.corefrontendCycles per thread when 4 or more uops are not delivered to Resource Allocation Table (RAT) when backend of the machine is not stalledevent=0x9c,cmask=4,period=2000003,umask=100Counts, on the per-thread basis, cycles when no uops are delivered to Resource Allocation Table (RAT). IDQ_Uops_Not_Delivered.core >= 4idq_uops_not_delivered.cycles_le_1_uop_deliv.corefrontendCycles per thread when 3 or more uops are not delivered to Resource Allocation Table (RAT) when backend of the machine is not stalledevent=0x9c,cmask=3,period=2000003,umask=100Counts, on the per-thread basis, cycles when 1 or fewer uops are delivered to Resource Allocation Table (RAT). 
IDQ_Uops_Not_Delivered.core >= 3idq_uops_not_delivered.cycles_le_2_uop_deliv.corefrontendCycles with 2 or fewer uops delivered by the front endevent=0x9c,cmask=2,period=2000003,umask=100Cycles with 2 or fewer uops delivered by the front-endidq_uops_not_delivered.cycles_le_3_uop_deliv.corefrontendCycles with 3 or fewer uops delivered by the front endevent=0x9c,cmask=1,period=2000003,umask=100Cycles with 3 or fewer uops delivered by the front-endcycle_activity.cycles_l3_missmemoryCycles while L3 cache miss demand load is outstandingevent=0xa3,cmask=2,period=2000003,umask=200cycle_activity.stalls_l3_missmemoryExecution stalls while L3 cache miss demand load is outstandingevent=0xa3,cmask=6,period=2000003,umask=600hle_retired.abortedmemoryNumber of times an HLE execution aborted due to any reason (multiple categories may count as one) (Precise event)event=0xc8,period=2000003,umask=400Number of times HLE abort was triggered (Precise event)hle_retired.aborted_eventsmemoryNumber of times an HLE execution aborted due to unfriendly events (such as interrupts)event=0xc8,period=2000003,umask=0x8000hle_retired.aborted_memmemoryNumber of times an HLE execution aborted due to various memory events (e.g., read/write capacity and conflicts)event=0xc8,period=2000003,umask=800hle_retired.aborted_memtypememoryNumber of times an HLE execution aborted due to incompatible memory typeevent=0xc8,period=2000003,umask=0x4000Number of times an HLE execution aborted due to incompatible memory typehle_retired.aborted_timermemoryNumber of times an HLE execution aborted due to hardware timer expirationevent=0xc8,period=2000003,umask=0x1000hle_retired.aborted_unfriendlymemoryNumber of times an HLE execution aborted due to HLE-unfriendly instructions and certain unfriendly events (such as AD assists etc.)event=0xc8,period=2000003,umask=0x2000hle_retired.commitmemoryNumber of times an HLE execution successfully committedevent=0xc8,period=2000003,umask=200Number of times HLE commit succeededhle_retired.startmemoryNumber of times an HLE execution startedevent=0xc8,period=2000003,umask=100Number of times we entered an HLE region. Does not count nested transactionsmachine_clears.memory_orderingmemoryCounts the number of machine clears due to memory order conflicts  Spec update: SKL089event=0xc3,period=100003,umask=200Counts the number of memory ordering Machine Clears detected. Memory Ordering Machine Clears can result from one of the following: a. memory disambiguation, b. external snoop, or c. 
cross SMT-HW-thread snoop (stores) hitting load buffer  Spec update: SKL089ocr.all_data_rd.l3_miss.any_snoopmemoryOCR.ALL_DATA_RD.L3_MISS.ANY_SNOOP OCR.ALL_DATA_RD.L3_MISS.ANY_SNOOP OCR.ALL_DATA_RD.L3_MISS.ANY_SNOOPevent=0xb7,period=100003,umask=1,offcore_rsp=0x3FBC00049100ocr.all_data_rd.l3_miss.hitm_other_corememoryOCR.ALL_DATA_RD.L3_MISS.HITM_OTHER_CORE OCR.ALL_DATA_RD.L3_MISS.HITM_OTHER_CORE OCR.ALL_DATA_RD.L3_MISS.HITM_OTHER_COREevent=0xb7,period=100003,umask=1,offcore_rsp=0x103C00049100ocr.all_data_rd.l3_miss.hit_other_core_fwdmemoryOCR.ALL_DATA_RD.L3_MISS.HIT_OTHER_CORE_FWD OCR.ALL_DATA_RD.L3_MISS.HIT_OTHER_CORE_FWD OCR.ALL_DATA_RD.L3_MISS.HIT_OTHER_CORE_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x83C00049100ocr.all_data_rd.l3_miss.hit_other_core_no_fwdmemoryOCR.ALL_DATA_RD.L3_MISS.HIT_OTHER_CORE_NO_FWD OCR.ALL_DATA_RD.L3_MISS.HIT_OTHER_CORE_NO_FWD OCR.ALL_DATA_RD.L3_MISS.HIT_OTHER_CORE_NO_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x43C00049100ocr.all_data_rd.l3_miss.no_snoop_neededmemoryOCR.ALL_DATA_RD.L3_MISS.NO_SNOOP_NEEDED OCR.ALL_DATA_RD.L3_MISS.NO_SNOOP_NEEDED OCR.ALL_DATA_RD.L3_MISS.NO_SNOOP_NEEDEDevent=0xb7,period=100003,umask=1,offcore_rsp=0x13C00049100ocr.all_data_rd.l3_miss.remote_hitmmemoryOCR.ALL_DATA_RD.L3_MISS.REMOTE_HITM OCR.ALL_DATA_RD.L3_MISS.REMOTE_HITMevent=0xb7,period=100003,umask=1,offcore_rsp=0x103FC0049100ocr.all_data_rd.l3_miss.remote_hit_forwardmemoryOCR.ALL_DATA_RD.L3_MISS.REMOTE_HIT_FORWARD OCR.ALL_DATA_RD.L3_MISS.REMOTE_HIT_FORWARDevent=0xb7,period=100003,umask=1,offcore_rsp=0x83FC0049100ocr.all_data_rd.l3_miss.snoop_missmemoryOCR.ALL_DATA_RD.L3_MISS.SNOOP_MISS OCR.ALL_DATA_RD.L3_MISS.SNOOP_MISSevent=0xb7,period=100003,umask=1,offcore_rsp=0x23C00049100ocr.all_data_rd.l3_miss.snoop_nonememoryOCR.ALL_DATA_RD.L3_MISS.SNOOP_NONE OCR.ALL_DATA_RD.L3_MISS.SNOOP_NONEevent=0xb7,period=100003,umask=1,offcore_rsp=0xBC00049100ocr.all_data_rd.l3_miss_local_dram.any_snoopmemoryOCR.ALL_DATA_RD.L3_MISS_LOCAL_DRAM.ANY_SNOOP  OCR.ALL_DATA_RD.L3_MISS_LOCAL_DRAM.ANY_SNOOPevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F8400049100ocr.all_data_rd.l3_miss_local_dram.hitm_other_corememoryOCR.ALL_DATA_RD.L3_MISS_LOCAL_DRAM.HITM_OTHER_CORE  OCR.ALL_DATA_RD.L3_MISS_LOCAL_DRAM.HITM_OTHER_COREevent=0xb7,period=100003,umask=1,offcore_rsp=0x100400049100ocr.all_data_rd.l3_miss_local_dram.hit_other_core_fwdmemoryOCR.ALL_DATA_RD.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_FWD  OCR.ALL_DATA_RD.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x80400049100ocr.all_data_rd.l3_miss_local_dram.hit_other_core_no_fwdmemoryOCR.ALL_DATA_RD.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_NO_FWD  OCR.ALL_DATA_RD.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_NO_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x40400049100ocr.all_data_rd.l3_miss_local_dram.no_snoop_neededmemoryOCR.ALL_DATA_RD.L3_MISS_LOCAL_DRAM.NO_SNOOP_NEEDED  OCR.ALL_DATA_RD.L3_MISS_LOCAL_DRAM.NO_SNOOP_NEEDEDevent=0xb7,period=100003,umask=1,offcore_rsp=0x10400049100ocr.all_data_rd.l3_miss_local_dram.snoop_missmemoryOCR.ALL_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_MISSevent=0xb7,period=100003,umask=1,offcore_rsp=0x20400049100ocr.all_data_rd.l3_miss_local_dram.snoop_miss_or_no_fwdmemoryOCR.ALL_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWD 
OCR.ALL_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x60400049100ocr.all_data_rd.l3_miss_local_dram.snoop_nonememoryOCR.ALL_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_NONEevent=0xb7,period=100003,umask=1,offcore_rsp=0x8400049100ocr.all_data_rd.l3_miss_remote_dram.snoop_miss_or_no_fwdmemoryOCR.ALL_DATA_RD.L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWD OCR.ALL_DATA_RD.L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x63B80049100ocr.all_data_rd.l3_miss_remote_hop1_dram.any_snoopmemoryOCR.ALL_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.ANY_SNOOP  OCR.ALL_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.ANY_SNOOPevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F9000049100ocr.all_data_rd.l3_miss_remote_hop1_dram.hitm_other_corememoryOCR.ALL_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.HITM_OTHER_CORE  OCR.ALL_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.HITM_OTHER_COREevent=0xb7,period=100003,umask=1,offcore_rsp=0x101000049100ocr.all_data_rd.l3_miss_remote_hop1_dram.hit_other_core_fwdmemoryOCR.ALL_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_FWD  OCR.ALL_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x81000049100ocr.all_data_rd.l3_miss_remote_hop1_dram.hit_other_core_no_fwdmemoryOCR.ALL_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_NO_FWD  OCR.ALL_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_NO_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x41000049100ocr.all_data_rd.l3_miss_remote_hop1_dram.no_snoop_neededmemoryOCR.ALL_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.NO_SNOOP_NEEDED  OCR.ALL_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.NO_SNOOP_NEEDEDevent=0xb7,period=100003,umask=1,offcore_rsp=0x11000049100ocr.all_data_rd.l3_miss_remote_hop1_dram.snoop_missmemoryOCR.ALL_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.SNOOP_MISSevent=0xb7,period=100003,umask=1,offcore_rsp=0x21000049100ocr.all_data_rd.l3_miss_remote_hop1_dram.snoop_nonememoryOCR.ALL_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.SNOOP_NONEevent=0xb7,period=100003,umask=1,offcore_rsp=0x9000049100ocr.all_pf_data_rd.l3_miss.any_snoopmemoryOCR.ALL_PF_DATA_RD.L3_MISS.ANY_SNOOP OCR.ALL_PF_DATA_RD.L3_MISS.ANY_SNOOP OCR.ALL_PF_DATA_RD.L3_MISS.ANY_SNOOPevent=0xb7,period=100003,umask=1,offcore_rsp=0x3FBC00049000ocr.all_pf_data_rd.l3_miss.hitm_other_corememoryOCR.ALL_PF_DATA_RD.L3_MISS.HITM_OTHER_CORE OCR.ALL_PF_DATA_RD.L3_MISS.HITM_OTHER_CORE OCR.ALL_PF_DATA_RD.L3_MISS.HITM_OTHER_COREevent=0xb7,period=100003,umask=1,offcore_rsp=0x103C00049000ocr.all_pf_data_rd.l3_miss.hit_other_core_fwdmemoryOCR.ALL_PF_DATA_RD.L3_MISS.HIT_OTHER_CORE_FWD OCR.ALL_PF_DATA_RD.L3_MISS.HIT_OTHER_CORE_FWD OCR.ALL_PF_DATA_RD.L3_MISS.HIT_OTHER_CORE_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x83C00049000ocr.all_pf_data_rd.l3_miss.hit_other_core_no_fwdmemoryOCR.ALL_PF_DATA_RD.L3_MISS.HIT_OTHER_CORE_NO_FWD OCR.ALL_PF_DATA_RD.L3_MISS.HIT_OTHER_CORE_NO_FWD OCR.ALL_PF_DATA_RD.L3_MISS.HIT_OTHER_CORE_NO_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x43C00049000ocr.all_pf_data_rd.l3_miss.no_snoop_neededmemoryOCR.ALL_PF_DATA_RD.L3_MISS.NO_SNOOP_NEEDED OCR.ALL_PF_DATA_RD.L3_MISS.NO_SNOOP_NEEDED OCR.ALL_PF_DATA_RD.L3_MISS.NO_SNOOP_NEEDEDevent=0xb7,period=100003,umask=1,offcore_rsp=0x13C00049000ocr.all_pf_data_rd.l3_miss.remote_hitmmemoryOCR.ALL_PF_DATA_RD.L3_MISS.REMOTE_HITM OCR.ALL_PF_DATA_RD.L3_MISS.REMOTE_HITMevent=0xb7,period=100003,umask=1,offcore_rsp=0x103FC0049000ocr.all_pf_data_rd.l3_miss.remote_hit_forwardmemoryOCR.ALL_PF_DATA_RD.L3_MISS.REMOTE_HIT_FORWARD 
perf PMU event table (topic "memory"), continued: OCR.* (offcore-response) L3-miss events. Each record in this string table is a lowercase event key, the topic "memory", one or more copies of a brief description, and the perf encoding. Every OCR.* event here shares the encoding event=0xb7,period=100003,umask=1 and differs only in its offcore_rsp value, which is the OR of a request-type mask (low bits) and an L3-miss supplier/snoop mask (high bits).

Request-type masks (low bits of offcore_rsp), with the descriptions the table gives:

OCR.ALL_PF_DATA_RD    0x49000   (description repeats the event name)
OCR.ALL_PF_RFO        0x12000   (description repeats the event name)
OCR.ALL_READS         0x7F700   (description repeats the event name)
OCR.ALL_RFO           0x12200   (description repeats the event name)
OCR.DEMAND_CODE_RD    0x400     Counts all demand code reads
OCR.DEMAND_DATA_RD    0x100     Counts demand data reads
OCR.DEMAND_RFO        0x200     Counts all demand data writes (RFOs)
OCR.OTHER             0x800000  Counts any other requests
OCR.PF_L1D_AND_SW     0x40000   Counts L1 data cache hardware prefetch requests and software prefetch requests
OCR.PF_L2_DATA_RD     0x1000    Counts prefetch (that bring data to L2) data reads
OCR.PF_L2_RFO         0x2000    Counts all prefetch (that bring data to L2) RFOs
OCR.PF_L3_DATA_RD     0x8000    Counts all prefetch (that bring data to LLC only) data reads
OCR.PF_L3_RFO         0x10000   Counts all prefetch (that bring data to LLC only) RFOs
Supplier/snoop masks (high bits of offcore_rsp). Each request type above appears combined with each of the following suffixes:

.L3_MISS.ANY_SNOOP                               0x3FBC00000000
.L3_MISS.HITM_OTHER_CORE                         0x103C00000000
.L3_MISS.HIT_OTHER_CORE_FWD                      0x83C00000000
.L3_MISS.HIT_OTHER_CORE_NO_FWD                   0x43C00000000
.L3_MISS.NO_SNOOP_NEEDED                         0x13C00000000
.L3_MISS.REMOTE_HITM                             0x103FC0000000
.L3_MISS.REMOTE_HIT_FORWARD                      0x83FC0000000
.L3_MISS.SNOOP_MISS                              0x23C00000000
.L3_MISS.SNOOP_NONE                              0xBC00000000
.L3_MISS_LOCAL_DRAM.ANY_SNOOP                    0x3F8400000000
.L3_MISS_LOCAL_DRAM.HITM_OTHER_CORE              0x100400000000
.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_FWD           0x80400000000
.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_NO_FWD        0x40400000000
.L3_MISS_LOCAL_DRAM.NO_SNOOP_NEEDED              0x10400000000
.L3_MISS_LOCAL_DRAM.SNOOP_MISS                   0x20400000000
.L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWD         0x60400000000
.L3_MISS_LOCAL_DRAM.SNOOP_NONE                   0x8400000000
.L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWD        0x63B80000000
.L3_MISS_REMOTE_HOP1_DRAM.ANY_SNOOP              0x3F9000000000
.L3_MISS_REMOTE_HOP1_DRAM.HITM_OTHER_CORE        0x101000000000
.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_FWD     0x81000000000
.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_NO_FWD  0x41000000000
.L3_MISS_REMOTE_HOP1_DRAM.NO_SNOOP_NEEDED        0x11000000000
.L3_MISS_REMOTE_HOP1_DRAM.SNOOP_MISS             0x21000000000
.L3_MISS_REMOTE_HOP1_DRAM.SNOOP_NONE             0x9000000000

The offcore_rsp of a named event is the OR of its two masks. For example, OCR.ALL_PF_DATA_RD.L3_MISS.REMOTE_HIT_FORWARD carries offcore_rsp=0x83FC0049000 (0x83FC0000000 | 0x49000), and OCR.DEMAND_RFO.L3_MISS_LOCAL_DRAM.SNOOP_NONE carries offcore_rsp=0x8400000200 (0x8400000000 | 0x200).
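The composition above is mechanical, so the flattened records can be regenerated rather than read off one by one. A minimal sketch in Python, assuming only the two tables above (the dictionaries are transcribed from them and abbreviated here):

# Recompose OCR.* event encodings from the two mask tables.
# offcore_rsp = supplier/snoop mask | request-type mask, as the table shows.
REQUEST = {
    "ALL_PF_DATA_RD": 0x49000,
    "DEMAND_DATA_RD": 0x100,
    "DEMAND_RFO": 0x200,
    # ... remaining request types from the first table
}
SNOOP = {
    "L3_MISS.REMOTE_HIT_FORWARD": 0x83FC0000000,
    "L3_MISS_LOCAL_DRAM.SNOOP_NONE": 0x8400000000,
    # ... remaining supplier/snoop masks from the second table
}

def encoding(request: str, snoop: str) -> str:
    """Perf encoding string for OCR.<request>.<snoop>."""
    rsp = SNOOP[snoop] | REQUEST[request]
    return f"event=0xb7,period=100003,umask=1,offcore_rsp={rsp:#x}"

# Reproduces the record seen in the dump:
assert encoding("ALL_PF_DATA_RD", "L3_MISS.REMOTE_HIT_FORWARD") == \
    "event=0xb7,period=100003,umask=1,offcore_rsp=0x83fc0049000"
print(encoding("DEMAND_RFO", "L3_MISS_LOCAL_DRAM.SNOOP_NONE"))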
OCR.PF_L3_RFO.L3_MISS_LOCAL_DRAM.HITM_OTHER_COREevent=0xb7,period=100003,umask=1,offcore_rsp=0x100400010000ocr.pf_l3_rfo.l3_miss_local_dram.hit_other_core_fwdmemoryCounts all prefetch (that bring data to LLC only) RFOs  OCR.PF_L3_RFO.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x80400010000ocr.pf_l3_rfo.l3_miss_local_dram.hit_other_core_no_fwdmemoryCounts all prefetch (that bring data to LLC only) RFOs  OCR.PF_L3_RFO.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_NO_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x40400010000ocr.pf_l3_rfo.l3_miss_local_dram.no_snoop_neededmemoryCounts all prefetch (that bring data to LLC only) RFOs  OCR.PF_L3_RFO.L3_MISS_LOCAL_DRAM.NO_SNOOP_NEEDEDevent=0xb7,period=100003,umask=1,offcore_rsp=0x10400010000ocr.pf_l3_rfo.l3_miss_local_dram.snoop_missmemoryCounts all prefetch (that bring data to LLC only) RFOsevent=0xb7,period=100003,umask=1,offcore_rsp=0x20400010000ocr.pf_l3_rfo.l3_miss_local_dram.snoop_miss_or_no_fwdmemoryCounts all prefetch (that bring data to LLC only) RFOs OCR.PF_L3_RFO.L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x60400010000ocr.pf_l3_rfo.l3_miss_local_dram.snoop_nonememoryCounts all prefetch (that bring data to LLC only) RFOsevent=0xb7,period=100003,umask=1,offcore_rsp=0x8400010000ocr.pf_l3_rfo.l3_miss_remote_dram.snoop_miss_or_no_fwdmemoryCounts all prefetch (that bring data to LLC only) RFOs OCR.PF_L3_RFO.L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x63B80010000ocr.pf_l3_rfo.l3_miss_remote_hop1_dram.any_snoopmemoryCounts all prefetch (that bring data to LLC only) RFOs  OCR.PF_L3_RFO.L3_MISS_REMOTE_HOP1_DRAM.ANY_SNOOPevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F9000010000ocr.pf_l3_rfo.l3_miss_remote_hop1_dram.hitm_other_corememoryCounts all prefetch (that bring data to LLC only) RFOs  OCR.PF_L3_RFO.L3_MISS_REMOTE_HOP1_DRAM.HITM_OTHER_COREevent=0xb7,period=100003,umask=1,offcore_rsp=0x101000010000ocr.pf_l3_rfo.l3_miss_remote_hop1_dram.hit_other_core_fwdmemoryCounts all prefetch (that bring data to LLC only) RFOs  OCR.PF_L3_RFO.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x81000010000ocr.pf_l3_rfo.l3_miss_remote_hop1_dram.hit_other_core_no_fwdmemoryCounts all prefetch (that bring data to LLC only) RFOs  OCR.PF_L3_RFO.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_NO_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x41000010000ocr.pf_l3_rfo.l3_miss_remote_hop1_dram.no_snoop_neededmemoryCounts all prefetch (that bring data to LLC only) RFOs  OCR.PF_L3_RFO.L3_MISS_REMOTE_HOP1_DRAM.NO_SNOOP_NEEDEDevent=0xb7,period=100003,umask=1,offcore_rsp=0x11000010000ocr.pf_l3_rfo.l3_miss_remote_hop1_dram.snoop_missmemoryCounts all prefetch (that bring data to LLC only) RFOsevent=0xb7,period=100003,umask=1,offcore_rsp=0x21000010000ocr.pf_l3_rfo.l3_miss_remote_hop1_dram.snoop_nonememoryCounts all prefetch (that bring data to LLC only) RFOsevent=0xb7,period=100003,umask=1,offcore_rsp=0x9000010000offcore_requests.l3_miss_demand_data_rdmemoryDemand Data Read requests who miss L3 cacheevent=0xb0,period=100003,umask=0x1000Demand Data Read requests who miss L3 cacheoffcore_requests_outstanding.cycles_with_l3_miss_demand_data_rdmemoryCycles with at least 1 Demand Data Read requests who miss L3 cache in the superQevent=0x60,cmask=1,period=2000003,umask=0x1000offcore_requests_outstanding.l3_miss_demand_data_rdmemoryCounts number of Offcore outstanding Demand Data Read requests that miss L3 cache in the superQ every 
offcore_requests_outstanding.l3_miss_demand_data_rd_ge_6, memory: Cycles with at least 6 Demand Data Read
  requests that miss the L3 cache in the superQ. event=0x60,cmask=6,period=2000003,umask=0x10

The offcore_response.* entries below are deprecated aliases: each refers to the OCR.* event of the same name
in uppercase. Every entry is topic memory with encoding event=0xb7,period=100003,umask=1; only the
offcore_rsp value differs, listed per variant.

offcore_response.all_data_rd.* (deprecated; use OCR.ALL_DATA_RD.*):
  l3_miss: any_snoop=0x3FBC00049110  hitm_other_core=0x103C00049110  hit_other_core_fwd=0x83C00049110
    hit_other_core_no_fwd=0x43C00049110  no_snoop_needed=0x13C00049110  remote_hitm=0x103FC0049110
    remote_hit_forward=0x83FC0049110  snoop_miss=0x23C00049110  snoop_none=0xBC00049110
  l3_miss_local_dram: any_snoop=0x3F8400049110  hitm_other_core=0x100400049110  hit_other_core_fwd=0x80400049110
    hit_other_core_no_fwd=0x40400049110  no_snoop_needed=0x10400049110  snoop_miss=0x20400049110
    snoop_miss_or_no_fwd=0x60400049110  snoop_none=0x8400049110
  l3_miss_remote_dram: snoop_miss_or_no_fwd=0x63B80049110
  l3_miss_remote_hop1_dram: any_snoop=0x3F9000049110  hitm_other_core=0x101000049110  hit_other_core_fwd=0x81000049110
    hit_other_core_no_fwd=0x41000049110  no_snoop_needed=0x11000049110  snoop_miss=0x21000049110
    snoop_none=0x9000049110

offcore_response.all_pf_data_rd.* (deprecated; use OCR.ALL_PF_DATA_RD.*):
  l3_miss: any_snoop=0x3FBC00049010  hitm_other_core=0x103C00049010  hit_other_core_fwd=0x83C00049010
    hit_other_core_no_fwd=0x43C00049010  no_snoop_needed=0x13C00049010  remote_hitm=0x103FC0049010
    remote_hit_forward=0x83FC0049010  snoop_miss=0x23C00049010  snoop_none=0xBC00049010
  l3_miss_local_dram: any_snoop=0x3F8400049010  hitm_other_core=0x100400049010  hit_other_core_fwd=0x80400049010
    hit_other_core_no_fwd=0x40400049010  no_snoop_needed=0x10400049010  snoop_miss=0x20400049010
    snoop_miss_or_no_fwd=0x60400049010  snoop_none=0x8400049010
  l3_miss_remote_dram: snoop_miss_or_no_fwd=0x63B80049010
  l3_miss_remote_hop1_dram: any_snoop=0x3F9000049010  hitm_other_core=0x101000049010  hit_other_core_fwd=0x81000049010
    hit_other_core_no_fwd=0x41000049010  no_snoop_needed=0x11000049010  snoop_miss=0x21000049010
    snoop_none=0x9000049010
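As the table shows, the deprecated-to-replacement mapping is purely mechanical: swap the offcore_response
prefix for OCR and uppercase the rest of the name. A small Python sketch of that transform:

# Map a deprecated offcore_response.* event name to its OCR.* replacement,
# following the naming pattern visible in the table above.
def ocr_replacement(deprecated_name: str) -> str:
    prefix = "offcore_response."
    if not deprecated_name.startswith(prefix):
        raise ValueError(f"not a deprecated offcore_response event: {deprecated_name}")
    return "OCR." + deprecated_name[len(prefix):].upper()

assert (ocr_replacement("offcore_response.all_data_rd.l3_miss.any_snoop")
        == "OCR.ALL_DATA_RD.L3_MISS.ANY_SNOOP")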
offcore_response.all_pf_rfo.* (deprecated; use OCR.ALL_PF_RFO.*):
  l3_miss: any_snoop=0x3FBC00012010  hitm_other_core=0x103C00012010  hit_other_core_fwd=0x83C00012010
    hit_other_core_no_fwd=0x43C00012010  no_snoop_needed=0x13C00012010  remote_hitm=0x103FC0012010
    remote_hit_forward=0x83FC0012010  snoop_miss=0x23C00012010  snoop_none=0xBC00012010
  l3_miss_local_dram: any_snoop=0x3F8400012010  hitm_other_core=0x100400012010  hit_other_core_fwd=0x80400012010
    hit_other_core_no_fwd=0x40400012010  no_snoop_needed=0x10400012010  snoop_miss=0x20400012010
    snoop_miss_or_no_fwd=0x60400012010  snoop_none=0x8400012010
  l3_miss_remote_dram: snoop_miss_or_no_fwd=0x63B80012010
  l3_miss_remote_hop1_dram: any_snoop=0x3F9000012010  hitm_other_core=0x101000012010  hit_other_core_fwd=0x81000012010
    hit_other_core_no_fwd=0x41000012010  no_snoop_needed=0x11000012010  snoop_miss=0x21000012010
    snoop_none=0x9000012010
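Each encoding string in this table is a flat list of comma-separated key=value pairs, with values in either
decimal (period=100003) or hex (offcore_rsp=0x...). A minimal Python sketch that parses one into typed
fields (the function name is illustrative, not from any library):

# Parse a perf event encoding string from the table into a dict of ints.
# int(value, 0) honors the 0x prefix, so both bases parse correctly.
def parse_encoding(encoding: str) -> dict:
    fields = {}
    for pair in encoding.split(","):
        key, _, value = pair.partition("=")
        fields[key] = int(value, 0)
    return fields

enc = parse_encoding("event=0xb7,period=100003,umask=1,offcore_rsp=0x3FBC00012010")
assert enc["event"] == 0xB7 and enc["offcore_rsp"] == 0x3FBC00012010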
offcore_response.all_reads.* (deprecated; use OCR.ALL_READS.*):
  l3_miss: any_snoop=0x3FBC0007F710  hitm_other_core=0x103C0007F710  hit_other_core_fwd=0x83C0007F710
    hit_other_core_no_fwd=0x43C0007F710  no_snoop_needed=0x13C0007F710  remote_hitm=0x103FC007F710
    remote_hit_forward=0x83FC007F710  snoop_miss=0x23C0007F710  snoop_none=0xBC0007F710
  l3_miss_local_dram: any_snoop=0x3F840007F710  hitm_other_core=0x10040007F710  hit_other_core_fwd=0x8040007F710
    hit_other_core_no_fwd=0x4040007F710  no_snoop_needed=0x1040007F710  snoop_miss=0x2040007F710
    snoop_miss_or_no_fwd=0x6040007F710  snoop_none=0x840007F710
  l3_miss_remote_dram: snoop_miss_or_no_fwd=0x63B8007F710
  l3_miss_remote_hop1_dram: any_snoop=0x3F900007F710  hitm_other_core=0x10100007F710  hit_other_core_fwd=0x8100007F710
    hit_other_core_no_fwd=0x4100007F710  no_snoop_needed=0x1100007F710  snoop_miss=0x2100007F710
    snoop_none=0x900007F710

offcore_response.all_rfo.* (deprecated; use OCR.ALL_RFO.*):
  l3_miss: any_snoop=0x3FBC00012210  hitm_other_core=0x103C00012210  hit_other_core_fwd=0x83C00012210
    hit_other_core_no_fwd=0x43C00012210  no_snoop_needed=0x13C00012210  remote_hitm=0x103FC0012210
    remote_hit_forward=0x83FC0012210  snoop_miss=0x23C00012210  snoop_none=0xBC00012210
  l3_miss_local_dram: any_snoop=0x3F8400012210  hitm_other_core=0x100400012210  hit_other_core_fwd=0x80400012210
    hit_other_core_no_fwd=0x40400012210  no_snoop_needed=0x10400012210  snoop_miss=0x20400012210
    snoop_miss_or_no_fwd=0x60400012210  snoop_none=0x8400012210
  l3_miss_remote_dram: snoop_miss_or_no_fwd=0x63B80012210
  l3_miss_remote_hop1_dram: any_snoop=0x3F9000012210  hitm_other_core=0x101000012210  hit_other_core_fwd=0x81000012210
    hit_other_core_no_fwd=0x41000012210  no_snoop_needed=0x11000012210  snoop_miss=0x21000012210
    snoop_none=0x9000012210
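Within each family the offcore_rsp values differ only in their upper (supplier/snoop) bits, while the low
bits select the request type: every all_rfo entry carries 0x12210 in its low bits and every all_reads entry
carries 0x7F710. A Python sketch verifying that split on a few rows (the 24-bit mask is chosen by inspecting
this table, not taken from any published bit layout):

# Check that each family's offcore_rsp values share the same request-type
# bits (low 24 bits here) and that the remaining high bits match across
# families for the same supplier/snoop suffix.
REQ_MASK = (1 << 24) - 1

all_rfo = {
    "l3_miss.any_snoop": 0x3FBC00012210,
    "l3_miss.snoop_none": 0xBC00012210,
    "l3_miss_local_dram.any_snoop": 0x3F8400012210,
}
all_reads = {
    "l3_miss.any_snoop": 0x3FBC0007F710,
    "l3_miss.snoop_none": 0xBC0007F710,
    "l3_miss_local_dram.any_snoop": 0x3F840007F710,
}

assert {v & REQ_MASK for v in all_rfo.values()} == {0x12210}
assert {v & REQ_MASK for v in all_reads.values()} == {0x7F710}
for suffix in all_rfo:
    assert all_rfo[suffix] & ~REQ_MASK == all_reads[suffix] & ~REQ_MASK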
offcore_response.demand_code_rd.* (deprecated; use OCR.DEMAND_CODE_RD.*):
  l3_miss: any_snoop=0x3FBC00000410  hitm_other_core=0x103C00000410  hit_other_core_fwd=0x83C00000410
    hit_other_core_no_fwd=0x43C00000410  no_snoop_needed=0x13C00000410  remote_hitm=0x103FC0000410
    remote_hit_forward=0x83FC0000410  snoop_miss=0x23C00000410  snoop_none=0xBC00000410
  l3_miss_local_dram: any_snoop=0x3F8400000410  hitm_other_core=0x100400000410  hit_other_core_fwd=0x80400000410
    hit_other_core_no_fwd=0x40400000410  no_snoop_needed=0x10400000410  snoop_miss=0x20400000410
    snoop_miss_or_no_fwd=0x60400000410  snoop_none=0x8400000410
  l3_miss_remote_dram: snoop_miss_or_no_fwd=0x63B80000410
  l3_miss_remote_hop1_dram: any_snoop=0x3F9000000410  hitm_other_core=0x101000000410  hit_other_core_fwd=0x81000000410
    hit_other_core_no_fwd=0x41000000410  no_snoop_needed=0x11000000410  snoop_miss=0x21000000410
    snoop_none=0x9000000410

offcore_response.demand_data_rd.* (deprecated; use OCR.DEMAND_DATA_RD.*):
  l3_miss: any_snoop=0x3FBC00000110  hitm_other_core=0x103C00000110  hit_other_core_fwd=0x83C00000110
    hit_other_core_no_fwd=0x43C00000110  no_snoop_needed=0x13C00000110  remote_hitm=0x103FC0000110
    remote_hit_forward=0x83FC0000110  snoop_miss=0x23C00000110  snoop_none=0xBC00000110
  l3_miss_local_dram: any_snoop=0x3F8400000110  hitm_other_core=0x100400000110  hit_other_core_fwd=0x80400000110
    hit_other_core_no_fwd=0x40400000110  no_snoop_needed=0x10400000110  snoop_miss=0x20400000110
    snoop_miss_or_no_fwd=0x60400000110  snoop_none=0x8400000110
  l3_miss_remote_dram: snoop_miss_or_no_fwd=0x63B80000110
  l3_miss_remote_hop1_dram: any_snoop=0x3F9000000110  hitm_other_core=0x101000000110  hit_other_core_fwd=0x81000000110
    hit_other_core_no_fwd=0x41000000110  no_snoop_needed=0x11000000110  snoop_miss=0x21000000110
    snoop_none=0x9000000110
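These field lists map onto a perf_event_attr in the usual x86 core-PMU layout: event and umask pack into
attr.config (bits 0-7 and 8-15; cmask, when present, bits 24-31, per the PMU's sysfs format files), while
offcore_rsp is supplied separately through attr.config1. A Python sketch computing the raw values for one
entry above, under that layout assumption, without actually opening a counter:

# Pack offcore_response.demand_data_rd.l3_miss.any_snoop into the raw
# config/config1 values a perf_event_attr would carry on x86. Bit
# positions follow /sys/bus/event_source/devices/cpu/format/{event,umask,cmask}.
def pack_config(event: int, umask: int = 0, cmask: int = 0) -> int:
    return (event & 0xFF) | ((umask & 0xFF) << 8) | ((cmask & 0xFF) << 24)

config = pack_config(event=0xB7, umask=1)    # -> 0x1B7
config1 = 0x3FBC00000110                     # offcore_rsp goes in config1

print(f"attr.config  = {config:#x}")
print(f"attr.config1 = {config1:#x}")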
offcore_response.demand_rfo.* (deprecated; use OCR.DEMAND_RFO.*):
  l3_miss: any_snoop=0x3FBC00000210  hitm_other_core=0x103C00000210  hit_other_core_fwd=0x83C00000210
    hit_other_core_no_fwd=0x43C00000210  no_snoop_needed=0x13C00000210  remote_hitm=0x103FC0000210
    remote_hit_forward=0x83FC0000210  snoop_miss=0x23C00000210  snoop_none=0xBC00000210
  l3_miss_local_dram: any_snoop=0x3F8400000210  hitm_other_core=0x100400000210  hit_other_core_fwd=0x80400000210
    hit_other_core_no_fwd=0x40400000210  no_snoop_needed=0x10400000210  snoop_miss=0x20400000210
    snoop_miss_or_no_fwd=0x60400000210  snoop_none=0x8400000210
  l3_miss_remote_dram: snoop_miss_or_no_fwd=0x63B80000210
  l3_miss_remote_hop1_dram: any_snoop=0x3F9000000210  hitm_other_core=0x101000000210  hit_other_core_fwd=0x81000000210
    hit_other_core_no_fwd=0x41000000210  no_snoop_needed=0x11000000210  snoop_miss=0x21000000210
    snoop_none=0x9000000210

offcore_response.other.* (deprecated; use OCR.OTHER.*):
  l3_miss: any_snoop=0x3FBC00800010  hitm_other_core=0x103C00800010  hit_other_core_fwd=0x83C00800010
    hit_other_core_no_fwd=0x43C00800010  no_snoop_needed=0x13C00800010  remote_hitm=0x103FC0800010
    remote_hit_forward=0x83FC0800010  snoop_miss=0x23C00800010  snoop_none=0xBC00800010
  l3_miss_local_dram: any_snoop=0x3F8400800010  hitm_other_core=0x100400800010  hit_other_core_fwd=0x80400800010
    hit_other_core_no_fwd=0x40400800010  no_snoop_needed=0x10400800010  snoop_miss=0x20400800010
    snoop_miss_or_no_fwd=0x60400800010  snoop_none=0x8400800010
  l3_miss_remote_dram: snoop_miss_or_no_fwd=0x63B80800010
  l3_miss_remote_hop1_dram: any_snoop=0x3F9000800010  hitm_other_core=0x101000800010  hit_other_core_fwd=0x81000800010
    hit_other_core_no_fwd=0x41000800010  no_snoop_needed=0x11000800010  snoop_miss=0x21000800010
    snoop_none=0x9000800010
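The lowercase names in this table are the identifiers the perf event parser resolves. A Python sketch that
counts one of them through the perf CLI, assuming a perf build whose event tables include the running CPU
(if the symbolic name does not resolve, the raw form cpu/event=0xb7,umask=0x1,offcore_rsp=0x3FBC00000210/
can be substituted):

# Count offcore_response.demand_rfo.l3_miss.any_snoop over a one-second
# workload using the perf CLI. perf stat prints its counts on stderr.
import subprocess

event = "offcore_response.demand_rfo.l3_miss.any_snoop"
result = subprocess.run(
    ["perf", "stat", "-e", event, "--", "sleep", "1"],
    capture_output=True, text=True,
)
print(result.stderr)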
offcore_response.pf_l1d_and_sw.* (deprecated; use OCR.PF_L1D_AND_SW.*):
  l3_miss: any_snoop=0x3FBC00040010  hitm_other_core=0x103C00040010  hit_other_core_fwd=0x83C00040010
    hit_other_core_no_fwd=0x43C00040010  no_snoop_needed=0x13C00040010  remote_hitm=0x103FC0040010
    remote_hit_forward=0x83FC0040010  snoop_miss=0x23C00040010  snoop_none=0xBC00040010
  l3_miss_local_dram: any_snoop=0x3F8400040010  hitm_other_core=0x100400040010  hit_other_core_fwd=0x80400040010
    hit_other_core_no_fwd=0x40400040010  no_snoop_needed=0x10400040010  snoop_miss=0x20400040010
    snoop_miss_or_no_fwd=0x60400040010  snoop_none=0x8400040010
  l3_miss_remote_dram: snoop_miss_or_no_fwd=0x63B80040010
  l3_miss_remote_hop1_dram: any_snoop=0x3F9000040010  hitm_other_core=0x101000040010  hit_other_core_fwd=0x81000040010
    hit_other_core_no_fwd=0x41000040010  no_snoop_needed=0x11000040010  snoop_miss=0x21000040010
    snoop_none=0x9000040010

offcore_response.pf_l2_data_rd.* (deprecated; use OCR.PF_L2_DATA_RD.*):
  l3_miss: any_snoop=0x3FBC00001010  hitm_other_core=0x103C00001010  hit_other_core_fwd=0x83C00001010
    hit_other_core_no_fwd=0x43C00001010  no_snoop_needed=0x13C00001010  remote_hitm=0x103FC0001010
    remote_hit_forward=0x83FC0001010  snoop_miss=0x23C00001010  snoop_none=0xBC00001010
  l3_miss_local_dram: any_snoop=0x3F8400001010  hitm_other_core=0x100400001010  hit_other_core_fwd=0x80400001010
    hit_other_core_no_fwd=0x40400001010  no_snoop_needed=0x10400001010  snoop_miss=0x20400001010
    snoop_miss_or_no_fwd=0x60400001010  snoop_none=0x8400001010
  l3_miss_remote_dram: snoop_miss_or_no_fwd=0x63B80001010
  l3_miss_remote_hop1_dram: any_snoop=0x3F9000001010  hitm_other_core=0x101000001010  hit_other_core_fwd=0x81000001010
    hit_other_core_no_fwd=0x41000001010  no_snoop_needed=0x11000001010  snoop_miss=0x21000001010
    snoop_none=0x9000001010
offcore_response.pf_l2_rfo.* (deprecated; use OCR.PF_L2_RFO.*):
  l3_miss: any_snoop=0x3FBC00002010  hitm_other_core=0x103C00002010  hit_other_core_fwd=0x83C00002010
    hit_other_core_no_fwd=0x43C00002010  no_snoop_needed=0x13C00002010  remote_hitm=0x103FC0002010
    remote_hit_forward=0x83FC0002010  snoop_miss=0x23C00002010  snoop_none=0xBC00002010
  l3_miss_local_dram: any_snoop=0x3F8400002010  hitm_other_core=0x100400002010  hit_other_core_fwd=0x80400002010
    hit_other_core_no_fwd=0x40400002010  no_snoop_needed=0x10400002010  snoop_miss=0x20400002010
    snoop_miss_or_no_fwd=0x60400002010  snoop_none=0x8400002010
  l3_miss_remote_dram: snoop_miss_or_no_fwd=0x63B80002010
  l3_miss_remote_hop1_dram: any_snoop=0x3F9000002010  hitm_other_core=0x101000002010  hit_other_core_fwd=0x81000002010
    hit_other_core_no_fwd=0x41000002010  no_snoop_needed=0x11000002010  snoop_miss=0x21000002010
    snoop_none=0x9000002010

offcore_response.pf_l3_data_rd.* (deprecated; use OCR.PF_L3_DATA_RD.*):
  l3_miss: any_snoop=0x3FBC00008010  hitm_other_core=0x103C00008010  hit_other_core_fwd=0x83C00008010
offcore_response.pf_l3_data_rd.l3_miss.hit_other_core_no_fwd, memory: This event is deprecated.
Refer to new event OCR.PF_L3_DATA_RD.L3_MISS.HIT_OTHER_CORE_NO_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x43C00008010offcore_response.pf_l3_data_rd.l3_miss.no_snoop_neededmemoryThis event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_MISS.NO_SNOOP_NEEDEDevent=0xb7,period=100003,umask=1,offcore_rsp=0x13C00008010offcore_response.pf_l3_data_rd.l3_miss.remote_hitmmemoryThis event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_MISS.REMOTE_HITMevent=0xb7,period=100003,umask=1,offcore_rsp=0x103FC0008010offcore_response.pf_l3_data_rd.l3_miss.remote_hit_forwardmemoryThis event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_MISS.REMOTE_HIT_FORWARDevent=0xb7,period=100003,umask=1,offcore_rsp=0x83FC0008010offcore_response.pf_l3_data_rd.l3_miss.snoop_missmemoryThis event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_MISS.SNOOP_MISSevent=0xb7,period=100003,umask=1,offcore_rsp=0x23C00008010offcore_response.pf_l3_data_rd.l3_miss.snoop_nonememoryThis event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_MISS.SNOOP_NONEevent=0xb7,period=100003,umask=1,offcore_rsp=0xBC00008010offcore_response.pf_l3_data_rd.l3_miss_local_dram.any_snoopmemoryThis event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_MISS_LOCAL_DRAM.ANY_SNOOPevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F8400008010offcore_response.pf_l3_data_rd.l3_miss_local_dram.hitm_other_corememoryThis event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_MISS_LOCAL_DRAM.HITM_OTHER_COREevent=0xb7,period=100003,umask=1,offcore_rsp=0x100400008010offcore_response.pf_l3_data_rd.l3_miss_local_dram.hit_other_core_fwdmemoryThis event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x80400008010offcore_response.pf_l3_data_rd.l3_miss_local_dram.hit_other_core_no_fwdmemoryThis event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_NO_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x40400008010offcore_response.pf_l3_data_rd.l3_miss_local_dram.no_snoop_neededmemoryThis event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_MISS_LOCAL_DRAM.NO_SNOOP_NEEDEDevent=0xb7,period=100003,umask=1,offcore_rsp=0x10400008010offcore_response.pf_l3_data_rd.l3_miss_local_dram.snoop_missmemoryThis event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_MISSevent=0xb7,period=100003,umask=1,offcore_rsp=0x20400008010offcore_response.pf_l3_data_rd.l3_miss_local_dram.snoop_miss_or_no_fwdmemoryThis event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x60400008010offcore_response.pf_l3_data_rd.l3_miss_local_dram.snoop_nonememoryThis event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_NONEevent=0xb7,period=100003,umask=1,offcore_rsp=0x8400008010offcore_response.pf_l3_data_rd.l3_miss_remote_dram.snoop_miss_or_no_fwdmemoryThis event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x63B80008010offcore_response.pf_l3_data_rd.l3_miss_remote_hop1_dram.any_snoopmemoryThis event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.ANY_SNOOPevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F9000008010offcore_response.pf_l3_data_rd.l3_miss_remote_hop1_dram.hitm_other_corememoryThis event is deprecated. 
Refer to new event OCR.PF_L3_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.HITM_OTHER_COREevent=0xb7,period=100003,umask=1,offcore_rsp=0x101000008010offcore_response.pf_l3_data_rd.l3_miss_remote_hop1_dram.hit_other_core_fwdmemoryThis event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x81000008010offcore_response.pf_l3_data_rd.l3_miss_remote_hop1_dram.hit_other_core_no_fwdmemoryThis event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_NO_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x41000008010offcore_response.pf_l3_data_rd.l3_miss_remote_hop1_dram.no_snoop_neededmemoryThis event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.NO_SNOOP_NEEDEDevent=0xb7,period=100003,umask=1,offcore_rsp=0x11000008010offcore_response.pf_l3_data_rd.l3_miss_remote_hop1_dram.snoop_missmemoryThis event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.SNOOP_MISSevent=0xb7,period=100003,umask=1,offcore_rsp=0x21000008010offcore_response.pf_l3_data_rd.l3_miss_remote_hop1_dram.snoop_nonememoryThis event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.SNOOP_NONEevent=0xb7,period=100003,umask=1,offcore_rsp=0x9000008010offcore_response.pf_l3_rfo.l3_miss.any_snoopmemoryThis event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_MISS.ANY_SNOOPevent=0xb7,period=100003,umask=1,offcore_rsp=0x3FBC00010010offcore_response.pf_l3_rfo.l3_miss.hitm_other_corememoryThis event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_MISS.HITM_OTHER_COREevent=0xb7,period=100003,umask=1,offcore_rsp=0x103C00010010offcore_response.pf_l3_rfo.l3_miss.hit_other_core_fwdmemoryThis event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_MISS.HIT_OTHER_CORE_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x83C00010010offcore_response.pf_l3_rfo.l3_miss.hit_other_core_no_fwdmemoryThis event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_MISS.HIT_OTHER_CORE_NO_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x43C00010010offcore_response.pf_l3_rfo.l3_miss.no_snoop_neededmemoryThis event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_MISS.NO_SNOOP_NEEDEDevent=0xb7,period=100003,umask=1,offcore_rsp=0x13C00010010offcore_response.pf_l3_rfo.l3_miss.remote_hitmmemoryThis event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_MISS.REMOTE_HITMevent=0xb7,period=100003,umask=1,offcore_rsp=0x103FC0010010offcore_response.pf_l3_rfo.l3_miss.remote_hit_forwardmemoryThis event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_MISS.REMOTE_HIT_FORWARDevent=0xb7,period=100003,umask=1,offcore_rsp=0x83FC0010010offcore_response.pf_l3_rfo.l3_miss.snoop_missmemoryThis event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_MISS.SNOOP_MISSevent=0xb7,period=100003,umask=1,offcore_rsp=0x23C00010010offcore_response.pf_l3_rfo.l3_miss.snoop_nonememoryThis event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_MISS.SNOOP_NONEevent=0xb7,period=100003,umask=1,offcore_rsp=0xBC00010010offcore_response.pf_l3_rfo.l3_miss_local_dram.any_snoopmemoryThis event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_MISS_LOCAL_DRAM.ANY_SNOOPevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F8400010010offcore_response.pf_l3_rfo.l3_miss_local_dram.hitm_other_corememoryThis event is deprecated. 
Refer to new event OCR.PF_L3_RFO.L3_MISS_LOCAL_DRAM.HITM_OTHER_COREevent=0xb7,period=100003,umask=1,offcore_rsp=0x100400010010offcore_response.pf_l3_rfo.l3_miss_local_dram.hit_other_core_fwdmemoryThis event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x80400010010offcore_response.pf_l3_rfo.l3_miss_local_dram.hit_other_core_no_fwdmemoryThis event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_NO_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x40400010010offcore_response.pf_l3_rfo.l3_miss_local_dram.no_snoop_neededmemoryThis event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_MISS_LOCAL_DRAM.NO_SNOOP_NEEDEDevent=0xb7,period=100003,umask=1,offcore_rsp=0x10400010010offcore_response.pf_l3_rfo.l3_miss_local_dram.snoop_missmemoryThis event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_MISS_LOCAL_DRAM.SNOOP_MISSevent=0xb7,period=100003,umask=1,offcore_rsp=0x20400010010offcore_response.pf_l3_rfo.l3_miss_local_dram.snoop_miss_or_no_fwdmemoryThis event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x60400010010offcore_response.pf_l3_rfo.l3_miss_local_dram.snoop_nonememoryThis event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_MISS_LOCAL_DRAM.SNOOP_NONEevent=0xb7,period=100003,umask=1,offcore_rsp=0x8400010010offcore_response.pf_l3_rfo.l3_miss_remote_dram.snoop_miss_or_no_fwdmemoryThis event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x63B80010010offcore_response.pf_l3_rfo.l3_miss_remote_hop1_dram.any_snoopmemoryThis event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_MISS_REMOTE_HOP1_DRAM.ANY_SNOOPevent=0xb7,period=100003,umask=1,offcore_rsp=0x3F9000010010offcore_response.pf_l3_rfo.l3_miss_remote_hop1_dram.hitm_other_corememoryThis event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_MISS_REMOTE_HOP1_DRAM.HITM_OTHER_COREevent=0xb7,period=100003,umask=1,offcore_rsp=0x101000010010offcore_response.pf_l3_rfo.l3_miss_remote_hop1_dram.hit_other_core_fwdmemoryThis event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x81000010010offcore_response.pf_l3_rfo.l3_miss_remote_hop1_dram.hit_other_core_no_fwdmemoryThis event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_NO_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x41000010010offcore_response.pf_l3_rfo.l3_miss_remote_hop1_dram.no_snoop_neededmemoryThis event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_MISS_REMOTE_HOP1_DRAM.NO_SNOOP_NEEDEDevent=0xb7,period=100003,umask=1,offcore_rsp=0x11000010010offcore_response.pf_l3_rfo.l3_miss_remote_hop1_dram.snoop_missmemoryThis event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_MISS_REMOTE_HOP1_DRAM.SNOOP_MISSevent=0xb7,period=100003,umask=1,offcore_rsp=0x21000010010offcore_response.pf_l3_rfo.l3_miss_remote_hop1_dram.snoop_nonememoryThis event is deprecated. 
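The composition rule makes each deprecated encoding mechanical to rebuild. Below is a minimal Python sketch (the helper names are invented for illustration and are not part of this module; the bit values are the ones tabulated above):

    # Sketch: rebuild the raw perf event spec for the deprecated
    # offcore_response.* events above from their request and suffix bits.
    REQUEST_BITS = {
        "pf_l1d_and_sw": 0x400,
        "pf_l2_data_rd": 0x10,
        "pf_l2_rfo": 0x20,
        "pf_l3_data_rd": 0x80,
        "pf_l3_rfo": 0x100,
    }

    SUFFIX_BASES = {
        "l3_miss.any_snoop": 0x3FBC000000,
        "l3_miss_local_dram.snoop_miss_or_no_fwd": 0x0604000000,
        "l3_miss_remote_hop1_dram.snoop_none": 0x0090000000,
        # ... remaining suffixes follow the table above
    }

    def offcore_event(request: str, suffix: str) -> str:
        """Raw perf spec for offcore_response.<request>.<suffix>."""
        rsp = SUFFIX_BASES[suffix] | REQUEST_BITS[request]
        return f"cpu/event=0xb7,umask=0x1,offcore_rsp={rsp:#x}/"

    # offcore_event("pf_l2_rfo", "l3_miss.any_snoop")
    #   -> "cpu/event=0xb7,umask=0x1,offcore_rsp=0x3fbc000020/"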
Restricted Transactional Memory (RTM) and TSX events (topic: memory):

rtm_retired.aborted  event=0xc9,period=2000003,umask=0x4 -- Number of times an RTM execution aborted for any reason; multiple categories may count as one (must be precise).
rtm_retired.aborted_events  event=0xc9,period=2000003,umask=0x80 -- Number of times an RTM execution aborted due to none of the previous four categories (e.g. an interrupt).
rtm_retired.aborted_mem  event=0xc9,period=2000003,umask=0x8 -- Number of times an RTM execution aborted due to various memory events (e.g. read/write capacity and conflicts).
rtm_retired.aborted_memtype  event=0xc9,period=2000003,umask=0x40 -- Number of times an RTM execution aborted due to an incompatible memory type.
rtm_retired.aborted_timer  event=0xc9,period=2000003,umask=0x10 -- Number of times an RTM execution aborted due to uncommon conditions.
rtm_retired.aborted_unfriendly  event=0xc9,period=2000003,umask=0x20 -- Number of times an RTM execution aborted due to HLE-unfriendly instructions.
rtm_retired.commit  event=0xc9,period=2000003,umask=0x2 -- Number of times an RTM execution successfully committed.
rtm_retired.start  event=0xc9,period=2000003,umask=0x1 -- Number of times an RTM execution started (entries to an RTM region; does not count nested transactions).
tx_exec.misc2  event=0x5d,period=2000003,umask=0x2 -- Counts executions, inside a transactional region, of a class of instructions (e.g. vzeroupper) that may cause a transactional abort (unfriendly TSX abort triggered by a vzeroupper instruction).
tx_exec.misc5  event=0x5d,period=2000003,umask=0x10 -- Counts the number of times an HLE XACQUIRE instruction was executed inside an RTM transactional region.
tx_mem.abort_capacity  event=0x54,period=2000003,umask=0x2 -- Number of times a transactional abort was signaled due to a data capacity limitation for transactional reads or writes.
tx_mem.abort_conflict  event=0x54,period=2000003,umask=0x1 -- Number of times a transactional abort was signaled due to a data conflict on a transactionally accessed address (a TSX line had a cache conflict).
tx_mem.abort_hle_elision_buffer_mismatch  event=0x54,period=2000003,umask=0x10 -- Number of times an HLE transactional execution aborted because an XRELEASE lock did not satisfy the address and value requirements of the elision buffer (release/commit with data and address mismatch).
tx_mem.abort_hle_elision_buffer_not_empty  event=0x54,period=2000003,umask=0x8 -- Number of times an HLE transactional execution aborted due to NoAllocatedElisionBuffer being non-zero (commit with the lock buffer not empty).
tx_mem.abort_hle_elision_buffer_unsupported_alignment  event=0x54,period=2000003,umask=0x20 -- Number of times an HLE transactional execution aborted due to an unsupported read alignment from the elision buffer.
tx_mem.abort_hle_store_to_elided_lock  event=0x54,period=2000003,umask=0x4 -- Number of times an HLE transactional region aborted because a non-XRELEASE-prefixed instruction wrote to an elided lock in the elision buffer.
tx_mem.hle_elision_buffer_full  event=0x54,period=2000003,umask=0x40 -- Number of times an HLE lock could not be elided because ElisionBufferAvailable was zero (could not allocate the lock buffer).
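As a usage illustration, the three rtm_retired.* counters above give a quick commit/abort picture for a transactional workload. A minimal sketch, assuming a perf(1) build that resolves these symbolic event names on this CPU (otherwise the raw specs event=0xc9 with umask 0x1/0x2/0x4 can be substituted); the workload path is hypothetical:

    # Sketch: RTM start/commit/abort counts for a command via perf(1).
    import subprocess

    def rtm_stats(cmd: list[str]) -> None:
        events = "rtm_retired.start,rtm_retired.commit,rtm_retired.aborted"
        subprocess.run(["perf", "stat", "-e", events, "--", *cmd], check=True)

    # rtm_stats(["./tsx_workload"])  # hypothetical workload binary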
Power-license and miscellaneous core events (topic: other):

core_power.lvl0_turbo_license  event=0x28,period=200003,umask=0x7 -- Core cycles running with power delivery for baseline license level 0, where turbo may be clipped to the non-AVX turbo schedule; includes non-AVX code, SSE, AVX 128-bit, and low-current AVX 256-bit code.
core_power.lvl1_turbo_license  event=0x28,period=200003,umask=0x18 -- Core cycles running with power delivery for license level 1, where turbo may be clipped to the AVX2 turbo schedule; includes high-current AVX 256-bit and low-current AVX 512-bit instructions.
core_power.lvl2_turbo_license  event=0x28,period=200003,umask=0x20 -- Core cycles running with power delivery for license level 2 (introduced in the Skylake Server microarchitecture), where turbo may be clipped to the AVX512 turbo schedule; includes high-current AVX 512-bit instructions.
core_power.throttle  event=0x28,period=200003,umask=0x40 -- Core cycles the out-of-order engine was throttled due to a pending power level request.
core_snoop_response.rsp_ifwdfe   event=0xef,period=2000003,umask=0x20
core_snoop_response.rsp_ifwdm    event=0xef,period=2000003,umask=0x10
core_snoop_response.rsp_ihitfse  event=0xef,period=2000003,umask=0x2
core_snoop_response.rsp_ihiti    event=0xef,period=2000003,umask=0x1
core_snoop_response.rsp_sfwdfe   event=0xef,period=2000003,umask=0x40
core_snoop_response.rsp_sfwdm    event=0xef,period=2000003,umask=0x8
core_snoop_response.rsp_shitfse  event=0xef,period=2000003,umask=0x4
hw_interrupts.received  event=0xcb,period=203,umask=0x1 -- Number of hardware interrupts received by the processor.
idi_misc.wb_downgrade  event=0xfe,period=100003,umask=0x4 -- Counts cache lines that are dropped and not written back to L3 because they are deemed less likely to be reused shortly.
idi_misc.wb_upgrade  event=0xfe,period=100003,umask=0x2 -- Counts cache lines that are allocated and written back to L3 with the intention that they are more likely to be reused shortly.
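The three core_power.lvl*_turbo_license events partition unhalted core cycles by power-license level, which makes AVX-512-induced turbo clipping visible: a large lvl2 share means the core spent much of its time at the reduced AVX512 turbo schedule. A minimal sketch using the raw specs that mirror the entries above:

    # Sketch: cycles at each power license level, to spot turbo clipping.
    import subprocess

    LICENSE_EVENTS = {
        "lvl0": "cpu/event=0x28,umask=0x07/",
        "lvl1": "cpu/event=0x28,umask=0x18/",
        "lvl2": "cpu/event=0x28,umask=0x20/",
    }

    def license_breakdown(cmd: list[str]) -> None:
        events = ",".join(LICENSE_EVENTS.values())
        subprocess.run(["perf", "stat", "-e", events, "--", *cmd], check=True)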
OCR.* offcore-response events (topic: other). All use event=0xb7,period=100003,umask=1, with offcore_rsp = SUFFIX_BASE | REQUEST_BITS. Each request type carries the same eleven response suffixes:

Suffix bases:
  any_response                         0x10000       -- request had any response type
  pmm_hit_local_pmm.any_snoop          0x3F80400000
  pmm_hit_local_pmm.snoop_none         0x0080400000
  pmm_hit_local_pmm.snoop_not_needed   0x0100400000
  supplier_none.any_snoop              0x3F80020000
  supplier_none.hitm_other_core        0x1000020000
  supplier_none.hit_other_core_fwd     0x0800020000
  supplier_none.hit_other_core_no_fwd  0x0400020000
  supplier_none.no_snoop_needed        0x0100020000
  supplier_none.snoop_miss             0x0200020000
  supplier_none.snoop_none             0x0080020000

Request types in this block:
  ocr.all_data_rd     0x491
  ocr.all_pf_data_rd  0x490
  ocr.all_pf_rfo      0x120
  ocr.all_reads       0x7F7
  ocr.all_rfo         0x122
  ocr.demand_code_rd  0x4   -- counts all demand code reads
  ocr.demand_data_rd  0x1   -- counts demand data reads
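The pmm_hit_local_pmm suffixes identify requests served from local persistent memory rather than DRAM, so combining one with the demand-read request bit gives a quick check of whether a workload is actually reading from PMM. A minimal sketch (the value is OCR.DEMAND_DATA_RD.PMM_HIT_LOCAL_PMM.ANY_SNOOP from the tables above; assumes PMM-capable hardware):

    # Sketch: demand data reads served from local persistent memory.
    import subprocess

    PMM_DEMAND_READS = "cpu/event=0xb7,umask=0x1,offcore_rsp=0x3F80400001/"

    def pmm_read_count(cmd: list[str]) -> None:
        # perf(1) prints the count to stderr.
        subprocess.run(["perf", "stat", "-e", PMM_DEMAND_READS, "--", *cmd],
                       check=True)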
Request types continued (same suffix bases and encoding rule as above):
  ocr.demand_rfo     0x2    -- counts all demand data writes (RFOs)
  ocr.other          0x8000 -- counts any other requests
  ocr.pf_l1d_and_sw  0x400  -- counts L1 data cache hardware prefetch requests and software prefetch requests
  ocr.pf_l2_data_rd  0x10   -- counts prefetch (that bring data to L2) data reads
  ocr.pf_l2_rfo      0x20   -- counts all prefetch (that bring data to L2) RFOs
  ocr.pf_l3_data_rd  0x80   -- counts all prefetch (that bring data to LLC only) data reads
  ocr.pf_l3_rfo      0x100  -- counts all prefetch (that bring data to LLC only) RFOs
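Comparing the demand and prefetch request types with the any_response suffix gives a rough view of how much of a workload's read traffic the L2 hardware prefetcher issues. A minimal sketch (both values follow from the tables above: 0x10000 | 0x1 and 0x10000 | 0x10):

    # Sketch: demand vs. L2-prefetch data reads, any response type.
    import subprocess

    EVENTS = [
        "cpu/event=0xb7,umask=0x1,offcore_rsp=0x10001/",  # demand data reads
        "cpu/event=0xb7,umask=0x1,offcore_rsp=0x10010/",  # L2 HW prefetch reads
    ]

    def read_mix(cmd: list[str]) -> None:
        subprocess.run(["perf", "stat", "-e", ",".join(EVENTS), "--", *cmd],
                       check=True)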
Pipeline events:

arith.divider_active  event=0x14,cmask=1,period=2000003,umask=0x1 -- Cycles when the divide unit is busy executing divide or square-root operations; accounts for integer and floating-point operations.
br_inst_retired.all_branches  event=0xc4,period=400009 -- All (macro) branch instructions retired. Spec update: SKL091.
br_inst_retired.all_branches_pebs  event=0xc4,period=400009,umask=0x4 -- Precise version of BR_INST_RETIRED.ALL_BRANCHES counting all (macro) branch instructions retired. Spec update: SKL091 (must be precise).
br_inst_retired.cond  event=0xc4,period=400009,umask=0x1 -- Conditional branch instructions retired; alias of BR_INST_RETIRED.CONDITIONAL. Spec update: SKL091.
br_inst_retired.conditional  event=0xc4,period=400009,umask=0x1 -- Conditional branch instructions retired; alias of BR_INST_RETIRED.COND. Spec update: SKL091 (precise event).
br_inst_retired.cond_ntaken  event=0xc4,period=400009,umask=0x10 -- Not-taken branch instructions retired. Spec update: SKL091.
br_inst_retired.far_branch  event=0xc4,period=100007,umask=0x40 -- Far branch instructions retired. Spec update: SKL091 (precise event).
br_inst_retired.near_call  event=0xc4,period=100007,umask=0x2 -- Direct and indirect near call instructions retired. Spec update: SKL091 (precise event).
br_inst_retired.near_return  event=0xc4,period=100007,umask=0x8 -- Return instructions retired. Spec update: SKL091 (precise event).
br_inst_retired.near_taken  event=0xc4,period=400009,umask=0x20 -- Taken branch instructions retired. Spec update: SKL091 (precise event).
br_inst_retired.not_taken  event=0xc4,period=400009,umask=0x10 -- Not-taken branch instructions retired. Spec update: SKL091.
br_misp_retired.all_branches  event=0xc5,period=400009 -- All mispredicted macro branch instructions retired. A branch misprediction occurs when the processor incorrectly predicts the destination of a branch; when the misprediction is discovered at execution, all instructions executed on the wrong (speculative) path must be discarded and the processor must restart fetching from the correct path.
br_misp_retired.all_branches_pebs  event=0xc5,period=400009,umask=0x4 -- Precise version of BR_MISP_RETIRED.ALL_BRANCHES counting all mispredicted macro branch instructions retired (must be precise).
br_misp_retired.near_call  event=0xc5,period=400009,umask=0x2 -- Mispredicted direct and indirect near call instructions retired, both taken and not taken, including register and memory indirect (precise event).
br_misp_retired.near_taken  event=0xc5,period=400009,umask=0x20 -- Near branch instructions retired that were mispredicted and taken (precise event).
cpu_clk_thread_unhalted.one_thread_active  event=0x3c,period=25003,umask=0x2 -- Core crystal clock cycles when this thread is unhalted and the other thread is halted.
cpu_clk_thread_unhalted.ref_xclk  event=0x3c,period=25003,umask=0x1 -- Core crystal clock cycles when the thread is unhalted.
cpu_clk_thread_unhalted.ref_xclk_any  event=0x3c,any=1,period=25003,umask=0x1 -- Core crystal clock cycles when at least one thread on the physical core is unhalted.
cpu_clk_unhalted.one_thread_active  event=0x3c,period=25003,umask=0x2 -- Core crystal clock cycles when this thread is unhalted and the other thread is halted.
cpu_clk_unhalted.ref_tsc  event=0,period=2000003,umask=0x3 -- Reference cycles when the core is not in a halt state. The core enters the halt state when it is running the HLT or MWAIT instruction. This event is not affected by core frequency changes (for example, P-states or TM2 transitions) but increments at the same frequency as the time stamp counter, so it can approximate elapsed time while the core was not halted. It has a constant ratio with CPU_CLK_UNHALTED.REF_XCLK and is counted on a dedicated fixed counter, leaving the four (eight when Hyper-Threading is disabled) programmable counters available for other events. Note: on all current platforms this event stops counting during the duty-off periods of 'throttling (TM)' states, when the processor is 'halted'. Because the counter is updated at a lower clock rate than the core clock, its overflow status bit may appear 'sticky': after the counter overflows and software clears the overflow status bit and resets the counter to a value less than MAX, the reset value is not clocked in immediately, so the overflow status bit can flip high (1) and generate another PMI (if enabled) before the reset value is finally clocked into the counter. Software may therefore take the interrupt and read an overflow status of 1 for bit 34 while the counter value is still less than MAX; software should ignore this case.
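Taken together, br_inst_retired.all_branches and br_misp_retired.all_branches yield the branch misprediction rate. A minimal sketch; it assumes perf stat's CSV mode (-x,) writes one line per event to stderr with the count in the first field, and that both events counted successfully:

    # Sketch: branch misprediction rate for a command.
    import subprocess

    def mispredict_rate(cmd: list[str]) -> float:
        res = subprocess.run(
            ["perf", "stat", "-x", ",", "-e",
             "br_inst_retired.all_branches,br_misp_retired.all_branches",
             "--", *cmd],
            capture_output=True, text=True, check=True)
        lines = [l for l in res.stderr.splitlines()
                 if l and not l.startswith("#")]
        branches = int(lines[0].split(",")[0])
        mispredicted = int(lines[1].split(",")[0])
        return mispredicted / branches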
Software should ignore this casecpu_clk_unhalted.ref_xclkpipelineCore crystal clock cycles when the thread is unhaltedevent=0x3c,period=25003,umask=100cpu_clk_unhalted.ref_xclk_anypipelineCore crystal clock cycles when at least one thread on the physical core is unhaltedevent=0x3c,any=1,period=25003,umask=100cpu_clk_unhalted.ring0_transpipelineCounts when there is a transition from ring 1, 2 or 3 to ring 0event=0x3c,cmask=1,edge=1,period=10000700Counts when the Current Privilege Level (CPL) transitions from ring 1, 2 or 3 to ring 0 (Kernel)cpu_clk_unhalted.threadpipelineCore cycles when the thread is not in halt stateevent=0x3c,period=200000300Counts the number of core cycles while the thread is not in a halt state. The thread enters the halt state when it is running the HLT instruction. This event is a component in many key event ratios. The core frequency may change from time to time due to transitions associated with Enhanced Intel SpeedStep Technology or TM2. For this reason this event may have a changing ratio with regards to time. When the core frequency is constant, this event can approximate elapsed time while the core was not in the halt state. It is counted on a dedicated fixed counter, leaving the four (eight when Hyperthreading is disabled) programmable counters available for other eventscycle_activity.cycles_mem_anypipelineCycles while memory subsystem has an outstanding loadevent=0xa3,cmask=16,period=2000003,umask=0x1000cycle_activity.stalls_mem_anypipelineExecution stalls while memory subsystem has an outstanding loadevent=0xa3,cmask=20,period=2000003,umask=0x1400exe_activity.bound_on_storespipelineCycles where the Store Buffer was full and no outstanding loadevent=0xa6,period=2000003,umask=0x4000exe_activity.exe_bound_0_portspipelineCycles where no uops were executed, the Reservation Station was not empty, the Store Buffer was full and there was no outstanding loadevent=0xa6,period=2000003,umask=100Counts cycles during which no uops were executed on all ports and Reservation Station (RS) was not emptyild_stall.lcppipelineStalls caused by changing prefix length of the instruction. [This event is alias to DECODE.LCP]event=0x87,period=2000003,umask=100Counts cycles in which Instruction Length decoder (ILD) stalls occurred due to dynamically changing prefix length of the decoded instruction (by operand size prefix instruction 0x66, address size prefix instruction 0x67 or REX.W for Intel64). Count is proportional to the number of prefixes in a 16B-line. This may result in a three-cycle penalty for each LCP (Length changing prefix) in a 16-byte chunk. [This event is alias to DECODE.LCP]inst_decoded.decoderspipelineInstruction decoders utilized in a cycleevent=0x55,period=2000003,umask=100Number of decoders utilized in a cycle when the MITE (legacy decode pipeline) fetches instructionsinst_retired.anypipelineInstructions retired from executionevent=0xc0,period=200000300Counts the number of instructions retired from execution. For instructions that consist of multiple micro-ops, counts the retirement of the last micro-op of the instruction. Counting continues during hardware interrupts, traps, and inside interrupt handlers. Notes: INST_RETIRED.ANY is counted by a designated fixed counter, leaving the four (eight when Hyperthreading is disabled) programmable counters available for other events. INST_RETIRED.ANY_P is counted by a programmable counter and it is an architectural performance event. 
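The event=, umask=, cmask=, inv= and edge= fields that recur throughout this list are the raw programming of an x86 PERFEVTSEL counter, and the same packing is what perf_event_open(2) accepts as the config of a PERF_TYPE_RAW event. Below is a minimal sketch of counting one of the encodings above from C; the x86_raw helper and the standard bit layout it assumes (event 7:0, umask 15:8, edge 18, any 21, inv 23, cmask 31:24) are illustrative conventions, not anything taken from this file.

#include <linux/perf_event.h>
#include <sys/syscall.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <stdint.h>
#include <string.h>
#include <stdio.h>

/* Illustrative helper: pack the event=/umask=/cmask=/edge=/inv= fields used
   throughout this list into the x86 PERFEVTSEL bit layout accepted by
   PERF_TYPE_RAW (event 7:0, umask 15:8, edge 18, inv 23, cmask 31:24). */
static uint64_t x86_raw(uint64_t event, uint64_t umask,
                        uint64_t cmask, int inv, int edge)
{
    return (event & 0xff) | ((umask & 0xff) << 8) |
           ((uint64_t)(edge ? 1 : 0) << 18) |
           ((uint64_t)(inv ? 1 : 0) << 23) |
           ((cmask & 0xff) << 24);
}

int main(void)
{
    struct perf_event_attr attr;
    memset(&attr, 0, sizeof(attr));
    attr.size = sizeof(attr);
    attr.type = PERF_TYPE_RAW;
    /* inst_retired.any_p from the list: event=0xc0, no umask.  A gated event
       such as cycle_activity.stalls_mem_any (event=0xa3,umask=0x14,cmask=20)
       would be x86_raw(0xa3, 0x14, 20, 0, 0). */
    attr.config = x86_raw(0xc0, 0, 0, 0, 0);
    attr.disabled = 1;
    attr.exclude_kernel = 1;

    int fd = syscall(SYS_perf_event_open, &attr, 0, -1, -1, 0);
    if (fd < 0) { perror("perf_event_open"); return 1; }

    ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
    /* ... code under measurement ... */
    ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);

    uint64_t count = 0;
    read(fd, &count, sizeof(count));
    printf("instructions retired: %llu\n", (unsigned long long)count);
    close(fd);
    return 0;
}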
Counting: Faulting executions of GETSEC/VM entry/VM Exit/MWait will not count as retired instructionsinst_retired.any_ppipelineNumber of instructions retired. General Counter - architectural event  Spec update: SKL091, SKL044event=0xc0,period=200000300Counts the number of instructions (EOMs) retired. Counting covers macro-fused instructions individually (that is, increments by two)  Spec update: SKL091, SKL044inst_retired.noppipelineNumber of all retired NOP instructions  Spec update: SKL091, SKL044 (Precise event)event=0xc0,period=2000003,umask=200inst_retired.prec_distpipelinePrecise instruction retired event with HW to reduce effect of PEBS shadow in IP distribution  Spec update: SKL091, SKL044 (Must be precise)event=0xc0,period=2000003,umask=100A version of INST_RETIRED that allows for a more unbiased distribution of samples across instructions retired. It utilizes the Precise Distribution of Instructions Retired (PDIR) feature to mitigate some bias in how retired instructions get sampled  Spec update: SKL091, SKL044 (Must be precise)inst_retired.total_cycles_pspipelineNumber of cycles using always true condition applied to  PEBS instructions retired event  Spec update: SKL091, SKL044 (Must be precise)event=0xc0,cmask=10,inv=1,period=2000003,umask=100Number of cycles using an always true condition applied to  PEBS instructions retired event. (inst_ret< 16)  Spec update: SKL091, SKL044 (Must be precise)int_misc.clears_countpipelineClears speculative countevent=0xd,cmask=1,edge=1,period=2000003,umask=100Counts the number of speculative clears due to any type of branch misprediction or machine clearsint_misc.clear_resteer_cyclespipelineCycles the issue-stage is waiting for front-end to fetch from resteered path following branch misprediction or machine clear eventsevent=0xd,period=2000003,umask=0x8000int_misc.recovery_cyclespipelineCore cycles the allocator was stalled due to recovery from earlier clear event for this thread (e.g. misprediction or memory nuke)event=0xd,period=2000003,umask=100Core cycles the Resource allocator was stalled due to recovery from an earlier branch misprediction or machine clear eventint_misc.recovery_cycles_anypipelineCore cycles the allocator was stalled due to recovery from earlier clear event for any thread running on the physical core (e.g. misprediction or memory nuke)event=0xd,any=1,period=2000003,umask=100ld_blocks.no_srpipelineThe number of times that split load operations are temporarily blocked because all resources for handling the split accesses are in useevent=3,period=100003,umask=800The number of times that split load operations are temporarily blocked because all resources for handling the split accesses are in useld_blocks.store_forwardpipelineLoads blocked due to overlapping with a preceding store that cannot be forwardedevent=3,period=100003,umask=200Counts the number of times where store forwarding was prevented for a load operation. The most common case is a load blocked due to the address of memory access (partially) overlapping with a preceding uncompleted store. Note: See the table of not supported store forwards in the Optimization Guideld_blocks_partial.address_aliaspipelineFalse dependencies in MOB due to partial compare on addressevent=7,period=100003,umask=100Counts false dependencies in MOB when the partial comparison upon loose net check and dependency was resolved by the Enhanced Loose net mechanism. This may not result in high performance penalties. 
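Several entries above are tagged "(Precise event)" or "(Must be precise)", meaning they are PEBS-capable and, in the "must" cases, only meaningful when sampled precisely. In perf_event_open(2) terms this corresponds to setting precise_ip on a sampling event; the following is a rough sketch under that assumption, using inst_retired.prec_dist (event=0xc0, umask=1) with the period suggested in the list. Parsing the sample records out of the mmap ring buffer is omitted.

#include <linux/perf_event.h>
#include <sys/syscall.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <string.h>
#include <stdio.h>

int main(void)
{
    struct perf_event_attr attr;
    memset(&attr, 0, sizeof(attr));
    attr.size = sizeof(attr);
    attr.type = PERF_TYPE_RAW;
    attr.config = 0x01c0;          /* inst_retired.prec_dist: event=0xc0, umask=1 */
    attr.sample_period = 2000003;  /* period suggested in the list above */
    attr.sample_type = PERF_SAMPLE_IP;
    attr.precise_ip = 2;           /* request PEBS; 3 would demand zero skid */
    attr.disabled = 1;
    attr.exclude_kernel = 1;

    int fd = syscall(SYS_perf_event_open, &attr, 0, -1, -1, 0);
    if (fd < 0) { perror("perf_event_open (PEBS may be unavailable, e.g. in a VM)"); return 1; }

    /* Samples are consumed from a ring buffer mapped over the fd; the buffer
       must be 1 + 2^n pages.  Walking the perf_event_header records in it is
       not shown here. */
    size_t len = (1 + 8) * sysconf(_SC_PAGESIZE);
    void *ring = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (ring == MAP_FAILED) { perror("mmap"); return 1; }

    ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
    /* ... workload ... */
    ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);

    munmap(ring, len);
    close(fd);
    return 0;
}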
Loose net checks can fail when loads and stores are 4k aliasedload_hit_pre.sw_pfpipelineDemand load dispatches that hit L1D fill buffer (FB) allocated for software prefetchevent=0x4c,period=100003,umask=100Counts all non-software-prefetch load dispatches that hit the fill buffer (FB) allocated for the software prefetch. It can also be incremented by some lock instructions. So it should only be used with profiling so that the locks can be excluded by ASM (Assembly File) inspection of the nearby instructionslsd.cycles_4_uopspipelineCycles 4 Uops delivered by the LSD, but didn't come from the decoder. [This event is alias to LSD.CYCLES_OK]event=0xa8,cmask=4,period=2000003,umask=100Counts the cycles when 4 uops are delivered by the LSD (Loop-stream detector). [This event is alias to LSD.CYCLES_OK]lsd.cycles_okpipelineCycles 4 Uops delivered by the LSD, but didn't come from the decoder. [This event is alias to LSD.CYCLES_4_UOPS]event=0xa8,cmask=4,period=2000003,umask=100Counts the cycles when 4 uops are delivered by the LSD (Loop-stream detector). [This event is alias to LSD.CYCLES_4_UOPS]lsd.uopspipelineNumber of Uops delivered by the LSDevent=0xa8,period=2000003,umask=100Number of uops delivered to the back-end by the LSD (Loop Stream Detector)other_assists.anypipelineNumber of times a microcode assist is invoked by HW other than FP-assist. Examples include AD (page Access Dirty) and AVX* related assistsevent=0xc1,period=100003,umask=0x3f00partial_rat_stalls.scoreboardpipelineCycles where the pipeline is stalled due to serializing operationsevent=0x59,period=2000003,umask=100This event counts cycles during which the microcode scoreboard stalls happenresource_stalls.anypipelineResource-related stall cyclesevent=0xa2,period=2000003,umask=100Counts resource-related stall cyclesresource_stalls.sbpipelineCycles stalled due to no store buffers available. (not including draining from sync)event=0xa2,period=2000003,umask=800Counts allocation stall cycles caused by the store buffer (SB) being full. This counts cycles that the pipeline back-end blocked uop delivery from the front-endrob_misc_events.lbr_insertspipelineIncrements whenever there is an update to the LBR arrayevent=0xcc,period=2000003,umask=0x2000Increments when an entry is added to the Last Branch Record (LBR) array (or removed from the array in case of RETURNs in call stack mode). The event requires LBR enable via IA32_DEBUGCTL MSR and branch type selection via MSR_LBR_SELECTrob_misc_events.pause_instpipelineNumber of retired PAUSE instructions (that do not end up with a VMExit to the VMM; TSX aborted instructions may be counted). This event is not supported on first SKL and KBL productsevent=0xcc,period=2000003,umask=0x4000rs_events.empty_cyclespipelineCycles when Reservation Station (RS) is empty for the threadevent=0x5e,period=2000003,umask=100Counts cycles during which the reservation station (RS) is empty for the thread.; Note: In ST-mode, the inactive thread should drive 0. This is usually caused by severely costly branch mispredictions, or allocator/FE issuesrs_events.empty_endpipelineCounts end of periods where the Reservation Station (RS) was empty. Could be useful to precisely locate Frontend Latency Bound issuesevent=0x5e,cmask=1,edge=1,inv=1,period=2000003,umask=100Counts end of periods where the Reservation Station (RS) was empty. 
Could be useful to precisely locate front-end Latency Bound issuesuops_dispatched_port.port_0pipelineCycles per thread when uops are executed in port 0event=0xa1,period=2000003,umask=100Counts, on the per-thread basis, cycles during which at least one uop is dispatched from the Reservation Station (RS) to port 0uops_dispatched_port.port_1pipelineCycles per thread when uops are executed in port 1event=0xa1,period=2000003,umask=200Counts, on the per-thread basis, cycles during which at least one uop is dispatched from the Reservation Station (RS) to port 1uops_dispatched_port.port_2pipelineCycles per thread when uops are executed in port 2event=0xa1,period=2000003,umask=400Counts, on the per-thread basis, cycles during which at least one uop is dispatched from the Reservation Station (RS) to port 2uops_dispatched_port.port_3pipelineCycles per thread when uops are executed in port 3event=0xa1,period=2000003,umask=800Counts, on the per-thread basis, cycles during which at least one uop is dispatched from the Reservation Station (RS) to port 3uops_dispatched_port.port_4pipelineCycles per thread when uops are executed in port 4event=0xa1,period=2000003,umask=0x1000Counts, on the per-thread basis, cycles during which at least one uop is dispatched from the Reservation Station (RS) to port 4uops_dispatched_port.port_5pipelineCycles per thread when uops are executed in port 5event=0xa1,period=2000003,umask=0x2000Counts, on the per-thread basis, cycles during which at least one uop is dispatched from the Reservation Station (RS) to port 5uops_dispatched_port.port_6pipelineCycles per thread when uops are executed in port 6event=0xa1,period=2000003,umask=0x4000Counts, on the per-thread basis, cycles during which at least one uop is dispatched from the Reservation Station (RS) to port 6uops_dispatched_port.port_7pipelineCycles per thread when uops are executed in port 7event=0xa1,period=2000003,umask=0x8000Counts, on the per-thread basis, cycles during which at least one uop is dispatched from the Reservation Station (RS) to port 7uops_executed.core_cycles_nonepipelineCycles with no micro-ops executed from any thread on physical coreevent=0xb1,cmask=1,inv=1,period=2000003,umask=200uops_executed.cycles_ge_1_uop_execpipelineCycles where at least 1 uop was executed per-threadevent=0xb1,cmask=1,period=2000003,umask=100Cycles where at least 1 uop was executed per-threaduops_executed.cycles_ge_2_uops_execpipelineCycles where at least 2 uops were executed per-threadevent=0xb1,cmask=2,period=2000003,umask=100Cycles where at least 2 uops were executed per-threaduops_executed.cycles_ge_3_uops_execpipelineCycles where at least 3 uops were executed per-threadevent=0xb1,cmask=3,period=2000003,umask=100Cycles where at least 3 uops were executed per-threaduops_executed.cycles_ge_4_uops_execpipelineCycles where at least 4 uops were executed per-threadevent=0xb1,cmask=4,period=2000003,umask=100Cycles where at least 4 uops were executed per-threaduops_executed.stall_cyclespipelineCounts number of cycles no uops were dispatched to be executed on this threadevent=0xb1,cmask=1,inv=1,period=2000003,umask=100Counts cycles during which no uops were dispatched from the Reservation Station (RS) per threaduops_issued.anypipelineUops that Resource Allocation Table (RAT) issues to Reservation Station (RS)event=0xe,period=2000003,umask=100Counts the number of uops that the Resource Allocation Table (RAT) issues to the Reservation Station (RS)uops_issued.stall_cyclespipelineCycles when Resource Allocation Table (RAT) does not issue 
Uops to Reservation Station (RS) for the threadevent=0xe,cmask=1,inv=1,period=2000003,umask=100Counts cycles during which the Resource Allocation Table (RAT) does not issue any Uops to the reservation station (RS) for the current threaduops_issued.vector_width_mismatchpipelineUops inserted at issue-stage in order to preserve upper bits of vector registersevent=0xe,period=2000003,umask=200Counts the number of Blend Uops issued by the Resource Allocation Table (RAT) to the reservation station (RS) in order to preserve upper bits of vector registers. Starting with the Skylake microarchitecture, these Blend uops are needed since every Intel SSE instruction executed in Dirty Upper State needs to preserve bits 128-255 of the destination register. For more information, refer to Mixing Intel AVX and Intel SSE Code section of the Optimization Guideuops_retired.macro_fusedpipelineNumber of macro-fused uops retired. (non precise)event=0xc2,period=2000003,umask=400Counts the number of macro-fused uops retired. (non precise)uops_retired.retire_slotspipelineRetirement slots usedevent=0xc2,period=2000003,umask=200Counts the retirement slots useduops_retired.stall_cyclespipelineCycles without actually retired uopsevent=0xc2,cmask=1,inv=1,period=2000003,umask=200This event counts cycles without actually retired uopsuops_retired.total_cyclespipelineCycles with less than 10 actually retired uopsevent=0xc2,cmask=16,inv=1,period=2000003,umask=200Number of cycles using always true condition (uops_ret < 16) applied to non PEBS uops retired eventuncore_challc_misses.mmio_readuncore cacheMMIO reads. Derived from unc_cha_tor_inserts.ia_missevent=0x35,umask=0x21,config1=0x40040e3301TOR Inserts : All requests from iA Cores that Missed the LLC : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.   Does not include addressless requests such as locks and interruptsllc_misses.mmio_writeuncore cacheMMIO writes. Derived from unc_cha_tor_inserts.ia_missevent=0x35,umask=0x21,config1=0x40041e3301TOR Inserts : All requests from iA Cores that Missed the LLC : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.   Does not include addressless requests such as locks and interruptsllc_misses.uncacheableuncore cacheLLC misses - Uncacheable reads (from cpu) . Derived from unc_cha_tor_inserts.ia_missevent=0x35,umask=0x21,config1=0x40e3301TOR Inserts : All requests from iA Cores that Missed the LLC : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.   Does not include addressless requests such as locks and interruptsllc_references.streaming_fulluncore cacheStreaming stores (full cache line). Derived from unc_cha_tor_inserts.ia_missevent=0x35,umask=0x21,config1=0x418330164BytesTOR Inserts : All requests from iA Cores that Missed the LLC : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.   Does not include addressless requests such as locks and interruptsllc_references.streaming_partialuncore cacheStreaming stores (partial cache line). Derived from unc_cha_tor_inserts.ia_missevent=0x35,umask=0x21,config1=0x41a330164BytesTOR Inserts : All requests from iA Cores that Missed the LLC : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.   
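The llc_misses.* and llc_references.* entries above are derived from unc_cha_tor_inserts.ia_miss and carry an extra config1 value, the TOR opcode/address filter; in the perf_event_open(2) API that value goes into attr.config1, and the event must be opened on the uncore CHA PMU rather than the core PMU. The sketch below assumes the sysfs layout exposed by the kernel uncore driver on Skylake-SP servers (/sys/bus/event_source/devices/uncore_cha_0/type) and reads each entry above as event/umask/config1, treating the trailing two digits after every encoding as this dump's separate per-event flag field, which is an assumption about the field layout.

#include <linux/perf_event.h>
#include <sys/syscall.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <stdint.h>
#include <string.h>
#include <stdio.h>

int main(void)
{
    /* PMU type of the first CHA instance, as exposed by the kernel uncore
       driver (path is an assumption of this sketch). */
    FILE *f = fopen("/sys/bus/event_source/devices/uncore_cha_0/type", "r");
    int type = -1;
    if (!f || fscanf(f, "%d", &type) != 1) { perror("uncore_cha_0/type"); return 1; }
    fclose(f);

    struct perf_event_attr attr;
    memset(&attr, 0, sizeof(attr));
    attr.size = sizeof(attr);
    attr.type = type;
    attr.config  = 0x35 | (0x21 << 8);  /* unc_cha_tor_inserts.ia_miss: event=0x35, umask=0x21 */
    attr.config1 = 0x40040e33;          /* TOR filter value listed for llc_misses.mmio_read */
    attr.disabled = 1;

    /* Uncore events are per socket, not per task: pid = -1 and one CPU on
       the socket of interest (CPU 0 here).  Usually requires root. */
    int fd = syscall(SYS_perf_event_open, &attr, -1, 0, -1, 0);
    if (fd < 0) { perror("perf_event_open"); return 1; }

    ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
    sleep(1);
    ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);

    uint64_t mmio_reads = 0;
    read(fd, &mmio_reads, sizeof(mmio_reads));
    printf("LLC-missing MMIO reads on CHA 0, ~1s: %llu\n",
           (unsigned long long)mmio_reads);
    close(fd);
    return 0;
}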
Does not include addressless requests such as locks and interruptsunc_cha_ag0_ad_crd_acquired.tgr0uncore cacheCMS Agent0 AD Credits Acquired; For Transgress 0event=0x80,umask=101Number of CMS Agent 0 AD credits acquired in a given cycle, per transgressunc_cha_ag0_ad_crd_acquired.tgr1uncore cacheCMS Agent0 AD Credits Acquired; For Transgress 1event=0x80,umask=201Number of CMS Agent 0 AD credits acquired in a given cycle, per transgressunc_cha_ag0_ad_crd_acquired.tgr2uncore cacheCMS Agent0 AD Credits Acquired; For Transgress 2event=0x80,umask=401Number of CMS Agent 0 AD credits acquired in a given cycle, per transgressunc_cha_ag0_ad_crd_acquired.tgr3uncore cacheCMS Agent0 AD Credits Acquired; For Transgress 3event=0x80,umask=801Number of CMS Agent 0 AD credits acquired in a given cycle, per transgressunc_cha_ag0_ad_crd_acquired.tgr4uncore cacheCMS Agent0 AD Credits Acquired; For Transgress 4event=0x80,umask=0x1001Number of CMS Agent 0 AD credits acquired in a given cycle, per transgressunc_cha_ag0_ad_crd_acquired.tgr5uncore cacheCMS Agent0 AD Credits Acquired; For Transgress 5event=0x80,umask=0x2001Number of CMS Agent 0 AD credits acquired in a given cycle, per transgressunc_cha_ag0_ad_crd_occupancy.tgr0uncore cacheCMS Agent0 AD Credits Occupancy; For Transgress 0event=0x82,umask=101Number of CMS Agent 0 AD credits in use in a given cycle, per transgressunc_cha_ag0_ad_crd_occupancy.tgr1uncore cacheCMS Agent0 AD Credits Occupancy; For Transgress 1event=0x82,umask=201Number of CMS Agent 0 AD credits in use in a given cycle, per transgressunc_cha_ag0_ad_crd_occupancy.tgr2uncore cacheCMS Agent0 AD Credits Occupancy; For Transgress 2event=0x82,umask=401Number of CMS Agent 0 AD credits in use in a given cycle, per transgressunc_cha_ag0_ad_crd_occupancy.tgr3uncore cacheCMS Agent0 AD Credits Occupancy; For Transgress 3event=0x82,umask=801Number of CMS Agent 0 AD credits in use in a given cycle, per transgressunc_cha_ag0_ad_crd_occupancy.tgr4uncore cacheCMS Agent0 AD Credits Occupancy; For Transgress 4event=0x82,umask=0x1001Number of CMS Agent 0 AD credits in use in a given cycle, per transgressunc_cha_ag0_ad_crd_occupancy.tgr5uncore cacheCMS Agent0 AD Credits Occupancy; For Transgress 5event=0x82,umask=0x2001Number of CMS Agent 0 AD credits in use in a given cycle, per transgressunc_cha_ag0_bl_crd_acquired.tgr0uncore cacheCMS Agent0 BL Credits Acquired; For Transgress 0event=0x88,umask=101Number of CMS Agent 0 BL credits acquired in a given cycle, per transgressunc_cha_ag0_bl_crd_acquired.tgr1uncore cacheCMS Agent0 BL Credits Acquired; For Transgress 1event=0x88,umask=201Number of CMS Agent 0 BL credits acquired in a given cycle, per transgressunc_cha_ag0_bl_crd_acquired.tgr2uncore cacheCMS Agent0 BL Credits Acquired; For Transgress 2event=0x88,umask=401Number of CMS Agent 0 BL credits acquired in a given cycle, per transgressunc_cha_ag0_bl_crd_acquired.tgr3uncore cacheCMS Agent0 BL Credits Acquired; For Transgress 3event=0x88,umask=801Number of CMS Agent 0 BL credits acquired in a given cycle, per transgressunc_cha_ag0_bl_crd_acquired.tgr4uncore cacheCMS Agent0 BL Credits Acquired; For Transgress 4event=0x88,umask=0x1001Number of CMS Agent 0 BL credits acquired in a given cycle, per transgressunc_cha_ag0_bl_crd_acquired.tgr5uncore cacheCMS Agent0 BL Credits Acquired; For Transgress 5event=0x88,umask=0x2001Number of CMS Agent 0 BL credits acquired in a given cycle, per transgressunc_cha_ag0_bl_crd_occupancy.tgr0uncore cacheCMS Agent0 BL Credits Occupancy; For Transgress 0event=0x8a,umask=101Number 
of CMS Agent 0 BL credits in use in a given cycle, per transgressunc_cha_ag0_bl_crd_occupancy.tgr1uncore cacheCMS Agent0 BL Credits Occupancy; For Transgress 1event=0x8a,umask=201Number of CMS Agent 0 BL credits in use in a given cycle, per transgressunc_cha_ag0_bl_crd_occupancy.tgr2uncore cacheCMS Agent0 BL Credits Occupancy; For Transgress 2event=0x8a,umask=401Number of CMS Agent 0 BL credits in use in a given cycle, per transgressunc_cha_ag0_bl_crd_occupancy.tgr3uncore cacheCMS Agent0 BL Credits Occupancy; For Transgress 3event=0x8a,umask=801Number of CMS Agent 0 BL credits in use in a given cycle, per transgressunc_cha_ag0_bl_crd_occupancy.tgr4uncore cacheCMS Agent0 BL Credits Occupancy; For Transgress 4event=0x8a,umask=0x1001Number of CMS Agent 0 BL credits in use in a given cycle, per transgressunc_cha_ag0_bl_crd_occupancy.tgr5uncore cacheCMS Agent0 BL Credits Occupancy; For Transgress 5event=0x8a,umask=0x2001Number of CMS Agent 0 BL credits in use in a given cycle, per transgressunc_cha_ag1_ad_crd_acquired.tgr0uncore cacheCMS Agent1 AD Credits Acquired; For Transgress 0event=0x84,umask=101Number of CMS Agent 1 AD credits acquired in a given cycle, per transgressunc_cha_ag1_ad_crd_acquired.tgr1uncore cacheCMS Agent1 AD Credits Acquired; For Transgress 1event=0x84,umask=201Number of CMS Agent 1 AD credits acquired in a given cycle, per transgressunc_cha_ag1_ad_crd_acquired.tgr2uncore cacheCMS Agent1 AD Credits Acquired; For Transgress 2event=0x84,umask=401Number of CMS Agent 1 AD credits acquired in a given cycle, per transgressunc_cha_ag1_ad_crd_acquired.tgr3uncore cacheCMS Agent1 AD Credits Acquired; For Transgress 3event=0x84,umask=801Number of CMS Agent 1 AD credits acquired in a given cycle, per transgressunc_cha_ag1_ad_crd_acquired.tgr4uncore cacheCMS Agent1 AD Credits Acquired; For Transgress 4event=0x84,umask=0x1001Number of CMS Agent 1 AD credits acquired in a given cycle, per transgressunc_cha_ag1_ad_crd_acquired.tgr5uncore cacheCMS Agent1 AD Credits Acquired; For Transgress 5event=0x84,umask=0x2001Number of CMS Agent 1 AD credits acquired in a given cycle, per transgressunc_cha_ag1_ad_crd_occupancy.tgr0uncore cacheCMS Agent1 AD Credits Occupancy; For Transgress 0event=0x86,umask=101Number of CMS Agent 1 AD credits in use in a given cycle, per transgressunc_cha_ag1_ad_crd_occupancy.tgr1uncore cacheCMS Agent1 AD Credits Occupancy; For Transgress 1event=0x86,umask=201Number of CMS Agent 1 AD credits in use in a given cycle, per transgressunc_cha_ag1_ad_crd_occupancy.tgr2uncore cacheCMS Agent1 AD Credits Occupancy; For Transgress 2event=0x86,umask=401Number of CMS Agent 1 AD credits in use in a given cycle, per transgressunc_cha_ag1_ad_crd_occupancy.tgr3uncore cacheCMS Agent1 AD Credits Occupancy; For Transgress 3event=0x86,umask=801Number of CMS Agent 1 AD credits in use in a given cycle, per transgressunc_cha_ag1_ad_crd_occupancy.tgr4uncore cacheCMS Agent1 AD Credits Occupancy; For Transgress 4event=0x86,umask=0x1001Number of CMS Agent 1 AD credits in use in a given cycle, per transgressunc_cha_ag1_ad_crd_occupancy.tgr5uncore cacheCMS Agent1 AD Credits Occupancy; For Transgress 5event=0x86,umask=0x2001Number of CMS Agent 1 AD credits in use in a given cycle, per transgressunc_cha_ag1_bl_crd_occupancy.tgr0uncore cacheCMS Agent1 BL Credits Occupancy; For Transgress 0event=0x8e,umask=101Number of CMS Agent 1 BL credits in use in a given cycle, per transgressunc_cha_ag1_bl_crd_occupancy.tgr1uncore cacheCMS Agent1 BL Credits Occupancy; For Transgress 1event=0x8e,umask=201Number of 
CMS Agent 1 BL credits in use in a given cycle, per transgressunc_cha_ag1_bl_crd_occupancy.tgr2uncore cacheCMS Agent1 BL Credits Occupancy; For Transgress 2event=0x8e,umask=401Number of CMS Agent 1 BL credits in use in a given cycle, per transgressunc_cha_ag1_bl_crd_occupancy.tgr3uncore cacheCMS Agent1 BL Credits Occupancy; For Transgress 3event=0x8e,umask=801Number of CMS Agent 1 BL credits in use in a given cycle, per transgressunc_cha_ag1_bl_crd_occupancy.tgr4uncore cacheCMS Agent1 BL Credits Occupancy; For Transgress 4event=0x8e,umask=0x1001Number of CMS Agent 1 BL credits in use in a given cycle, per transgressunc_cha_ag1_bl_crd_occupancy.tgr5uncore cacheCMS Agent1 BL Credits Occupancy; For Transgress 5event=0x8e,umask=0x2001Number of CMS Agent 1 BL credits in use in a given cycle, per transgressunc_cha_ag1_bl_credits_acquired.tgr0uncore cacheCMS Agent1 BL Credits Acquired; For Transgress 0event=0x8c,umask=101Number of CMS Agent 1 BL credits acquired in a given cycle, per transgressunc_cha_ag1_bl_credits_acquired.tgr1uncore cacheCMS Agent1 BL Credits Acquired; For Transgress 1event=0x8c,umask=201Number of CMS Agent 1 BL credits acquired in a given cycle, per transgressunc_cha_ag1_bl_credits_acquired.tgr2uncore cacheCMS Agent1 BL Credits Acquired; For Transgress 2event=0x8c,umask=401Number of CMS Agent 1 BL credits acquired in a given cycle, per transgressunc_cha_ag1_bl_credits_acquired.tgr3uncore cacheCMS Agent1 BL Credits Acquired; For Transgress 3event=0x8c,umask=801Number of CMS Agent 1 BL credits acquired in a given cycle, per transgressunc_cha_ag1_bl_credits_acquired.tgr4uncore cacheCMS Agent1 BL Credits Acquired; For Transgress 4event=0x8c,umask=0x1001Number of CMS Agent 1 BL credits acquired in a given cycle, per transgressunc_cha_ag1_bl_credits_acquired.tgr5uncore cacheCMS Agent1 BL Credits Acquired; For Transgress 5event=0x8c,umask=0x2001Number of CMS Agent 1 BL credits acquired in a given cycle, per transgressunc_cha_bypass_cha_imc.intermediateuncore cacheCHA to iMC Bypass; Intermediate bypass Takenevent=0x57,umask=201Counts the number of times when the CHA was able to bypass HA pipe on the way to iMC.  This is a latency optimization for situations when there is light loading on the memory subsystem.  This can be filtered by when the bypass was taken and when it was not.; Filter for transactions that succeeded in taking the intermediate bypassunc_cha_bypass_cha_imc.not_takenuncore cacheCHA to iMC Bypass; Not Takenevent=0x57,umask=401Counts the number of times when the CHA was able to bypass HA pipe on the way to iMC.  This is a latency optimization for situations when there is light loading on the memory subsystem.  This can be filtered by when the bypass was taken and when it was not.; Filter for transactions that could not take the bypass, and issues a read to memory. Note that transactions that did not take the bypass but did not issue read to memory will not be countedunc_cha_bypass_cha_imc.takenuncore cacheCHA to iMC Bypass; Takenevent=0x57,umask=101Counts the number of times when the CHA was able to bypass HA pipe on the way to iMC.  This is a latency optimization for situations when there is light loading on the memory subsystem.  
This can be filtered by when the bypass was taken and when it was not.; Filter for transactions that succeeded in taking the full bypassunc_cha_clockticksuncore cacheUncore cache clock ticksevent=001Counts clockticks of the clock controlling the uncore caching and home agent (CHA)unc_cha_cms_clockticksuncore cacheCMS Clockticksevent=0xc001unc_cha_core_pma.c1_stateuncore cacheCore PMA Events; C1 Stateevent=0x17,umask=101unc_cha_core_pma.c1_transitionuncore cacheCore PMA Events; C1 Transitionevent=0x17,umask=201unc_cha_core_pma.c6_stateuncore cacheCore PMA Events; C6 Stateevent=0x17,umask=401unc_cha_core_pma.c6_transitionuncore cacheCore PMA Events; C6 Transitionevent=0x17,umask=801unc_cha_core_pma.gvuncore cacheCore PMA Events; GVevent=0x17,umask=0x1001unc_cha_core_snp.any_gtoneuncore cacheCore Cross Snoops Issued; Any Cycle with Multiple Snoopsevent=0x33,umask=0xe201Counts the number of transactions that trigger a configurable number of cross snoops.  Cores are snooped if the transaction looks up the cache and determines that it is necessary based on the operation type and what CoreValid bits are set.  For example, if 2 CV bits are set on a data read, the cores must have the data in S state so it is not necessary to snoop them.  However, if only 1 CV bit is set the core may have modified the data.  If the transaction was an RFO, it would need to invalidate the lines.  This event can be filtered based on who triggered the initial snoop(s)unc_cha_core_snp.any_oneuncore cacheCore Cross Snoops Issued; Any Single Snoopevent=0x33,umask=0xe101Counts the number of transactions that trigger a configurable number of cross snoops.  Cores are snooped if the transaction looks up the cache and determines that it is necessary based on the operation type and what CoreValid bits are set.  For example, if 2 CV bits are set on a data read, the cores must have the data in S state so it is not necessary to snoop them.  However, if only 1 CV bit is set the core may have modified the data.  If the transaction was an RFO, it would need to invalidate the lines.  This event can be filtered based on who triggered the initial snoop(s)unc_cha_core_snp.any_remoteuncore cacheCore Cross Snoops Issued; Any Snoop to Remote Nodeevent=0x33,umask=0xe401Counts the number of transactions that trigger a configurable number of cross snoops.  Cores are snooped if the transaction looks up the cache and determines that it is necessary based on the operation type and what CoreValid bits are set.  For example, if 2 CV bits are set on a data read, the cores must have the data in S state so it is not necessary to snoop them.  However, if only 1 CV bit is set the core may have modified the data.  If the transaction was an RFO, it would need to invalidate the lines.  This event can be filtered based on who triggered the initial snoop(s)unc_cha_core_snp.core_gtoneuncore cacheCore Cross Snoops Issued; Multiple Core Requestsevent=0x33,umask=0x4201Counts the number of transactions that trigger a configurable number of cross snoops.  Cores are snooped if the transaction looks up the cache and determines that it is necessary based on the operation type and what CoreValid bits are set.  For example, if 2 CV bits are set on a data read, the cores must have the data in S state so it is not necessary to snoop them.  However, if only 1 CV bit is set the core may have modified the data.  If the transaction was an RFO, it would need to invalidate the lines.  
This event can be filtered based on who triggered the initial snoop(s)unc_cha_core_snp.core_oneuncore cacheCore Cross Snoops Issued; Single Core Requestsevent=0x33,umask=0x4101Counts the number of transactions that trigger a configurable number of cross snoops.  Cores are snooped if the transaction looks up the cache and determines that it is necessary based on the operation type and what CoreValid bits are set.  For example, if 2 CV bits are set on a data read, the cores must have the data in S state so it is not necessary to snoop them.  However, if only 1 CV bit is set the core may have modified the data.  If the transaction was an RFO, it would need to invalidate the lines.  This event can be filtered based on who triggered the initial snoop(s)unc_cha_core_snp.core_remoteuncore cacheCore Cross Snoops Issued; Core Request to Remote Nodeevent=0x33,umask=0x4401Counts the number of transactions that trigger a configurable number of cross snoops.  Cores are snooped if the transaction looks up the cache and determines that it is necessary based on the operation type and what CoreValid bits are set.  For example, if 2 CV bits are set on a data read, the cores must have the data in S state so it is not necessary to snoop them.  However, if only 1 CV bit is set the core may have modified the data.  If the transaction was an RFO, it would need to invalidate the lines.  This event can be filtered based on who triggered the initial snoop(s)unc_cha_core_snp.evict_gtoneuncore cacheCore Cross Snoops Issued; Multiple Evictionevent=0x33,umask=0x8201Counts the number of transactions that trigger a configurable number of cross snoops.  Cores are snooped if the transaction looks up the cache and determines that it is necessary based on the operation type and what CoreValid bits are set.  For example, if 2 CV bits are set on a data read, the cores must have the data in S state so it is not necessary to snoop them.  However, if only 1 CV bit is set the core may have modified the data.  If the transaction was an RFO, it would need to invalidate the lines.  This event can be filtered based on who triggered the initial snoop(s)unc_cha_core_snp.evict_oneuncore cacheCore Cross Snoops Issued; Single Evictionevent=0x33,umask=0x8101Counts the number of transactions that trigger a configurable number of cross snoops.  Cores are snooped if the transaction looks up the cache and determines that it is necessary based on the operation type and what CoreValid bits are set.  For example, if 2 CV bits are set on a data read, the cores must have the data in S state so it is not necessary to snoop them.  However, if only 1 CV bit is set the core may have modified the data.  If the transaction was an RFO, it would need to invalidate the lines.  This event can be filtered based on who triggered the initial snoop(s)unc_cha_core_snp.evict_remoteuncore cacheCore Cross Snoops Issued; Eviction to Remote Nodeevent=0x33,umask=0x8401Counts the number of transactions that trigger a configurable number of cross snoops.  Cores are snooped if the transaction looks up the cache and determines that it is necessary based on the operation type and what CoreValid bits are set.  For example, if 2 CV bits are set on a data read, the cores must have the data in S state so it is not necessary to snoop them.  However, if only 1 CV bit is set the core may have modified the data.  If the transaction was an RFO, it would need to invalidate the lines.  
This event can be filtered based on who triggered the initial snoop(s)unc_cha_core_snp.ext_gtoneuncore cacheCore Cross Snoops Issued; Multiple External Snoopsevent=0x33,umask=0x2201Counts the number of transactions that trigger a configurable number of cross snoops.  Cores are snooped if the transaction looks up the cache and determines that it is necessary based on the operation type and what CoreValid bits are set.  For example, if 2 CV bits are set on a data read, the cores must have the data in S state so it is not necessary to snoop them.  However, if only 1 CV bit is set the core may have modified the data.  If the transaction was an RFO, it would need to invalidate the lines.  This event can be filtered based on who triggered the initial snoop(s)unc_cha_core_snp.ext_oneuncore cacheCore Cross Snoops Issued; Single External Snoopsevent=0x33,umask=0x2101Counts the number of transactions that trigger a configurable number of cross snoops.  Cores are snooped if the transaction looks up the cache and determines that it is necessary based on the operation type and what CoreValid bits are set.  For example, if 2 CV bits are set on a data read, the cores must have the data in S state so it is not necessary to snoop them.  However, if only 1 CV bit is set the core may have modified the data.  If the transaction was an RFO, it would need to invalidate the lines.  This event can be filtered based on who triggered the initial snoop(s)unc_cha_core_snp.ext_remoteuncore cacheCore Cross Snoops Issued; External Snoop to Remote Nodeevent=0x33,umask=0x2401Counts the number of transactions that trigger a configurable number of cross snoops.  Cores are snooped if the transaction looks up the cache and determines that it is necessary based on the operation type and what CoreValid bits are set.  For example, if 2 CV bits are set on a data read, the cores must have the data in S state so it is not necessary to snoop them.  However, if only 1 CV bit is set the core may have modified the data.  If the transaction was an RFO, it would need to invalidate the lines.  This event can be filtered based on who triggered the initial snoop(s)unc_cha_counter0_occupancyuncore cacheCounter 0 Occupancyevent=0x1f01Since occupancy counts can only be captured in the Cbo's 0 counter, this event allows a user to capture occupancy related information by filtering the Cb0 occupancy count captured in Counter 0.   The filtering available is found in the control register - threshold, invert and edge detect.   E.g. setting threshold to 1 can effectively monitor how many cycles the monitored queue has an entryunc_cha_dir_lookup.no_snpuncore cacheMulti-socket cacheline Directory state lookups; Snoop Not Neededevent=0x53,umask=201Counts transactions that looked into the multi-socket cacheline Directory state, and therefore did not send a snoop because the Directory indicated it was not neededunc_cha_dir_lookup.snpuncore cacheMulti-socket cacheline Directory state lookups; Snoop Neededevent=0x53,umask=101Counts transactions that looked into the multi-socket cacheline Directory state, and sent one or more snoops, because the Directory indicated it was neededunc_cha_dir_update.hauncore cacheMulti-socket cacheline Directory state updates; Directory Updated memory write from the HA pipeevent=0x54,umask=101Counts only multi-socket cacheline Directory state updates due to memory writes issued from the HA pipe. 
This does not include memory write requests which are for I (Invalid) or E (Exclusive) cachelinesunc_cha_dir_update.toruncore cacheMulti-socket cacheline Directory state updates; Directory Updated memory write from TOR pipeevent=0x54,umask=201Counts only multi-socket cacheline Directory state updates due to memory writes issued from the TOR pipe which are the result of remote transaction hitting the SF/LLC and returning data Core2Core. This does not include memory write requests which are for I (Invalid) or E (Exclusive) cachelinesunc_cha_egress_ordering.iv_snoopgo_dnuncore cacheEgress Blocking due to Ordering requirements; Downevent=0xae,umask=401Counts number of cycles IV was blocked in the TGR Egress due to SNP/GO Ordering requirementsunc_cha_egress_ordering.iv_snoopgo_upuncore cacheEgress Blocking due to Ordering requirements; Upevent=0xae,umask=101Counts number of cycles IV was blocked in the TGR Egress due to SNP/GO Ordering requirementsunc_cha_fast_asserted.horzuncore cacheFaST wire asserted; Horizontalevent=0xa5,umask=201Counts the number of cycles either the local or incoming distress signals are asserted.  Incoming distress includes up, dn and acrossunc_cha_fast_asserted.vertuncore cacheFaST wire asserted; Verticalevent=0xa5,umask=101Counts the number of cycles either the local or incoming distress signals are asserted.  Incoming distress includes up, dn and acrossunc_cha_hitme_hit.ex_rdsuncore cacheRead request from a remote socket which hit in the HitMe Cache to a line In the E stateevent=0x5f,umask=101Counts read requests from a remote socket which hit in the HitME cache (used to cache the multi-socket Directory state) to a line in the E(Exclusive) state.  This includes the following read opcodes (RdCode, RdData, RdDataMigratory, RdCur, RdInv*, Inv*)unc_cha_hitme_hit.shared_ownrequncore cacheCounts Number of Hits in HitMe Cache; Shared hit and op is RdInvOwn, RdInv, Inv*event=0x5f,umask=401unc_cha_hitme_hit.wbmtoeuncore cacheCounts Number of Hits in HitMe Cache; op is WbMtoEevent=0x5f,umask=801unc_cha_hitme_hit.wbmtoi_or_suncore cacheCounts Number of Hits in HitMe Cache; op is WbMtoI, WbPushMtoI, WbFlush, or WbMtoSevent=0x5f,umask=0x1001unc_cha_hitme_lookup.readuncore cacheCounts Number of times HitMe Cache is accessed; op is RdCode, RdData, RdDataMigratory, RdCur, RdInvOwn, RdInv, Inv*event=0x5e,umask=101unc_cha_hitme_lookup.writeuncore cacheCounts Number of times HitMe Cache is accessed; op is WbMtoE, WbMtoI, WbPushMtoI, WbFlush, or WbMtoSevent=0x5e,umask=201unc_cha_hitme_miss.notshared_rdinvownuncore cacheCounts Number of Misses in HitMe Cache; No SF/LLC HitS/F and op is RdInvOwnevent=0x60,umask=0x4001unc_cha_hitme_miss.read_or_invuncore cacheCounts Number of Misses in HitMe Cache; op is RdCode, RdData, RdDataMigratory, RdCur, RdInv, Inv*event=0x60,umask=0x8001unc_cha_hitme_miss.shared_rdinvownuncore cacheCounts Number of Misses in HitMe Cache; SF/LLC HitS/F and op is RdInvOwnevent=0x60,umask=0x2001unc_cha_hitme_update.deallocateuncore cacheCounts the number of Allocate/Update to HitMe Cache; Deallocate HitME$ on Reads without RspFwdI*event=0x61,umask=0x1001unc_cha_hitme_update.deallocate_rspfwdi_locuncore cacheCounts the number of Allocate/Update to HitMe Cache; op is RspIFwd or RspIFwdWb for a local requestevent=0x61,umask=101Received RspFwdI* for a local request, but converted HitME$ to SF entryunc_cha_hitme_update.rdinvownuncore cacheCounts the number of Allocate/Update to HitMe Cache; Update HitMe Cache on RdInvOwn even if not 
RspFwdI*event=0x61,umask=801unc_cha_hitme_update.rspfwdi_remuncore cacheCounts the number of Allocate/Update to HitMe Cache; op is RspIFwd or RspIFwdWb for a remote requestevent=0x61,umask=201Updated HitME$ on RspFwdI* or local HitM/E received for a remote requestunc_cha_hitme_update.shareduncore cacheCounts the number of Allocate/Update to HitMe Cache; Update HitMe Cache to SHARedevent=0x61,umask=401unc_cha_horz_ring_ad_in_use.left_evenuncore cacheHorizontal AD Ring In Use; Left and Evenevent=0xa7,umask=101Counts the number of cycles that the Horizontal AD ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.  We really have two rings -- a clockwise ring and a counter-clockwise ring.  On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring.  On the right side of the ring, this is reversed.  The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring.  In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ringunc_cha_horz_ring_ad_in_use.left_odduncore cacheHorizontal AD Ring In Use; Left and Oddevent=0xa7,umask=201Counts the number of cycles that the Horizontal AD ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.  We really have two rings -- a clockwise ring and a counter-clockwise ring.  On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring.  On the right side of the ring, this is reversed.  The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring.  In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ringunc_cha_horz_ring_ad_in_use.right_evenuncore cacheHorizontal AD Ring In Use; Right and Evenevent=0xa7,umask=401Counts the number of cycles that the Horizontal AD ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.  We really have two rings -- a clockwise ring and a counter-clockwise ring.  On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring.  On the right side of the ring, this is reversed.  The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring.  In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ringunc_cha_horz_ring_ad_in_use.right_odduncore cacheHorizontal AD Ring In Use; Right and Oddevent=0xa7,umask=801Counts the number of cycles that the Horizontal AD ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.  We really have two rings -- a clockwise ring and a counter-clockwise ring.  On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring.  On the right side of the ring, this is reversed.  
The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring.  In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ringunc_cha_horz_ring_ak_in_use.left_evenuncore cacheHorizontal AK Ring In Use; Left and Evenevent=0xa9,umask=101Counts the number of cycles that the Horizontal AK ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring.  On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring.  On the right side of the ring, this is reversed.  The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring.  In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ringunc_cha_horz_ring_ak_in_use.left_odduncore cacheHorizontal AK Ring In Use; Left and Oddevent=0xa9,umask=201Counts the number of cycles that the Horizontal AK ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring.  On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring.  On the right side of the ring, this is reversed.  The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring.  In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ringunc_cha_horz_ring_ak_in_use.right_evenuncore cacheHorizontal AK Ring In Use; Right and Evenevent=0xa9,umask=401Counts the number of cycles that the Horizontal AK ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring.  On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring.  On the right side of the ring, this is reversed.  The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring.  In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ringunc_cha_horz_ring_ak_in_use.right_odduncore cacheHorizontal AK Ring In Use; Right and Oddevent=0xa9,umask=801Counts the number of cycles that the Horizontal AK ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring.  On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring.  On the right side of the ring, this is reversed.  The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring.  
In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ringunc_cha_horz_ring_bl_in_use.left_evenuncore cacheHorizontal BL Ring in Use; Left and Evenevent=0xab,umask=101Counts the number of cycles that the Horizontal BL ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from  the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring.  On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring.  On the right side of the ring, this is reversed.  The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring.  In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ringunc_cha_horz_ring_bl_in_use.left_odduncore cacheHorizontal BL Ring in Use; Left and Oddevent=0xab,umask=201Counts the number of cycles that the Horizontal BL ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from  the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring.  On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring.  On the right side of the ring, this is reversed.  The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring.  In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ringunc_cha_horz_ring_bl_in_use.right_evenuncore cacheHorizontal BL Ring in Use; Right and Evenevent=0xab,umask=401Counts the number of cycles that the Horizontal BL ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from  the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring.  On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring.  On the right side of the ring, this is reversed.  The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring.  In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ringunc_cha_horz_ring_bl_in_use.right_odduncore cacheHorizontal BL Ring in Use; Right and Oddevent=0xab,umask=801Counts the number of cycles that the Horizontal BL ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from  the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring.  On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring.  On the right side of the ring, this is reversed.  The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring.  
In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ringunc_cha_horz_ring_iv_in_use.leftuncore cacheHorizontal IV Ring in Use; Leftevent=0xad,umask=101Counts the number of cycles that the Horizontal IV ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.  There is only 1 IV ring.  Therefore, if one wants to monitor the Even ring, they should select both UP_EVEN and DN_EVEN.  To monitor the Odd ring, they should select both UP_ODD and DN_ODDunc_cha_horz_ring_iv_in_use.rightuncore cacheHorizontal IV Ring in Use; Rightevent=0xad,umask=401Counts the number of cycles that the Horizontal IV ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.  There is only 1 IV ring.  Therefore, if one wants to monitor the Even ring, they should select both UP_EVEN and DN_EVEN.  To monitor the Odd ring, they should select both UP_ODD and DN_ODDunc_cha_imc_reads_count.normaluncore cacheNormal priority reads issued to the memory controller from the CHAevent=0x59,umask=101Counts when a normal (Non-Isochronous) read is issued to any of the memory controller channels from the CHAunc_cha_imc_reads_count.priorityuncore cacheHA to iMC Reads Issued; ISOCHevent=0x59,umask=201Count of the number of reads issued to any of the memory controller channels.  This can be filtered by the priority of the readsunc_cha_imc_writes_count.fulluncore cacheCHA to iMC Full Line Writes Issued; Full Line Non-ISOCHevent=0x5b,umask=101Counts when a normal (Non-Isochronous) full line write is issued from the CHA to the any of the memory controller channelsunc_cha_imc_writes_count.full_miguncore cacheWrites Issued to the iMC by the HA; Full Line MIGevent=0x5b,umask=0x1001Counts the total number of writes issued from the HA into the memory controller.  This counts for all four channels.  It can be filtered by full/partial and ISOCH/non-ISOCHunc_cha_imc_writes_count.full_priorityuncore cacheWrites Issued to the iMC by the HA; ISOCH Full Lineevent=0x5b,umask=401Counts the total number of writes issued from the HA into the memory controller.  This counts for all four channels.  It can be filtered by full/partial and ISOCH/non-ISOCHunc_cha_imc_writes_count.partialuncore cacheWrites Issued to the iMC by the HA; Partial Non-ISOCHevent=0x5b,umask=201Counts the total number of writes issued from the HA into the memory controller.  This counts for all four channels.  It can be filtered by full/partial and ISOCH/non-ISOCHunc_cha_imc_writes_count.partial_miguncore cacheWrites Issued to the iMC by the HA; Partial MIGevent=0x5b,umask=0x2001Counts the total number of writes issued from the HA into the memory controller.  This counts for all four channels.  It can be filtered by full/partial and ISOCH/non-ISOCH.; Filter for memory controller 5 onlyunc_cha_imc_writes_count.partial_priorityuncore cacheWrites Issued to the iMC by the HA; ISOCH Partialevent=0x5b,umask=801Counts the total number of writes issued from the HA into the memory controller.  This counts for all four channels.  
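Events such as unc_cha_imc_reads_count.normal above count per CHA slice, so a socket-level figure requires opening the same encoding on every uncore_cha_* PMU instance and summing the reads. Below is a sketch under the same sysfs assumptions as before; the 64-instance cap and the one-second measurement window are arbitrary choices for illustration.

#include <linux/perf_event.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <stdint.h>
#include <string.h>
#include <stdio.h>
#include <glob.h>

int main(void)
{
    /* One PMU instance per CHA slice; the sysfs pattern is an assumption. */
    glob_t g;
    if (glob("/sys/bus/event_source/devices/uncore_cha_*/type", 0, NULL, &g) != 0) {
        fprintf(stderr, "no uncore_cha PMUs found\n");
        return 1;
    }

    int fds[64];
    size_t n = 0;
    for (size_t i = 0; i < g.gl_pathc && n < 64; i++) {
        FILE *f = fopen(g.gl_pathv[i], "r");
        if (!f) continue;
        int type, ok = (fscanf(f, "%d", &type) == 1);
        fclose(f);
        if (!ok) continue;

        struct perf_event_attr attr;
        memset(&attr, 0, sizeof(attr));
        attr.size = sizeof(attr);
        attr.type = type;
        attr.config = 0x59 | (1 << 8);  /* unc_cha_imc_reads_count.normal */
        /* disabled is left 0: each counter runs from open; the small skew
           between the opens is ignored in this sketch. */
        int fd = syscall(SYS_perf_event_open, &attr, -1, 0, -1, 0);
        if (fd >= 0) fds[n++] = fd;
    }
    globfree(&g);

    sleep(1);

    uint64_t total = 0, v;
    for (size_t i = 0; i < n; i++) {
        if (read(fds[i], &v, sizeof(v)) == (ssize_t)sizeof(v)) total += v;
        close(fds[i]);
    }
    printf("normal-priority CHA->iMC reads across %zu CHAs, ~1s: %llu\n",
           n, (unsigned long long)total);
    return 0;
}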
It can be filtered by full/partial and ISOCH/non-ISOCHunc_cha_iodc_alloc.invitomuncore cacheCounts Number of times IODC entry allocation is attempted; Number of IODC allocationsevent=0x62,umask=101unc_cha_iodc_alloc.iodcfulluncore cacheCounts Number of times IODC entry allocation is attempted; Number of IODC allocations dropped due to IODC Fullevent=0x62,umask=201unc_cha_iodc_alloc.osbgateduncore cacheCounts Number of times IODC entry allocation is attempted; Number of IODC allocations dropped due to OSB gateevent=0x62,umask=401unc_cha_iodc_dealloc.alluncore cacheCounts number of IODC deallocations; IODC deallocated due to any reasonevent=0x63,umask=0x1001unc_cha_iodc_dealloc.snpoutuncore cacheCounts number of IODC deallocations; IODC deallocated due to conflicting transactionevent=0x63,umask=801unc_cha_iodc_dealloc.wbmtoeuncore cacheCounts number of IODC deallocations; IODC deallocated due to WbMtoEevent=0x63,umask=101unc_cha_iodc_dealloc.wbmtoiuncore cacheCounts number of IODC deallocations; IODC deallocated due to WbMtoIevent=0x63,umask=201unc_cha_iodc_dealloc.wbpushmtoiuncore cacheCounts number of IODC deallocations; IODC deallocated due to WbPushMtoIevent=0x63,umask=401Moved to Cbo sectionunc_cha_llc_lookup.anyuncore cacheCache and Snoop Filter Lookups; Any Requestevent=0x34,umask=0x1101Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2.  This has numerous filters available.  Note the non-standard filtering equation.  This event will count requests that lookup the cache multiple times with multiple increments.  One must ALWAYS set umask bit 0 and select a state or states to match.  Otherwise, the event will count nothing.   CHAFilter0[24:21,17] bits correspond to [FMESI] state.; Filters for any transaction originating from the IPQ or IRQ.  This does not include lookups originating from the ISMQunc_cha_llc_lookup.data_readuncore cacheCache and Snoop Filter Lookups; Data Read Requestevent=0x34,umask=301Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2.  This has numerous filters available.  Note the non-standard filtering equation.  This event will count requests that lookup the cache multiple times with multiple increments.  One must ALWAYS set umask bit 0 and select a state or states to match.  Otherwise, the event will count nothing.   CHAFilter0[24:21,17] bits correspond to [FMESI] state.; Read transactionsunc_cha_llc_lookup.localuncore cacheCache and Snoop Filter Lookups; Localevent=0x34,umask=0x3101Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2.  This has numerous filters available.  Note the non-standard filtering equation.  This event will count requests that lookup the cache multiple times with multiple increments.  One must ALWAYS set umask bit 0 and select a state or states to match.  Otherwise, the event will count nothing.   CHAFilter0[24:21,17] bits correspond to [FMESI] stateunc_cha_llc_lookup.remoteuncore cacheCache and Snoop Filter Lookups; Remoteevent=0x34,umask=0x9101Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2.  This has numerous filters available.  Note the non-standard filtering equation.  This event will count requests that lookup the cache multiple times with multiple increments.  One must ALWAYS set umask bit 0 and select a state or states to match.  Otherwise, the event will count nothing.   
unc_cha_llc_victims  (uncore cache, event=0x37)
  Lines Victimized. Counts the number of lines that were victimized on a fill; can be filtered by the state the line was in.
    .total_m     umask=1     Lines in M state
    .total_e     umask=2     Lines in E state
    .total_s     umask=4     Lines in S state
    .total_f     umask=8     Lines in F state
    .local_m     umask=0x21  Local - lines in M state
    .local_e     umask=0x22  Local - lines in E state
    .local_s     umask=0x24  Local - lines in S state
    .local_f     umask=0x28  Local - lines in F state
    .local_all   umask=0x2f  Local - all lines
    .remote_m    umask=0x81  Remote - lines in M state
    .remote_e    umask=0x82  Remote - lines in E state
    .remote_s    umask=0x84  Remote - lines in S state
    .remote_f    umask=0x88  Remote - lines in F state
    .remote_all  umask=0x8f  Remote - all lines
  Deprecated aliases: .m_state (umask=1, refer to .total_m), .e_state (umask=2, refer to .total_e), .s_state (umask=4, refer to .total_s), .f_state (umask=8, refer to .total_f), .local (umask=0x20), .remote (umask=0x80, refer to .remote_all).
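One practical use of the M-state victim counts: each victimized modified line implies one 64-byte cacheline written back toward memory, so the counts translate directly into an approximate dirty-eviction bandwidth. A back-of-envelope sketch with made-up sample numbers:

# Approximate writeback bandwidth implied by UNC_CHA_LLC_VICTIMS.TOTAL_M:
# every modified line victimized on a fill is one 64-byte line written back.
CACHELINE_BYTES = 64

def writeback_mb_per_s(total_m_victims: int, seconds: float) -> float:
    return total_m_victims * CACHELINE_BYTES / seconds / 1e6

# 10M M-state victims in 1s -> roughly 640 MB/s of dirty evictions.
print(f"{writeback_mb_per_s(10_000_000, 1.0):.1f} MB/s")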
unc_cha_misc  (uncore cache, event=0x39)
  Cbo Misc: miscellaneous events in the Cbo.
    .rspi_was_fse   umask=1     Silent Snoop Eviction: counts the number of times a snoop hit in the F, S or E states and triggered a silent eviction. This is useful because this information is lost in the PRE encodings.
    .wc_aliasing    umask=2     Write Combining Aliasing: counts the number of times a USWC write (WCIL(F)) transaction hit in the LLC in M state, triggering a WbMtoI followed by the USWC write. This occurs when there is WC aliasing.
    .rfo_hit_s      umask=8     Counts when an RFO (the read for ownership issued before a write) request hit a cacheline in the S (Shared) state.
    .cv0_pref_vic   umask=0x10  CV0 Prefetch Victim
    .cv0_pref_miss  umask=0x20  CV0 Prefetch Miss

unc_cha_osb  (uncore cache, event=0x55)
  OSB Snoop Broadcast. Counts OSB snoop broadcasts, incrementing by 1 per request that causes OSB snoops to be broadcast; it does not count all the snoops generated by OSB.
unc_cha_pmm_memmode_nm_setconflicts  (uncore cache, event=0x64)
  Memory Mode (2LM) related events: near-memory (NM) set conflicts seen by the CHA.
    .sf          umask=1     NM evictions due to another read to the same near-memory set in the SF
    .llc         umask=2     NM evictions due to another read to the same near-memory set in the LLC
    .tor         umask=4     No reject in the CHA due to a pending read to the same near-memory set in the TOR
    .tor_reject  umask=8     Rejects in the CHA due to a pending read to the same near-memory set in the TOR
    .iodc        umask=0x10  NM set conflicts seen in the IODC

unc_cha_read_no_credits  (uncore cache, event=0x58)
  CHA iMC CHNx READ Credits Empty. Counts the number of times there were no credits available for sending reads from the CHA into the iMC. In order to send reads into the memory controller, the HA must first acquire a credit for the iMC's AD Ingress queue. Filterable per memory controller:
    .mc0_smi0   umask=1     Memory controller 0 only
    .mc1_smi1   umask=2     Memory controller 1 only
    .edc0_smi2  umask=4     Memory controller 2 only
    .edc1_smi3  umask=8     Memory controller 3 only
    .edc2_smi4  umask=0x10  Memory controller 4 only
    .edc3_smi5  umask=0x20  Memory controller 5 only
unc_cha_requests  (uncore cache, event=0x50)
  Read and Write Requests made into this CHA (the Home Agent). Reads include all read opcodes, including RFO (the read for ownership issued before a write); writes include all writes: streaming, evictions, HitM (reads from another core to a modified cacheline), etc.
    .reads_local     umask=1     Read requests from a unit on this socket
    .reads_remote    umask=2     Read requests from a remote socket
    .reads           umask=3     All read requests
    .writes_local    umask=4     Write requests from a unit on this socket
    .writes_remote   umask=8     Write requests from a remote socket
    .writes          umask=0xc   All write requests
    .invitoe_local   umask=0x10  Local requests for exclusive ownership of a cache line without receiving data (InvItoE)
    .invitoe_remote  umask=0x20  Requests from a remote socket for exclusive ownership of a cache line without receiving data (InvItoE)
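The names on the left of each entry double as perf event aliases, so the raw encodings are not strictly needed: perf resolves unc_cha_requests.reads to event=0x50,umask=3 on every CHA instance and aggregates them. A minimal sketch, assuming a perf build whose pmu-events tables match this CPU:

import subprocess

# System-wide CHA read/write request counts for one second, by alias name.
out = subprocess.run(
    ["perf", "stat",
     "-e", "unc_cha_requests.reads,unc_cha_requests.writes",
     "-a", "sleep", "1"],
    capture_output=True, text=True,
).stderr
print(out)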
unc_cha_ring_bounces_horz  (uncore cache, event=0xa1)
  Messages that bounced on the Horizontal Ring: number of cycles in which incoming messages from the Horizontal ring were bounced, by ring type.
    .ad  umask=1  AD
    .ak  umask=2  AK
    .bl  umask=4  BL
    .iv  umask=8  IV

unc_cha_ring_bounces_vert  (uncore cache, event=0xa0)
  Messages that bounced on the Vertical Ring: number of cycles in which incoming messages from the Vertical ring were bounced, by ring type.
    .ad  umask=1  AD
    .ak  umask=2  Acknowledgements to core
    .bl  umask=4  Data responses to core
    .iv  umask=8  Snoops of the processor's cache

unc_cha_ring_sink_starved_horz  (uncore cache, event=0xa3)
  Sink Starvation on the Horizontal Ring.
    .ad      umask=1     AD
    .ak      umask=2     AK
    .bl      umask=4     BL
    .iv      umask=8     IV
    .ak_ag1  umask=0x20  Acknowledgements to Agent 1

unc_cha_ring_sink_starved_vert  (uncore cache, event=0xa2)
  Sink Starvation on the Vertical Ring.
    .ad  umask=1  AD
    .ak  umask=2  Acknowledgements to core
    .bl  umask=4  Data responses to core
    .iv  umask=8  Snoops of the processor's cache

unc_cha_ring_src_thrtl  (uncore cache, event=0xa4)
  Source Throttle.

unc_cha_rxc_inserts  (uncore cache, event=0x13)
  Ingress (from CMS) Allocations: counts the number of allocations per cycle into the specified ingress queue.
    .irq      umask=1     IRQ
    .irq_rej  umask=2     IRQ rejected
    .ipq      umask=4     IPQ
    .prq      umask=0x10  PRQ
    .prq_rej  umask=0x20  PRQ rejected
    .rrq      umask=0x40  RRQ
    .wbq      umask=0x80  WBQ
unc_cha_rxc_ipq0_reject  (uncore cache, event=0x22)
  Ingress Probe Queue Rejects, broken down by message class.
    .ad_req_vn0  umask=1     AD REQ on VN0
    .ad_rsp_vn0  umask=2     AD RSP on VN0
    .bl_rsp_vn0  umask=4     BL RSP on VN0
    .bl_wb_vn0   umask=8     BL WB on VN0
    .bl_ncb_vn0  umask=0x10  BL NCB on VN0
    .bl_ncs_vn0  umask=0x20  BL NCS on VN0
    .ak_non_upi  umask=0x40  Non-UPI AK request
    .iv_non_upi  umask=0x80  Non-UPI IV request

unc_cha_rxc_ipq1_reject  (uncore cache, event=0x23)
  Ingress Probe Queue Rejects, broken down by reject reason.
    .any0           umask=1     ANY0
    .ha             umask=2     HA
    .llc_victim     umask=4     LLC victim
    .sf_victim      umask=8     SF victim
    .victim         umask=0x10  Victim
    .llc_or_sf_way  umask=0x20  LLC or SF way (the two are merged together to make room for ANY_REJECT_*)
    .allow_snp      umask=0x40  Allow snoop
    .pa_match       umask=0x80  PhyAddr match

unc_cha_rxc_irq0_reject  (uncore cache, event=0x18)
  Ingress (from CMS) Request Queue Rejects, with the same message-class breakdown (sub-event names and umask values) as unc_cha_rxc_ipq0_reject above.

unc_cha_rxc_irq1_reject  (uncore cache, event=0x19)
  Ingress (from CMS) Request Queue Rejects, with the same reject-reason breakdown as unc_cha_rxc_ipq1_reject above.
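A simple way to read the reject counters against the allocation counters above: the ratio of rejected IRQ allocations (unc_cha_rxc_inserts.irq_rej) to total IRQ allocations (unc_cha_rxc_inserts.irq) gives the fraction of requests that hit back-pressure. A sketch with illustrative numbers:

def irq_reject_fraction(irq_inserts: int, irq_rejects: int) -> float:
    """Fraction of IRQ allocations that were rejected (0.0 if idle)."""
    return irq_rejects / irq_inserts if irq_inserts else 0.0

# 12,500 rejects out of 1M inserts -> 1.25% of requests saw back-pressure.
print(f"{irq_reject_fraction(1_000_000, 12_500):.2%}")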
unc_cha_rxc_ismq0_reject  (uncore cache, event=0x24)
  ISMQ Rejects, with the same message-class breakdown as unc_cha_rxc_ipq0_reject above. Number of times a transaction flowing through the ISMQ had to retry. Transactions pass through the ISMQ as responses for requests that already exist in the Cbo; examples include when data is returned or when snoop responses come back from the cores.

unc_cha_rxc_ismq0_retry  (uncore cache, event=0x2c)
  ISMQ Retries, with the same message-class breakdown and description as unc_cha_rxc_ismq0_reject above.
unc_cha_rxc_ismq1_reject  (uncore cache, event=0x25)
  ISMQ Rejects, by reason; only the ANY0 (umask=1) and HA (umask=2) sub-events are defined. Same description as unc_cha_rxc_ismq0_reject above.

unc_cha_rxc_ismq1_retry  (uncore cache, event=0x2d)
  ISMQ Retries; only the ANY0 (umask=1) and HA (umask=2) sub-events are defined. Same description as unc_cha_rxc_ismq0_reject above.
unc_cha_rxc_occupancy  (uncore cache, event=0x11)
  Ingress (from CMS) Occupancy: counts the number of entries in the specified ingress queue in each cycle.
    .irq  umask=1     IRQ
    .ipq  umask=4     IPQ
    .rrq  umask=0x40  RRQ
    .wbq  umask=0x80  WBQ

unc_cha_rxc_other0_retry  (uncore cache, event=0x2e)
  Other Retries, with the same message-class breakdown as unc_cha_rxc_ipq0_reject above. Retry-queue inserts of transactions that were already in another retry queue (the sub-events encode the reason for the next reject).

unc_cha_rxc_other1_retry  (uncore cache, event=0x2f)
  Other Retries, with the same reject-reason breakdown as unc_cha_rxc_ipq1_reject above, and the same description as unc_cha_rxc_other0_retry.
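The occupancy and insert counts combine via Little's law: average time spent in a queue equals average occupancy divided by arrival rate, so OCCUPANCY.IRQ / INSERTS.IRQ, collected over the same interval, estimates the mean IRQ residency in uncore clocks. A sketch with illustrative numbers:

def avg_irq_latency_cycles(occupancy_sum: int, inserts: int) -> float:
    """UNC_CHA_RXC_OCCUPANCY.IRQ / UNC_CHA_RXC_INSERTS.IRQ, in uncore clocks."""
    return occupancy_sum / inserts if inserts else 0.0

# 1.2M occupancy-cycles over 50k inserts -> ~24 uncore clocks per request.
print(avg_irq_latency_cycles(1_200_000, 50_000))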
unc_cha_rxc_prq0_reject  (uncore cache, event=0x20)
  Ingress (from CMS) Request Queue (PRQ) Rejects, with the same message-class breakdown as unc_cha_rxc_ipq0_reject above.

unc_cha_rxc_prq1_reject  (uncore cache, event=0x21)
  Ingress (from CMS) Request Queue (PRQ) Rejects, with the same reject-reason breakdown as unc_cha_rxc_ipq1_reject above.

unc_cha_rxc_req_q0_retry  (uncore cache, event=0x2a)
  Request Queue Retries, with the same message-class breakdown as unc_cha_rxc_ipq0_reject above. REQUESTQ includes the IRQ, PRQ, IPQ, RRQ and WBQ (everything except the ISMQ).
unc_cha_rxc_req_q1_retry  (uncore cache, event=0x2b)
  Request Queue Retries, with the same reject-reason breakdown as unc_cha_rxc_ipq1_reject above. REQUESTQ includes the IRQ, PRQ, IPQ, RRQ and WBQ (everything except the ISMQ).

unc_cha_rxc_rrq0_reject  (uncore cache, event=0x26)
  RRQ Rejects, with the same message-class breakdown as unc_cha_rxc_ipq0_reject above. Number of times a transaction flowing through the RRQ (Remote Response Queue) had to retry.
unc_cha_rxc_rrq1_reject  (uncore cache, event=0x27)
  RRQ Rejects, with the same reject-reason breakdown as unc_cha_rxc_ipq1_reject above. Number of times a transaction flowing through the RRQ (Remote Response Queue) had to retry.

unc_cha_rxc_wbq0_reject  (uncore cache, event=0x28)
  WBQ Rejects, with the same message-class breakdown as unc_cha_rxc_ipq0_reject above. Number of times a transaction flowing through the WBQ (Writeback Queue) had to retry.

unc_cha_rxc_wbq1_reject  (uncore cache, event=0x29)
  WBQ Rejects, with the same reject-reason breakdown as unc_cha_rxc_ipq1_reject above. Number of times a transaction flowing through the WBQ (Writeback Queue) had to retry.
unc_cha_rxr_busy_starved  (uncore cache, event=0xb4)
  Transgress Injection Starvation. Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time, in this case because a message from the other queue has higher priority.
    .ad_bnc  umask=1     AD - Bounce
    .bl_bnc  umask=4     BL - Bounce
    .ad_crd  umask=0x10  AD - Credit
    .bl_crd  umask=0x40  BL - Credit

unc_cha_rxr_bypass  (uncore cache, event=0xb2)
  Transgress Ingress Bypass: number of packets bypassing the CMS Ingress.
    .ad_bnc  umask=1     AD - Bounce
    .ak_bnc  umask=2     AK - Bounce
    .bl_bnc  umask=4     BL - Bounce
    .iv_bnc  umask=8     IV - Bounce
    .ad_crd  umask=0x10  AD - Credit
    .bl_crd  umask=0x40  BL - Credit

unc_cha_rxr_crd_starved  (uncore cache, event=0xb3)
  Transgress Injection Starvation. Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time, in this case because the Ingress is unable to forward to the Egress due to a lack of credit.
    .ad_bnc  umask=1     AD - Bounce
    .ak_bnc  umask=2     AK - Bounce
    .bl_bnc  umask=4     BL - Bounce
    .iv_bnc  umask=8     IV - Bounce
    .ad_crd  umask=0x10  AD - Credit
    .bl_crd  umask=0x40  BL - Credit
    .ifv     umask=0x80  IFV - Credit
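Since these starvation events count cycles, they are most meaningful as a fraction of total uncore cycles. The denominator below assumes the CHA clockticks event (unc_cha_clockticks, listed elsewhere in this table); the numbers are illustrative:

def starved_fraction(crd_starved_cycles: int, cha_clockticks: int) -> float:
    """Share of uncore cycles the CMS Ingress spent credit-starved."""
    return crd_starved_cycles / cha_clockticks if cha_clockticks else 0.0

# 3M starved cycles out of 2G clockticks -> 0.15% of cycles starved.
print(f"{starved_fraction(3_000_000, 2_000_000_000):.2%}")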
unc_cha_rxr_inserts  (uncore cache, event=0xb1)
  Transgress Ingress Allocations: number of allocations into the CMS Ingress. The Ingress is used to queue up requests received from the mesh.
    .ad_bnc  umask=1     AD - Bounce
    .ak_bnc  umask=2     AK - Bounce
    .bl_bnc  umask=4     BL - Bounce
    .iv_bnc  umask=8     IV - Bounce
    .ad_crd  umask=0x10  AD - Credit
    .bl_crd  umask=0x40  BL - Credit

unc_cha_rxr_occupancy  (uncore cache, event=0xb0)
  Transgress Ingress Occupancy: occupancy event for the Ingress buffers in the CMS. The Ingress is used to queue up requests received from the mesh.
    .ad_bnc  umask=1     AD - Bounce
    .ak_bnc  umask=2     AK - Bounce
    .bl_bnc  umask=4     BL - Bounce
    .iv_bnc  umask=8     IV - Bounce
    .ad_crd  umask=0x10  AD - Credit
    .bl_crd  umask=0x40  BL - Credit

unc_cha_sf_eviction  (uncore cache, event=0x3d)
  Snoop filter capacity evictions, by the state of the evicted entry. Counts snoop filter capacity evictions for entries tracking modified, exclusive or shared lines in the cores' caches. Capacity evictions occur when the snoop filter is full and evicts an existing entry to track a new entry. Does not count clean evictions, such as when a core's cache replaces a tracked cacheline with a new cacheline.
    .m_state  umask=1  M-state entries
    .e_state  umask=2  E-state entries
    .s_state  umask=4  S-state entries
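A quick way to gauge snoop-filter pressure from the three eviction sub-events: sum them into an evictions-per-second rate; sustained high rates suggest the cores' combined footprint exceeds what the snoop filter can track. The sample counts below are made up:

def sf_evictions_per_s(m: int, e: int, s: int, seconds: float) -> float:
    """Total snoop filter capacity evictions per second across M/E/S entries."""
    return (m + e + s) / seconds

print(f"{sf_evictions_per_s(500_000, 2_000_000, 1_500_000, 1.0):,.0f} evictions/s")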
unc_cha_snoops_sent  (uncore cache, event=0x51)
  Snoops Sent: counts the number of snoops issued by the HA.
    .all            umask=1     All snoops
    .local          umask=4     Broadcast or directed snoops issued per request, for requests from the local socket only
    .remote         umask=8     Broadcast or directed snoops issued per request, for requests from the remote socket only
    .bcst_local     umask=0x10  Broadcast snoops, for requests from local sockets only
    .bcst_remote    umask=0x20  Broadcast snoops, for requests from remote sockets only
    .direct_local   umask=0x40  Directed snoops, for requests from local sockets only
    .direct_remote  umask=0x80  Directed snoops, for requests from remote sockets only

unc_cha_snoop_resp  (uncore cache, event=0x5c)
  Snoop Responses Received. Whenever snoops are issued, one or more snoop responses will be returned depending on the topology of the system. In systems larger than 2S, when multiple snoops are returned this counts all the snoops that are received; for example, if 3 snoops were issued and returned RspI, RspS and RspSFwd, each of those sub-events would increment by 1.
    .rspcnflcts  umask=0x40  RspCnflct*: returned when a snoop finds an existing outstanding transaction in a remote caching agent. This triggers the conflict-resolution hardware and covers both RspCnflct and RspCnflctWbI.
    .rspfwd      umask=0x80  RspFwd to a CA request: only possible for RdCur when a snoop hits M/E in a remote caching agent, which directly forwards data to the requestor without changing the requestor's cache line state.
    .rspi        umask=1     RspI: the remote cache does not have the data, or the remote cache silently evicted the data (such as when an RFO hits non-modified data).
    .rsps        umask=2     RspS: a remote cache has the data but is not forwarding it. This lets the requesting socket know that it cannot allocate the data in E state; no data is sent with RspS.
    .rspifwd     umask=4     RspIFwd: a remote caching agent forwarded the data and the requesting agent is able to acquire the data in E (Exclusive) or M (Modified) state. Commonly returned with RFO (read for ownership) transactions; the snoop could have hit a cacheline in the M, E or F state.
    .rspsfwd     umask=8     RspSFwd: a remote caching agent forwarded the data but held on to its current copy. Common for data and code reads that hit in a remote socket in E (Exclusive) or F (Forward) state.
    .rsp_wbwb    umask=0x10  Rsp*WB: the data was written back to its home. Returned when a non-RFO request hits a cacheline in the Modified state; the cache can downgrade the cacheline to S (Shared) or I (Invalid) depending on how the system has been configured. Also sent when a cache requests E (Exclusive) ownership of a cache line without receiving data, because the cache must acquire ownership.
    .rsp_fwd_wb  umask=0x20  Rsp*Fwd*WB: the data was written back to its home socket and the cacheline was simultaneously forwarded to the requestor socket. Only used in systems with 4 or more sockets, when a snoop HITMs in a remote caching agent that directly forwards data to the requestor while returning data to the home to be written back to memory.
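The response mix gives a rough cross-socket sharing signal: responses that forwarded data (RspIFwd, RspSFwd) indicate lines actually supplied by a remote cache, versus plain RspI/RspS misses. A sketch; the dict keys are just illustrative labels for the sub-event counts:

def forward_share(resp: dict) -> float:
    """Share of snoop responses in which a remote cache forwarded the line."""
    fwd = resp.get("rspifwd", 0) + resp.get("rspsfwd", 0)
    total = sum(resp.values())
    return fwd / total if total else 0.0

print(forward_share({"rspi": 700, "rsps": 100, "rspifwd": 150, "rspsfwd": 50}))
# -> 0.2: one in five snoop responses forwarded data from a remote cache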
unc_cha_snoop_resp_local  (uncore cache, event=0x5d)
  Snoop Responses Received for Local Requests: the same response types as unc_cha_snoop_resp above, filtered to snoop responses for local CA requests.
    .rspi        umask=1     RspI (see above)
    .rsps        umask=2     RspS (see above)
    .rspifwd     umask=4     RspIFwd; can be either a HitM or a HitFE
    .rspsfwd     umask=8     RspSFwd (see above)
    .rsp_wb      umask=0x10  RspIWB or RspSWB: returned when a non-RFO request hits in M state. Data and code reads can return either RspIWB or RspSWB depending on how the system has been configured. InvItoE transactions will also return RspIWB because they must acquire ownership.
    .rsp_fwd_wb  umask=0x20  Rsp*Fwd*WB; only used in 4S systems (see above)
    .rspcnflct   umask=0x40  RspCnflct and RspCnflctWbI (see above)
    .rspfwd      umask=0x80  RspFwd (see above)
unc_cha_stall_no_txr_horz_crd_ad_ag0 (uncore cache, event=0xd0) -- Stall on No AD Agent0 Transgress Credits. Number of cycles the AD Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, counted per transgress:
  .tgr0 through .tgr5 (umask=0x1, 0x2, 0x4, 0x8, 0x10, 0x20 respectively).

unc_cha_stall_no_txr_horz_crd_ad_ag1 (uncore cache, event=0xd2) -- Stall on No AD Agent1 Transgress Credits. Same definition for the AD Agent 1 Egress Buffer:
  .tgr0 through .tgr5 (umask=0x1, 0x2, 0x4, 0x8, 0x10, 0x20 respectively).

unc_cha_stall_no_txr_horz_crd_bl_ag0 (uncore cache, event=0xd4) -- Stall on No BL Agent0 Transgress Credits. Same definition for the BL Agent 0 Egress Buffer:
  .tgr0 through .tgr5 (umask=0x1, 0x2, 0x4, 0x8, 0x10, 0x20 respectively).

unc_cha_stall_no_txr_horz_crd_bl_ag1 (uncore cache, event=0xd6) -- Stall on No BL Agent1 Transgress Credits. Same definition for the BL Agent 1 Egress Buffer:
  .tgr0 through .tgr5 (umask=0x1, 0x2, 0x4, 0x8, 0x10, 0x20 respectively).
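The stall events select one transgress each via a one-hot umask bit (tgr0=0x1 up through tgr5=0x20), so the encoding can be generated rather than looked up. A small sketch, where the helper name and structure are mine, not part of the event table:

```python
# Sketch: the stall_no_txr_horz_crd_* sub-events select transgress N with
# a one-hot umask bit (tgr0=0x1 ... tgr5=0x20). This hypothetical helper
# builds the event/umask encoding for a given agent and transgress.
EVENT_CODE = {"ad_ag0": 0xD0, "ad_ag1": 0xD2, "bl_ag0": 0xD4, "bl_ag1": 0xD6}

def stall_encoding(agent: str, tgr: int) -> str:
    assert 0 <= tgr <= 5, "only transgresses 0-5 are defined"
    return f"event={EVENT_CODE[agent]:#x},umask={1 << tgr:#x}"

print(stall_encoding("bl_ag0", 4))  # -> event=0xd4,umask=0x10
```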
unc_cha_tor_inserts (uncore cache, event=0x35) -- TOR Inserts. Unless noted otherwise, each sub-event counts the number of entries successfully inserted into the TOR that match the qualifications specified by that sub-event. The opcode-filtered variants (those with a config1 value) do not include addressless requests such as locks and interrupts.
  .all (umask=0xff): All.
  .all_hit (umask=0x15): Hits from Local.
  .all_io_ia (umask=0x35): All from Local iA and IO -- all locally initiated requests.
  .all_miss (umask=0x25): Misses from Local.
  .evict (umask=0x2): SF/LLC Evictions -- TOR allocation occurred as a result of SF/LLC evictions (came from the ISMQ).
  .hit (umask=0x10): Hit (not a miss) -- a hit is defined as "not a miss" (see .miss below); for any request allocated into the TOR, exactly one of HIT or MISS is true.
  .ia (umask=0x31): All from Local iA -- all locally initiated requests from iA cores.
  .ia_hit (umask=0x11): Hits from Local iA.
  .ia_hit_crd (umask=0x11, config1=0x4023300000000): CRds issued by iA cores that hit the LLC.
  .ia_hit_drd (umask=0x11, config1=0x4043300000000): DRds issued by iA cores that hit the LLC.
  .ia_hit_llcprefcrd (umask=0x11, config1=0x4b23300000000): LlcPrefCRDs issued by iA cores that hit the LLC.
  .ia_hit_llcprefdrd (umask=0x11, config1=0x4b43300000000): LlcPrefDRDs issued by iA cores that hit the LLC.
  .ia_hit_llcprefrfo (umask=0x11, config1=0x4b03300000000): LLCPrefRFOs issued by iA cores that hit the LLC.
  .ia_hit_rfo (umask=0x11, config1=0x4003300000000): RFOs issued by iA cores that hit the LLC.
  .ia_miss (umask=0x21): All requests from iA cores that missed the LLC.
  .ia_miss_crd (umask=0x21, config1=0x4023300000000): CRds issued by iA cores that missed the LLC.
  .ia_miss_drd (umask=0x21, config1=0x4043300000000): DRds issued by iA cores that missed the LLC.
  .ia_miss_llcprefcrd (umask=0x21, config1=0x4b23300000000): LlcPrefCRDs issued by iA cores that missed the LLC.
  .ia_miss_llcprefdrd (umask=0x21, config1=0x4b43300000000): LlcPrefDRDs issued by iA cores that missed the LLC.
  .ia_miss_llcprefrfo (umask=0x21, config1=0x4b03300000000): LLCPrefRFOs issued by iA cores that missed the LLC.
  .ia_miss_rfo (umask=0x21, config1=0x4003300000000): RFOs issued by iA cores that missed the LLC.
  .io (umask=0x34): All from Local IO -- all locally generated IO traffic.
  .io_hit (umask=0x14): Hits from Local IO.
  .io_miss (umask=0x24): Misses from Local IO.
  .io_miss_itom (umask=0x24, config1=0x4903300000000): ItoM misses from Local IO. An ItoM request is used by IIO to request a data write without first reading the data for ownership.
  .io_miss_rdcur (umask=0x24, config1=0x43c3300000000): RdCur misses from Local IO. A RdCur request is used by IIO to read data without changing state.
  .io_miss_rfo (umask=0x24, config1=0x4003300000000): RFO misses from Local IO. A read for ownership (RFO) requests a cache line to be cached in E state with the intent to modify.
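One way these opcode-filtered encodings might be used is through perf's raw-PMU event syntax, passing event, umask, and config1 directly. A hedged sketch: the CHA PMU instance name ("uncore_cha_0") is an assumption and should be checked against /sys/bus/event_source/devices on the target machine.

```python
# Sketch: compose a raw perf event string for an opcode-filtered TOR
# event from the encodings above. The PMU instance name is an assumed
# example; real systems expose one uncore_cha_<N> PMU per CHA slice.
import subprocess

pmu = "uncore_cha_0"  # assumed instance name -- verify under sysfs
spec = f"{pmu}/event=0x35,umask=0x21,config1=0x4043300000000/"  # ia_miss_drd

# Equivalent to running: perf stat -a -e <spec> -- sleep 1
cmd = ["perf", "stat", "-a", "-e", spec, "--", "sleep", "1"]
print("would run:", " ".join(cmd))
# subprocess.run(cmd, check=True)  # uncomment on a machine with this PMU
```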
  .ipq (umask=0x8): IPQ.
  .ipq_hit (umask=0x18): deprecated.
  .ipq_miss (umask=0x28): deprecated.
  .irq (umask=0x1): IRQ.
  .loc_all (umask=0x37): deprecated.
  .miss (umask=0x20): Miss. A miss is any transaction from the IRQ, PRQ, RRQ, IPQ or (in the victim case) the ISMQ that required the CHA to spawn a new UPI/SMI3 request on the UPI fabric, including UPI snoops and/or any RD/WR to a local memory controller when the CHA is the home node. In short, if the LLC/SF/MLC complex could not service the request without involving another agent, it is a miss; if only IDI snoops were required, it is not a miss (the SF/MLC complex serviced the request on its own).
  .prq (umask=0x4): PRQ.
  .rem_all (umask=0x30): deprecated.
  .rrq_hit (umask=0x50): deprecated.
  .rrq_miss (umask=0x60): deprecated.
  .wbq_hit (umask=0x90): deprecated.
  .wbq_miss (umask=0xa0): deprecated.
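Since .ia and .ia_miss count the same population at insert time, their ratio gives a straightforward demand-miss rate for core-initiated requests. A minimal sketch with placeholder counts; aggregating across all CHA instances of a socket is assumed:

```python
# Sketch: an LLC miss ratio for core-initiated requests, using the
# unc_cha_tor_inserts sub-events above. Counts are placeholders for
# values a collector would read, summed over all CHAs of a socket.
ia_inserts = 2_400_000   # unc_cha_tor_inserts.ia      (umask=0x31)
ia_misses = 310_000      # unc_cha_tor_inserts.ia_miss (umask=0x21)

miss_ratio = ia_misses / ia_inserts
print(f"iA-core TOR miss ratio: {miss_ratio:.1%}")
```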
unc_cha_tor_occupancy (uncore cache, event=0x36) -- TOR Occupancy. For each cycle, each sub-event accumulates the number of valid TOR entries that match the qualifications specified by that sub-event. The opcode-filtered variants (config1) do not include addressless requests such as locks and interrupts. Only a subset of the sub-event combinations are valid; sub-events that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set (for example, to count DRD Local Misses, select MISS_OPC_MATCH and set Cn_MSR_PMON_BOX_FILTER.opc to DRD, 0x182).
  .all (umask=0xff): All.
  .all_from_loc (umask=0x37): All from Local -- all locally initiated requests.
  .all_hit (umask=0x17): Hits from Local.
  .all_miss (umask=0x27): Misses from Local.
  .evict (umask=0x2): SF/LLC Evictions -- TOR allocation occurred as a result of SF/LLC evictions (came from the ISMQ).
  .hit (umask=0x10): Hit (not a miss) -- see .miss below; for any request allocated into the TOR, exactly one of HIT or MISS is true.
  .ia (umask=0x31): All from Local iA -- all locally initiated requests from iA cores.
  .ia_hit (umask=0x11): Hits from Local iA.
  .ia_hit_crd (umask=0x11, config1=0x4023300000000): CRds issued by iA cores that hit the LLC.
  .ia_hit_drd (umask=0x11, config1=0x4043300000000): DRds issued by iA cores that hit the LLC.
  .ia_hit_llcprefcrd (umask=0x11, config1=0x4b23300000000): LlcPrefCRDs issued by iA cores that hit the LLC.
  .ia_hit_llcprefdrd (umask=0x11, config1=0x4b43300000000): LlcPrefDRDs issued by iA cores that hit the LLC.
  .ia_hit_llcprefrfo (umask=0x11, config1=0x4b03300000000): LLCPrefRFOs issued by iA cores that hit the LLC.
  .ia_hit_rfo (umask=0x11, config1=0x4003300000000): RFOs issued by iA cores that hit the LLC.
  .ia_miss (umask=0x21): Misses from Local iA.
  .ia_miss_crd (umask=0x21, config1=0x4023300000000): CRds issued by iA cores that missed the LLC.
  .ia_miss_drd (umask=0x21, config1=0x4043300000000): DRds issued by iA cores that missed the LLC.
  .ia_miss_llcprefcrd (umask=0x21, config1=0x4b23300000000): LlcPrefCRDs issued by iA cores that missed the LLC.
  .ia_miss_llcprefdrd (umask=0x21, config1=0x4b43300000000): LlcPrefDRDs issued by iA cores that missed the LLC.
  .ia_miss_llcprefrfo (umask=0x21, config1=0x4b03300000000): LLCPrefRFOs issued by iA cores that missed the LLC.
  .ia_miss_rfo (umask=0x21, config1=0x4003300000000): RFOs issued by iA cores that missed the LLC.
  .io (umask=0x34): All from Local IO -- all locally generated IO traffic.
  .io_hit (umask=0x14): Hits from Local IO.
  .io_miss (umask=0x24): Misses from Local IO.
  .io_miss_itom (umask=0x24, config1=0x4903300000000): ItoM misses from Local IO. An ItoM is used by IIO to request a data write without first reading the data for ownership.
  .io_miss_rdcur (umask=0x24, config1=0x43c3300000000): RdCur misses from Local IO. A RdCur request is used by IIO to read data without changing state.
  .io_miss_rfo (umask=0x24, config1=0x4003300000000): RFO misses from Local IO. A read for ownership (RFO) requests data to be cached in E state with the intent to modify.
  .ipq (umask=0x8): IPQ.
  .ipq_hit (umask=0x18): deprecated.
  .ipq_miss (umask=0x28): deprecated.
  .irq (umask=0x1): IRQ.
  .loc_all (umask=0x37): deprecated -- refer to the new event UNC_CHA_TOR_OCCUPANCY.ALL_FROM_LOC.
  .miss (umask=0x20): Miss -- same definition as unc_cha_tor_inserts.miss above.
  .prq (umask=0x4): PRQ.
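Because occupancy accumulates valid entries per cycle while inserts counts allocations, dividing the two gives an average residency time (Little's law). This is a standard derivation for paired occupancy/insert events, not something the event table states explicitly; the counts and uncore frequency below are placeholders.

```python
# Sketch: average TOR residency for iA-core LLC misses via Little's law
# (occupancy / inserts). Both events must use the same umask/config1
# qualifications for the ratio to be meaningful.
occupancy = 96_000_000   # unc_cha_tor_occupancy.ia_miss (event=0x36,umask=0x21)
inserts = 310_000        # unc_cha_tor_inserts.ia_miss   (event=0x35,umask=0x21)
uncore_ghz = 2.4         # assumed uncore clock, GHz

avg_cycles = occupancy / inserts
print(f"avg miss residency: {avg_cycles:.0f} uncore cycles "
      f"(~{avg_cycles / uncore_ghz:.0f} ns at {uncore_ghz} GHz)")
```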
unc_cha_txr_horz_ads_used (uncore cache, event=0x9d) -- CMS Horizontal ADS Used. Number of packets using the Horizontal Anti-Deadlock Slot, broken down by ring type and CMS agent:
  .ad_bnc (umask=0x1) AD - Bounce; .ak_bnc (umask=0x2) AK - Bounce; .bl_bnc (umask=0x4) BL - Bounce; .ad_crd (umask=0x10) AD - Credit; .bl_crd (umask=0x40) BL - Credit.

unc_cha_txr_horz_bypass (uncore cache, event=0x9f) -- CMS Horizontal Bypass Used. Number of packets bypassing the Horizontal Egress, broken down by ring type and CMS agent:
  .ad_bnc (umask=0x1); .ak_bnc (umask=0x2); .bl_bnc (umask=0x4); .iv_bnc (umask=0x8); .ad_crd (umask=0x10); .bl_crd (umask=0x40).

unc_cha_txr_horz_cycles_full (uncore cache, event=0x96) -- Cycles CMS Horizontal Egress Queue is Full. Cycles the Transgress buffers in the Common Mesh Stop are full; the egress queues up requests destined for the Horizontal Ring on the Mesh:
  .ad_bnc (umask=0x1); .ak_bnc (umask=0x2); .bl_bnc (umask=0x4); .iv_bnc (umask=0x8); .ad_crd (umask=0x10); .bl_crd (umask=0x40).

unc_cha_txr_horz_cycles_ne (uncore cache, event=0x97) -- Cycles CMS Horizontal Egress Queue is Not Empty. Cycles the Transgress buffers in the Common Mesh Stop are not empty; same breakdown:
  .ad_bnc (umask=0x1); .ak_bnc (umask=0x2); .bl_bnc (umask=0x4); .iv_bnc (umask=0x8); .ad_crd (umask=0x10); .bl_crd (umask=0x40).

unc_cha_txr_horz_inserts (uncore cache, event=0x95) -- CMS Horizontal Egress Inserts. Number of allocations into the Transgress buffers in the Common Mesh Stop:
  .ad_bnc (umask=0x1); .ak_bnc (umask=0x2); .bl_bnc (umask=0x4); .iv_bnc (umask=0x8); .ad_crd (umask=0x10); .bl_crd (umask=0x40).

unc_cha_txr_horz_nack (uncore cache, event=0x99) -- CMS Horizontal Egress NACKs. Counts the number of Egress packets NACK'ed onto the Horizontal Ring:
  .ad_bnc (umask=0x1); .ak_bnc (umask=0x2); .bl_bnc (umask=0x4); .iv_bnc (umask=0x8); .ad_crd (umask=0x20); .bl_crd (umask=0x40).

unc_cha_txr_horz_occupancy (uncore cache, event=0x94) -- CMS Horizontal Egress Occupancy. Occupancy event for the Transgress buffers in the Common Mesh Stop:
  .ad_bnc (umask=0x1); .ak_bnc (umask=0x2); .bl_bnc (umask=0x4); .iv_bnc (umask=0x8); .ad_crd (umask=0x10); .bl_crd (umask=0x40).

unc_cha_txr_horz_starved (uncore cache, event=0x9b) -- CMS Horizontal Egress Injection Starvation. Counts injection starvation, triggered when the CMS Transgress buffer cannot send a transaction onto the Horizontal ring for a long period of time:
  .ad_bnc (umask=0x1); .ak_bnc (umask=0x2); .bl_bnc (umask=0x4); .iv_bnc (umask=0x8).

unc_cha_txr_vert_ads_used (uncore cache, event=0x9c) -- CMS Vertical ADS Used. Number of packets using the Vertical Anti-Deadlock Slot, broken down by ring type and CMS agent:
  .ad_ag0 (umask=0x1); .ak_ag0 (umask=0x2); .bl_ag0 (umask=0x4); .ad_ag1 (umask=0x10); .ak_ag1 (umask=0x20); .bl_ag1 (umask=0x40).
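Bypass (event=0x9f) and inserts (event=0x95) together suggest one rough derived metric: the share of horizontal-egress traffic that used the bypass instead of being queued. Treating the two counts as disjoint populations is my assumption, not something the event descriptions guarantee; counts below are placeholders.

```python
# Sketch: rough share of horizontal-egress traffic that bypassed the
# queue, per ring. Assumes bypassed packets and queued inserts are
# disjoint; counts are made-up placeholders.
bypass = {"ad_bnc": 8_000, "ak_bnc": 2_500, "bl_bnc": 4_100}      # event=0x9f
inserts = {"ad_bnc": 52_000, "ak_bnc": 31_000, "bl_bnc": 44_000}  # event=0x95

for ring in bypass:
    share = bypass[ring] / (bypass[ring] + inserts[ring])
    print(f"{ring}: bypass share {share:.1%}")
```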
unc_cha_txr_vert_bypass (uncore cache, event=0x9e) -- CMS Vertical Bypass Used. Number of packets bypassing the Vertical Egress, broken down by ring type and CMS agent:
  .ad_ag0 (umask=0x1); .ak_ag0 (umask=0x2); .bl_ag0 (umask=0x4); .iv (umask=0x8); .ad_ag1 (umask=0x10); .ak_ag1 (umask=0x20); .bl_ag1 (umask=0x40).

unc_cha_txr_vert_cycles_full (uncore cache, event=0x92) -- Cycles CMS Vertical Egress Queue Is Full. Number of cycles the Common Mesh Stop Egress was full; the Egress queues up requests destined for the Vertical Ring on the Mesh:
  .ad_ag0 (umask=0x1): Agent 0 transactions destined for the AD ring. Some examples include outbound requests, snoop requests, and snoop responses.
  .ak_ag0 (umask=0x2): Agent 0 transactions destined for the AK ring, commonly credit returns and GO responses.
  .bl_ag0 (umask=0x4): Agent 0 transactions destined for the BL ring, commonly data sent from the cache to various destinations.
  .iv (umask=0x8): Agent 0 transactions destined for the IV ring, commonly snoops to the cores.
  .ad_ag1 (umask=0x10): Agent 1 transactions destined for the AD ring, commonly outbound requests.
  .ak_ag1 (umask=0x20): Agent 1 transactions destined for the AK ring.
  .bl_ag1 (umask=0x40): Agent 1 transactions destined for the BL ring, commonly writeback data transferred to the cache.

unc_cha_txr_vert_cycles_ne (uncore cache, event=0x93) -- Cycles CMS Vertical Egress Queue Is Not Empty. Number of cycles the Common Mesh Stop Egress was not empty; same per-ring/per-agent breakdown as above:
  .ad_ag0 (umask=0x1); .ak_ag0 (umask=0x2); .bl_ag0 (umask=0x4); .iv (umask=0x8); .ad_ag1 (umask=0x10); .ak_ag1 (umask=0x20); .bl_ag1 (umask=0x40).
unc_cha_txr_vert_inserts (uncore cache, event=0x91) -- CMS Vert Egress Allocations. Number of allocations into the Common Mesh Stop Egress; same per-ring/per-agent breakdown as above:
  .ad_ag0 (umask=0x1); .ak_ag0 (umask=0x2); .bl_ag0 (umask=0x4); .iv (umask=0x8); .ad_ag1 (umask=0x10); .ak_ag1 (umask=0x20); .bl_ag1 (umask=0x40).

unc_cha_txr_vert_nack (uncore cache, event=0x98) -- CMS Vertical Egress NACKs. Counts the number of Egress packets NACK'ed onto the Vertical Ring:
  .ad_ag0 (umask=0x1); .ak_ag0 (umask=0x2); .bl_ag0 (umask=0x4); .iv (umask=0x8); .ad_ag1 (umask=0x10); .ak_ag1 (umask=0x20); .bl_ag1 (umask=0x40).

unc_cha_txr_vert_occupancy (uncore cache, event=0x90) -- CMS Vert Egress Occupancy. Occupancy event for the Egress buffers in the Common Mesh Stop; same per-ring/per-agent breakdown as above:
  .ad_ag0 (umask=0x1); .ak_ag0 (umask=0x2); .bl_ag0 (umask=0x4); .iv (umask=0x8); .ad_ag1 (umask=0x10); .ak_ag1 (umask=0x20); .bl_ag1 (umask=0x40).

unc_cha_txr_vert_starved (uncore cache, event=0x9a) -- CMS Vertical Egress Injection Starvation. Counts injection starvation, triggered when the CMS Egress cannot send a transaction onto the Vertical ring for a long period of time:
  .ad_ag0 (umask=0x1); .ak_ag0 (umask=0x2); .bl_ag0 (umask=0x4); .iv (umask=0x8); .ad_ag1 (umask=0x10); .ak_ag1 (umask=0x20); .bl_ag1 (umask=0x40).
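Pairing NACKs (event=0x98) with allocations (event=0x91) at matching umasks gives a per-ring retry rate for the vertical egress. Reading this ratio as "injection pressure" is my interpretation, not stated by the table; counts are placeholders.

```python
# Sketch: NACK rate per vertical-egress ring/agent -- packets NACK'ed
# (event=0x98) relative to allocations (event=0x91), matching umasks.
# Counts are made-up placeholders.
nacks = {"ad_ag0": 1_200, "bl_ag0": 3_400}      # unc_cha_txr_vert_nack
allocs = {"ad_ag0": 90_000, "bl_ag0": 120_000}  # unc_cha_txr_vert_inserts

for key in nacks:
    print(f"{key}: {nacks[key] / allocs[key]:.2%} of egress allocations NACK'ed")
```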
unc_cha_upi_credits_acquired (uncore cache, event=0x38) -- UPI Ingress Credit Allocations. Counts the number of UPI credits acquired for the AD or BL ring. To send snoops, snoop responses, requests, data, etc. to the UPI agent on the ring, a credit for the UPI ingress buffer must first be acquired. This can be used with the Credit Occupancy event to calculate average credit lifetime. The event supports filtering to cover the VNA/VN0 credits and the different message classes. Note that you must select the link to monitor using the link select register, and only one link can be monitored at a time:
  .vna (umask=0x1): VNA Credits.
  .vn0 (umask=0x2): VN0 Credits.
  .ad_req (umask=0x4): AD REQ Credits.
  .ad_rsp (umask=0x8): AD RSP VN0 Credits.
  .bl_rsp (umask=0x10): BL RSP Credits.
  .bl_wb (umask=0x20): BL DRS Credits.
  .bl_ncb (umask=0x40): BL NCB Credits.
  .bl_ncs (umask=0x80): BL NCS Credits.
This stat increments by the number of credits that are available each cycle.  This can be used in conjunction with the Credit Acquired event in order to calculate average credit lifetime.  This event supports filtering for the different types of credits that are available.  Note that you must select the link that you would like to monitor using the link select register, and you can only monitor 1 link at a timeunc_cha_upi_credit_occupancy.vn0_bl_ncbuncore cacheUPI Ingress Credits In Use Cycles; BL NCB VN0 Creditsevent=0x3b,umask=0x4001Accumulates the number of UPI credits available in each cycle for either the AD or BL ring.  In order to send snoops, snoop responses, requests, data, etc to the UPI agent on the ring, it is necessary to first acquire a credit for the UPI ingress buffer.  This stat increments by the number of credits that are available each cycle.  This can be used in conjunction with the Credit Acquired event in order to calculate average credit lifetime.  This event supports filtering for the different types of credits that are available.  Note that you must select the link that you would like to monitor using the link select register, and you can only monitor 1 link at a timeunc_cha_upi_credit_occupancy.vn0_bl_ncsuncore cacheUPI Ingress Credits In Use Cycles; BL NCS VN0 Creditsevent=0x3b,umask=0x8001Accumulates the number of UPI credits available in each cycle for either the AD or BL ring.  In order to send snoops, snoop responses, requests, data, etc to the UPI agent on the ring, it is necessary to first acquire a credit for the UPI ingress buffer.  This stat increments by the number of credits that are available each cycle.  This can be used in conjunction with the Credit Acquired event in order to calculate average credit lifetime.  This event supports filtering for the different types of credits that are available.  Note that you must select the link that you would like to monitor using the link select register, and you can only monitor 1 link at a timeunc_cha_upi_credit_occupancy.vn0_bl_rspuncore cacheUPI Ingress Credits In Use Cycles; BL RSP VN0 Creditsevent=0x3b,umask=0x1001Accumulates the number of UPI credits available in each cycle for either the AD or BL ring.  In order to send snoops, snoop responses, requests, data, etc to the UPI agent on the ring, it is necessary to first acquire a credit for the UPI ingress buffer.  This stat increments by the number of credits that are available each cycle.  This can be used in conjunction with the Credit Acquired event in order to calculate average credit lifetime.  This event supports filtering for the different types of credits that are available.  Note that you must select the link that you would like to monitor using the link select register, and you can only monitor 1 link at a timeunc_cha_upi_credit_occupancy.vn0_bl_wbuncore cacheUPI Ingress Credits In Use Cycles; BL DRS VN0 Creditsevent=0x3b,umask=0x2001Accumulates the number of UPI credits available in each cycle for either the AD or BL ring.  In order to send snoops, snoop responses, requests, data, etc to the UPI agent on the ring, it is necessary to first acquire a credit for the UPI ingress buffer.  This stat increments by the number of credits that are available each cycle.  This can be used in conjunction with the Credit Acquired event in order to calculate average credit lifetime.  This event supports filtering for the different types of credits that are available.  
Note that you must select the link that you would like to monitor using the link select register, and you can only monitor 1 link at a timeunc_cha_upi_credit_occupancy.vna_aduncore cacheUPI Ingress Credits In Use Cycles; AD VNA Creditsevent=0x3b,umask=101Accumulates the number of UPI credits available in each cycle for either the AD or BL ring.  In order to send snoops, snoop responses, requests, data, etc to the UPI agent on the ring, it is necessary to first acquire a credit for the UPI ingress buffer.  This stat increments by the number of credits that are available each cycle.  This can be used in conjunction with the Credit Acquired event in order to calculate average credit lifetime.  This event supports filtering for the different types of credits that are available.  Note that you must select the link that you would like to monitor using the link select register, and you can only monitor 1 link at a timeunc_cha_upi_credit_occupancy.vna_bluncore cacheUPI Ingress Credits In Use Cycles; BL VNA Creditsevent=0x3b,umask=201Accumulates the number of UPI credits available in each cycle for either the AD or BL ring.  In order to send snoops, snoop responses, requests, data, etc to the UPI agent on the ring, it is necessary to first acquire a credit for the UPI ingress buffer.  This stat increments by the number of credits that are available each cycle.  This can be used in conjunction with the Credit Acquired event in order to calculate average credit lifetime.  This event supports filtering for the different types of credits that are available.  Note that you must select the link that you would like to monitor using the link select register, and you can only monitor 1 link at a timeunc_cha_vert_ring_ad_in_use.dn_evenuncore cacheVertical AD Ring In Use; Down and Evenevent=0xa6,umask=401Counts the number of cycles that the Vertical AD ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.  We really have two rings  -- a clockwise ring and a counter-clockwise ring.  On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring.  On the right side of the ring, this is reversed.  The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring.  In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ringunc_cha_vert_ring_ad_in_use.dn_odduncore cacheVertical AD Ring In Use; Down and Oddevent=0xa6,umask=801Counts the number of cycles that the Vertical AD ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.  We really have two rings  -- a clockwise ring and a counter-clockwise ring.  On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring.  On the right side of the ring, this is reversed.  The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring.  
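The acquired/occupancy pairing above is the usual counter recipe for a mean lifetime: divide the summed occupancy by the number of acquisitions over the same interval. A minimal sketch of that arithmetic, assuming the two totals were collected for the same credit type on the same UPI link (the sample numbers are made up):

```python
def avg_credit_lifetime(occupancy_sum: int, credits_acquired: int) -> float:
    """Mean UPI credit lifetime in uncore cycles.

    occupancy_sum    -- total of unc_cha_upi_credit_occupancy.<type>
                        (increments each cycle by the credit count)
    credits_acquired -- total of unc_cha_upi_credits_acquired.<type>
                        over the same interval and the same link
    """
    return occupancy_sum / credits_acquired if credits_acquired else 0.0

# Hypothetical readings for the VNA/AD pool, illustration only.
print(avg_credit_lifetime(occupancy_sum=1_200_000, credits_acquired=40_000))
# -> 30.0 cycles held per credit, on average
```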
unc_cha_vert_ring_ad_in_use.* (uncore cache): Vertical AD Ring In Use
  event=0xa6; umasks: .up_even=0x01, .up_odd=0x02, .dn_even=0x04, .dn_odd=0x08
  Counts the number of cycles that the Vertical AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. There are really two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring; on the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the second half are on the right side. In other words (for example), in a 4c part, CBo 0 UP AD is NOT the same ring as CBo 2 UP AD, because they are on opposite sides of the ring.

unc_cha_vert_ring_ak_in_use.* (uncore cache): Vertical AK Ring In Use
  event=0xa8; umasks: .up_even=0x01, .up_odd=0x02, .dn_even=0x04, .dn_odd=0x08
  Counts the number of cycles that the Vertical AK ring is being used at this ring stop, under the same counting rules and ring topology described for the Vertical AD ring above.

unc_cha_vert_ring_bl_in_use.* (uncore cache): Vertical BL Ring in Use
  event=0xaa; umasks: .up_even=0x01, .up_odd=0x02, .dn_even=0x04, .dn_odd=0x08
  Counts the number of cycles that the Vertical BL ring is being used at this ring stop, under the same counting rules and ring topology described for the Vertical AD ring above.

unc_cha_vert_ring_iv_in_use.* (uncore cache): Vertical IV Ring in Use
  event=0xac; umasks: .up=0x01, .dn=0x04
  Counts the number of cycles that the Vertical IV ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. There is only one IV ring, so to monitor the Even ring select both UP_EVEN and DN_EVEN, and to monitor the Odd ring select both UP_ODD and DN_ODD.

unc_cha_wb_push_mtoi.llc (uncore cache): WbPushMtoI; Pushed to LLC
  event=0x56,umask=0x01
  Counts the number of times the CHA received a WbPushMtoI and was able to push it to the LLC.

unc_cha_wb_push_mtoi.mem (uncore cache): WbPushMtoI; Pushed to Memory
  event=0x56,umask=0x02
  Counts the number of times the CHA received a WbPushMtoI and was unable to push it to the LLC (hence pushed it to memory).
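Because the direction and parity variants are independent umask bits on a single event code, a whole-ring view is just the OR of those bits. A small sketch of composing the raw perf event strings; the "uncore_cha_0" PMU instance name is an assumption that varies by kernel and platform:

```python
# Compose raw perf event strings for the vertical ring in-use events.
# Event codes and umask bits come from the listing above; the PMU name
# "uncore_cha_0" is hypothetical -- check /sys/bus/event_source/devices.
UP_EVEN, UP_ODD, DN_EVEN, DN_ODD = 0x01, 0x02, 0x04, 0x08

def cha_event(code: int, umask: int, pmu: str = "uncore_cha_0") -> str:
    return f"{pmu}/event={code:#x},umask={umask:#x}/"

# All four slices of the Vertical AD ring (event=0xa6):
print(cha_event(0xa6, UP_EVEN | UP_ODD | DN_EVEN | DN_ODD))  # umask=0xf

# The IV ring is a single physical ring: combine its UP (0x01) and
# DN (0x04) bits on event=0xac, as the description above recommends.
print(cha_event(0xac, 0x01 | 0x04))                          # umask=0x5
```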
unc_cha_write_no_credits.* (uncore cache): CHA iMC CHNx WRITE Credits Empty
  event=0x5a, one umask bit per memory controller:
    .mc0_smi0   umask=0x01  MC0_SMI0  (filter for memory controller 0 only)
    .mc1_smi1   umask=0x02  MC1_SMI1  (memory controller 1 only)
    .edc0_smi2  umask=0x04  EDC0_SMI2 (memory controller 2 only)
    .edc1_smi3  umask=0x08  EDC1_SMI3 (memory controller 3 only)
    .edc2_smi4  umask=0x10  EDC2_SMI4 (memory controller 4 only)
    .edc3_smi5  umask=0x20  EDC3_SMI5 (memory controller 5 only)
  Counts the number of times when there are no credits available for sending WRITEs from the CHA into the iMC. In order to send WRITEs into the memory controller, the HA must first acquire a credit for the iMC's BL Ingress queue.

unc_cha_xsnp_resp.* (uncore cache): Core Cross Snoop Responses
  event=0x32; the umask is the OR of one initiator bit and one response bit:
    initiator: EXT=0x20 (external, i.e. from a remote node), CORE=0x40, EVICT=0x80, ANY=0xe0
    response:  RSP_HITFSE=0x01 (any to Hit F/S/E), RSPS_FWDFE=0x02 (S to Fwd F/E), RSPI_FWDFE=0x04 (I to Fwd F/E), RSPS_FWDM=0x08 (S to Fwd M), RSPI_FWDM=0x10 (I to Fwd M)
  The twenty named variants:
    .any_rsp_hitfse=0xe1    .any_rsps_fwdfe=0xe2    .any_rspi_fwdfe=0xe4    .any_rsps_fwdm=0xe8    .any_rspi_fwdm=0xf0
    .core_rsp_hitfse=0x41   .core_rsps_fwdfe=0x42   .core_rspi_fwdfe=0x44   .core_rsps_fwdm=0x48   .core_rspi_fwdm=0x50
    .evict_rsp_hitfse=0x81  .evict_rsps_fwdfe=0x82  .evict_rspi_fwdfe=0x84  .evict_rsps_fwdm=0x88   .evict_rspi_fwdm=0x90
    .ext_rsp_hitfse=0x21    .ext_rsps_fwdfe=0x22    .ext_rspi_fwdfe=0x24    .ext_rsps_fwdm=0x28     .ext_rspi_fwdm=0x30
  Counts the number of core cross snoops. Cores are snooped if the transaction looks up the cache and determines that it is necessary based on the operation type. The event can be filtered by who triggered the initial snoop(s) -- Eviction, Core, or External (i.e. from a remote node) requests -- and by the response, RspX_Fwd/HitY, where Y is the state prior to the snoop response and X is the state following.
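Given the initiator/response split above, every variant's umask can be rebuilt mechanically, which also makes a handy self-check. A short sketch using only the bit values listed above:

```python
# Rebuild unc_cha_xsnp_resp umasks from the initiator and response bits
# in the listing above, and spot-check a few against the named variants.
INITIATOR = {"ext": 0x20, "core": 0x40, "evict": 0x80, "any": 0xE0}
RESPONSE = {
    "rsp_hitfse": 0x01,  # response any, line ended in Hit F/S/E
    "rsps_fwdfe": 0x02,  # response S, forwarded F/E
    "rspi_fwdfe": 0x04,  # response I, forwarded F/E
    "rsps_fwdm":  0x08,  # response S, forwarded M
    "rspi_fwdm":  0x10,  # response I, forwarded M
}

def xsnp_umask(initiator: str, response: str) -> int:
    return INITIATOR[initiator] | RESPONSE[response]

assert xsnp_umask("any",   "rspi_fwdfe") == 0xE4  # .any_rspi_fwdfe
assert xsnp_umask("core",  "rspi_fwdm")  == 0x50  # .core_rspi_fwdm
assert xsnp_umask("evict", "rsps_fwdm")  == 0x88  # .evict_rsps_fwdm
assert xsnp_umask("ext",   "rsp_hitfse") == 0x21  # .ext_rsp_hitfse
```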
The remaining entries are deprecated aliases: each record reads "This event is deprecated" and, where a successor exists, "Refer to new event ...", together with the old encoding. The unc_c_* (former CBo) aliases:

  unc_c_clockticks     -> UNC_CHA_CLOCKTICKS          (event=0x0)
  unc_c_fast_asserted  -> UNC_CHA_FAST_ASSERTED.HORZ  (event=0xa5,umask=0x02)
  unc_c_ring_src_thrtl -> UNC_CHA_RING_SRC_THRTL      (event=0xa4)
  unc_c_llc_lookup.{any,data_read,local,remote,remote_snoop,write}
      -> UNC_CHA_LLC_LOOKUP.{ANY,DATA_READ,LOCAL,REMOTE,REMOTE_SNOOP,WRITE}
         (event=0x34, umask=0x11/0x03/0x31/0x91/0x09/0x05)
  unc_c_llc_victims.{m_state,e_state,s_state,f_state,local,remote}
      -> UNC_CHA_LLC_VICTIMS.{TOTAL_M,TOTAL_E,TOTAL_S,TOTAL_F,LOCAL_ALL,REMOTE_ALL}
         (event=0x37, umask=0x01/0x02/0x04/0x08/0x2f/0x80)
  unc_c_tor_inserts.{evict,prq,ipq,hit,miss,irq,irq_hit,irq_miss,loc_ia,loc_io,prq_hit,prq_miss}
      -> UNC_CHA_TOR_INSERTS.{EVICT,PRQ,IPQ,HIT,MISS,IA,IA_HIT,IA_MISS,IA,IO,IO_HIT,IO_MISS}
         (event=0x35, umask=0x02/0x04/0x08/0x10/0x20/0x31/0x11/0x21/0x31/0x34/0x14/0x24)
  unc_c_tor_inserts.{ipq_hit,ipq_miss,loc_all,rem_all,rrq_hit,rrq_miss,wbq_hit,wbq_miss}
      deprecated with no named replacement
      (event=0x35, umask=0x18/0x28/0x37/0x30/0x50/0x60/0x90/0xa0)
  unc_c_tor_occupancy.{evict,prq,ipq,hit,miss,irq,irq_hit,irq_miss,loc_ia,loc_io,prq_hit,prq_miss}
      -> UNC_CHA_TOR_OCCUPANCY.{EVICT,PRQ,IPQ,HIT,MISS,IA,IA_HIT,IA_MISS,IA,IO,IO_HIT,IO_MISS}
         (event=0x36, umask=0x02/0x04/0x08/0x10/0x20/0x31/0x11/0x21/0x31/0x34/0x14/0x24)
  unc_c_tor_occupancy.{ipq_hit,ipq_miss,loc_all}
      deprecated with no named replacement (event=0x36, umask=0x18/0x28/0x37)
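Since each deprecated alias maps to at most one successor, tooling that still emits the old names can translate them with a lookup table. A minimal sketch seeded with a few rows from the list above; a real table would carry every row, and None marks aliases retired without a successor:

```python
# Tiny deprecated-alias translation table, seeded from the listing above.
DEPRECATED_TO_NEW = {
    "unc_c_clockticks":          "UNC_CHA_CLOCKTICKS",
    "unc_c_llc_lookup.any":      "UNC_CHA_LLC_LOOKUP.ANY",
    "unc_c_tor_inserts.irq":     "UNC_CHA_TOR_INSERTS.IA",
    "unc_c_tor_occupancy.miss":  "UNC_CHA_TOR_OCCUPANCY.MISS",
    "unc_c_tor_inserts.ipq_hit": None,  # retired with no successor
}

def translate(name: str) -> str:
    if name in DEPRECATED_TO_NEW:
        new = DEPRECATED_TO_NEW[name]
        if new is None:
            raise ValueError(f"{name} was retired with no replacement")
        return new
    return name  # not a known deprecated alias; pass through unchanged

print(translate("unc_c_tor_inserts.irq"))  # UNC_CHA_TOR_INSERTS.IA
```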
Refer to new event UNC_CHA_AG0_AD_CRD_ACQUIRED.TGR5event=0x80,umask=0x2011unc_h_ag0_ad_crd_occupancy.tgr0uncore cacheThis event is deprecated. Refer to new event UNC_CHA_AG0_AD_CRD_OCCUPANCY.TGR0event=0x82,umask=111unc_h_ag0_ad_crd_occupancy.tgr1uncore cacheThis event is deprecated. Refer to new event UNC_CHA_AG0_AD_CRD_OCCUPANCY.TGR1event=0x82,umask=211unc_h_ag0_ad_crd_occupancy.tgr2uncore cacheThis event is deprecated. Refer to new event UNC_CHA_AG0_AD_CRD_OCCUPANCY.TGR2event=0x82,umask=411unc_h_ag0_ad_crd_occupancy.tgr3uncore cacheThis event is deprecated. Refer to new event UNC_CHA_AG0_AD_CRD_OCCUPANCY.TGR3event=0x82,umask=811unc_h_ag0_ad_crd_occupancy.tgr4uncore cacheThis event is deprecated. Refer to new event UNC_CHA_AG0_AD_CRD_OCCUPANCY.TGR4event=0x82,umask=0x1011unc_h_ag0_ad_crd_occupancy.tgr5uncore cacheThis event is deprecated. Refer to new event UNC_CHA_AG0_AD_CRD_OCCUPANCY.TGR5event=0x82,umask=0x2011unc_h_ag0_bl_crd_acquired.tgr0uncore cacheThis event is deprecated. Refer to new event UNC_CHA_AG0_BL_CRD_ACQUIRED.TGR0event=0x88,umask=111unc_h_ag0_bl_crd_acquired.tgr1uncore cacheThis event is deprecated. Refer to new event UNC_CHA_AG0_BL_CRD_ACQUIRED.TGR1event=0x88,umask=211unc_h_ag0_bl_crd_acquired.tgr2uncore cacheThis event is deprecated. Refer to new event UNC_CHA_AG0_BL_CRD_ACQUIRED.TGR2event=0x88,umask=411unc_h_ag0_bl_crd_acquired.tgr3uncore cacheThis event is deprecated. Refer to new event UNC_CHA_AG0_BL_CRD_ACQUIRED.TGR3event=0x88,umask=811unc_h_ag0_bl_crd_acquired.tgr4uncore cacheThis event is deprecated. Refer to new event UNC_CHA_AG0_BL_CRD_ACQUIRED.TGR4event=0x88,umask=0x1011unc_h_ag0_bl_crd_acquired.tgr5uncore cacheThis event is deprecated. Refer to new event UNC_CHA_AG0_BL_CRD_ACQUIRED.TGR5event=0x88,umask=0x2011unc_h_ag0_bl_crd_occupancy.tgr0uncore cacheThis event is deprecated. Refer to new event UNC_CHA_AG0_BL_CRD_OCCUPANCY.TGR0event=0x8a,umask=111unc_h_ag0_bl_crd_occupancy.tgr1uncore cacheThis event is deprecated. Refer to new event UNC_CHA_AG0_BL_CRD_OCCUPANCY.TGR1event=0x8a,umask=211unc_h_ag0_bl_crd_occupancy.tgr2uncore cacheThis event is deprecated. Refer to new event UNC_CHA_AG0_BL_CRD_OCCUPANCY.TGR2event=0x8a,umask=411unc_h_ag0_bl_crd_occupancy.tgr3uncore cacheThis event is deprecated. Refer to new event UNC_CHA_AG0_BL_CRD_OCCUPANCY.TGR3event=0x8a,umask=811unc_h_ag0_bl_crd_occupancy.tgr4uncore cacheThis event is deprecated. Refer to new event UNC_CHA_AG0_BL_CRD_OCCUPANCY.TGR4event=0x8a,umask=0x1011unc_h_ag0_bl_crd_occupancy.tgr5uncore cacheThis event is deprecated. Refer to new event UNC_CHA_AG0_BL_CRD_OCCUPANCY.TGR5event=0x8a,umask=0x2011unc_h_ag1_ad_crd_acquired.tgr0uncore cacheThis event is deprecated. Refer to new event UNC_CHA_AG1_AD_CRD_ACQUIRED.TGR0event=0x84,umask=111unc_h_ag1_ad_crd_acquired.tgr1uncore cacheThis event is deprecated. Refer to new event UNC_CHA_AG1_AD_CRD_ACQUIRED.TGR1event=0x84,umask=211unc_h_ag1_ad_crd_acquired.tgr2uncore cacheThis event is deprecated. Refer to new event UNC_CHA_AG1_AD_CRD_ACQUIRED.TGR2event=0x84,umask=411unc_h_ag1_ad_crd_acquired.tgr3uncore cacheThis event is deprecated. Refer to new event UNC_CHA_AG1_AD_CRD_ACQUIRED.TGR3event=0x84,umask=811unc_h_ag1_ad_crd_acquired.tgr4uncore cacheThis event is deprecated. Refer to new event UNC_CHA_AG1_AD_CRD_ACQUIRED.TGR4event=0x84,umask=0x1011unc_h_ag1_ad_crd_acquired.tgr5uncore cacheThis event is deprecated. Refer to new event UNC_CHA_AG1_AD_CRD_ACQUIRED.TGR5event=0x84,umask=0x2011unc_h_ag1_ad_crd_occupancy.tgr0uncore cacheThis event is deprecated. 
Refer to new event UNC_CHA_AG1_AD_CRD_OCCUPANCY.TGR0event=0x86,umask=111unc_h_ag1_ad_crd_occupancy.tgr1uncore cacheThis event is deprecated. Refer to new event UNC_CHA_AG1_AD_CRD_OCCUPANCY.TGR1event=0x86,umask=211unc_h_ag1_ad_crd_occupancy.tgr2uncore cacheThis event is deprecated. Refer to new event UNC_CHA_AG1_AD_CRD_OCCUPANCY.TGR2event=0x86,umask=411unc_h_ag1_ad_crd_occupancy.tgr3uncore cacheThis event is deprecated. Refer to new event UNC_CHA_AG1_AD_CRD_OCCUPANCY.TGR3event=0x86,umask=811unc_h_ag1_ad_crd_occupancy.tgr4uncore cacheThis event is deprecated. Refer to new event UNC_CHA_AG1_AD_CRD_OCCUPANCY.TGR4event=0x86,umask=0x1011unc_h_ag1_ad_crd_occupancy.tgr5uncore cacheThis event is deprecated. Refer to new event UNC_CHA_AG1_AD_CRD_OCCUPANCY.TGR5event=0x86,umask=0x2011unc_h_ag1_bl_crd_occupancy.tgr0uncore cacheThis event is deprecated. Refer to new event UNC_CHA_AG1_BL_CRD_OCCUPANCY.TGR0event=0x8e,umask=111unc_h_ag1_bl_crd_occupancy.tgr1uncore cacheThis event is deprecated. Refer to new event UNC_CHA_AG1_BL_CRD_OCCUPANCY.TGR1event=0x8e,umask=211unc_h_ag1_bl_crd_occupancy.tgr2uncore cacheThis event is deprecated. Refer to new event UNC_CHA_AG1_BL_CRD_OCCUPANCY.TGR2event=0x8e,umask=411unc_h_ag1_bl_crd_occupancy.tgr3uncore cacheThis event is deprecated. Refer to new event UNC_CHA_AG1_BL_CRD_OCCUPANCY.TGR3event=0x8e,umask=811unc_h_ag1_bl_crd_occupancy.tgr4uncore cacheThis event is deprecated. Refer to new event UNC_CHA_AG1_BL_CRD_OCCUPANCY.TGR4event=0x8e,umask=0x1011unc_h_ag1_bl_crd_occupancy.tgr5uncore cacheThis event is deprecated. Refer to new event UNC_CHA_AG1_BL_CRD_OCCUPANCY.TGR5event=0x8e,umask=0x2011unc_h_ag1_bl_credits_acquired.tgr0uncore cacheThis event is deprecated. Refer to new event UNC_CHA_AG1_BL_CREDITS_ACQUIRED.TGR0event=0x8c,umask=111unc_h_ag1_bl_credits_acquired.tgr1uncore cacheThis event is deprecated. Refer to new event UNC_CHA_AG1_BL_CREDITS_ACQUIRED.TGR1event=0x8c,umask=211unc_h_ag1_bl_credits_acquired.tgr2uncore cacheThis event is deprecated. Refer to new event UNC_CHA_AG1_BL_CREDITS_ACQUIRED.TGR2event=0x8c,umask=411unc_h_ag1_bl_credits_acquired.tgr3uncore cacheThis event is deprecated. Refer to new event UNC_CHA_AG1_BL_CREDITS_ACQUIRED.TGR3event=0x8c,umask=811unc_h_ag1_bl_credits_acquired.tgr4uncore cacheThis event is deprecated. Refer to new event UNC_CHA_AG1_BL_CREDITS_ACQUIRED.TGR4event=0x8c,umask=0x1011unc_h_ag1_bl_credits_acquired.tgr5uncore cacheThis event is deprecated. Refer to new event UNC_CHA_AG1_BL_CREDITS_ACQUIRED.TGR5event=0x8c,umask=0x2011unc_h_bypass_cha_imc.intermediateuncore cacheThis event is deprecated. Refer to new event UNC_CHA_BYPASS_CHA_IMC.INTERMEDIATEevent=0x57,umask=211unc_h_bypass_cha_imc.not_takenuncore cacheThis event is deprecated. Refer to new event UNC_CHA_BYPASS_CHA_IMC.NOT_TAKENevent=0x57,umask=411unc_h_bypass_cha_imc.takenuncore cacheThis event is deprecated. Refer to new event UNC_CHA_BYPASS_CHA_IMC.TAKENevent=0x57,umask=111unc_h_clockuncore cacheThis event is deprecated. Refer to new event UNC_CHA_CMS_CLOCKTICKSevent=0xc011unc_h_core_pma.c1_stateuncore cacheThis event is deprecated. Refer to new event UNC_CHA_CORE_PMA.C1_STATEevent=0x17,umask=111unc_h_core_pma.c1_transitionuncore cacheThis event is deprecated. Refer to new event UNC_CHA_CORE_PMA.C1_TRANSITIONevent=0x17,umask=211unc_h_core_pma.c6_stateuncore cacheThis event is deprecated. Refer to new event UNC_CHA_CORE_PMA.C6_STATEevent=0x17,umask=411unc_h_core_pma.c6_transitionuncore cacheThis event is deprecated. 
Refer to new event UNC_CHA_CORE_PMA.C6_TRANSITIONevent=0x17,umask=811unc_h_core_pma.gvuncore cacheThis event is deprecated. Refer to new event UNC_CHA_CORE_PMA.GVevent=0x17,umask=0x1011unc_h_core_snp.any_gtoneuncore cacheThis event is deprecated. Refer to new event UNC_CHA_CORE_SNP.ANY_GTONEevent=0x33,umask=0xe211unc_h_core_snp.any_oneuncore cacheThis event is deprecated. Refer to new event UNC_CHA_CORE_SNP.ANY_ONEevent=0x33,umask=0xe111unc_h_core_snp.any_remoteuncore cacheThis event is deprecated. Refer to new event UNC_CHA_CORE_SNP.ANY_REMOTEevent=0x33,umask=0xe411unc_h_core_snp.core_gtoneuncore cacheThis event is deprecated. Refer to new event UNC_CHA_CORE_SNP.CORE_GTONEevent=0x33,umask=0x4211unc_h_core_snp.core_oneuncore cacheThis event is deprecated. Refer to new event UNC_CHA_CORE_SNP.CORE_ONEevent=0x33,umask=0x4111unc_h_core_snp.core_remoteuncore cacheThis event is deprecated. Refer to new event UNC_CHA_CORE_SNP.CORE_REMOTEevent=0x33,umask=0x4411unc_h_core_snp.evict_gtoneuncore cacheThis event is deprecated. Refer to new event UNC_CHA_CORE_SNP.EVICT_GTONEevent=0x33,umask=0x8211unc_h_core_snp.evict_oneuncore cacheThis event is deprecated. Refer to new event UNC_CHA_CORE_SNP.EVICT_ONEevent=0x33,umask=0x8111unc_h_core_snp.evict_remoteuncore cacheThis event is deprecated. Refer to new event UNC_CHA_CORE_SNP.EVICT_REMOTEevent=0x33,umask=0x8411unc_h_core_snp.ext_gtoneuncore cacheThis event is deprecated. Refer to new event UNC_CHA_CORE_SNP.EXT_GTONEevent=0x33,umask=0x2211unc_h_core_snp.ext_oneuncore cacheThis event is deprecated. Refer to new event UNC_CHA_CORE_SNP.EXT_ONEevent=0x33,umask=0x2111unc_h_core_snp.ext_remoteuncore cacheThis event is deprecated. Refer to new event UNC_CHA_CORE_SNP.EXT_REMOTEevent=0x33,umask=0x2411unc_h_counter0_occupancyuncore cacheThis event is deprecated. Refer to new event UNC_CHA_COUNTER0_OCCUPANCYevent=0x1f11unc_h_dir_lookup.no_snpuncore cacheThis event is deprecated. Refer to new event UNC_CHA_DIR_LOOKUP.NO_SNPevent=0x53,umask=211unc_h_dir_lookup.snpuncore cacheThis event is deprecated. Refer to new event UNC_CHA_DIR_LOOKUP.SNPevent=0x53,umask=111unc_h_dir_update.hauncore cacheThis event is deprecated. Refer to new event UNC_CHA_DIR_UPDATE.HAevent=0x54,umask=111unc_h_dir_update.toruncore cacheThis event is deprecated. Refer to new event UNC_CHA_DIR_UPDATE.TORevent=0x54,umask=211unc_h_egress_ordering.iv_snoopgo_dnuncore cacheThis event is deprecated. Refer to new event UNC_CHA_EGRESS_ORDERING.IV_SNOOPGO_DNevent=0xae,umask=411unc_h_egress_ordering.iv_snoopgo_upuncore cacheThis event is deprecated. Refer to new event UNC_CHA_EGRESS_ORDERING.IV_SNOOPGO_UPevent=0xae,umask=111unc_h_hitme_hit.ex_rdsuncore cacheThis event is deprecated. Refer to new event UNC_CHA_HITME_HIT.EX_RDSevent=0x5f,umask=111unc_h_hitme_hit.shared_ownrequncore cacheThis event is deprecated. Refer to new event UNC_CHA_HITME_HIT.SHARED_OWNREQevent=0x5f,umask=411unc_h_hitme_hit.wbmtoeuncore cacheThis event is deprecated. Refer to new event UNC_CHA_HITME_HIT.WBMTOEevent=0x5f,umask=811unc_h_hitme_hit.wbmtoi_or_suncore cacheThis event is deprecated. Refer to new event UNC_CHA_HITME_HIT.WBMTOI_OR_Sevent=0x5f,umask=0x1011unc_h_hitme_lookup.readuncore cacheThis event is deprecated. Refer to new event UNC_CHA_HITME_LOOKUP.READevent=0x5e,umask=111unc_h_hitme_lookup.writeuncore cacheThis event is deprecated. Refer to new event UNC_CHA_HITME_LOOKUP.WRITEevent=0x5e,umask=211unc_h_hitme_miss.notshared_rdinvownuncore cacheThis event is deprecated. 
Refer to new event UNC_CHA_HITME_MISS.NOTSHARED_RDINVOWNevent=0x60,umask=0x4011unc_h_hitme_miss.read_or_invuncore cacheThis event is deprecated. Refer to new event UNC_CHA_HITME_MISS.READ_OR_INVevent=0x60,umask=0x8011unc_h_hitme_miss.shared_rdinvownuncore cacheThis event is deprecated. Refer to new event UNC_CHA_HITME_MISS.SHARED_RDINVOWNevent=0x60,umask=0x2011unc_h_hitme_update.deallocateuncore cacheThis event is deprecated. Refer to new event UNC_CHA_HITME_UPDATE.DEALLOCATEevent=0x61,umask=0x1011unc_h_hitme_update.deallocate_rspfwdi_locuncore cacheThis event is deprecated. Refer to new event UNC_CHA_HITME_UPDATE.DEALLOCATE_RSPFWDI_LOCevent=0x61,umask=111unc_h_hitme_update.rdinvownuncore cacheThis event is deprecated. Refer to new event UNC_CHA_HITME_UPDATE.RDINVOWNevent=0x61,umask=811unc_h_hitme_update.rspfwdi_remuncore cacheThis event is deprecated. Refer to new event UNC_CHA_HITME_UPDATE.RSPFWDI_REMevent=0x61,umask=211unc_h_hitme_update.shareduncore cacheThis event is deprecated. Refer to new event UNC_CHA_HITME_UPDATE.SHAREDevent=0x61,umask=411unc_h_horz_ring_ad_in_use.left_evenuncore cacheThis event is deprecated. Refer to new event UNC_CHA_HORZ_RING_AD_IN_USE.LEFT_EVENevent=0xa7,umask=111unc_h_horz_ring_ad_in_use.left_odduncore cacheThis event is deprecated. Refer to new event UNC_CHA_HORZ_RING_AD_IN_USE.LEFT_ODDevent=0xa7,umask=211unc_h_horz_ring_ad_in_use.right_evenuncore cacheThis event is deprecated. Refer to new event UNC_CHA_HORZ_RING_AD_IN_USE.RIGHT_EVENevent=0xa7,umask=411unc_h_horz_ring_ad_in_use.right_odduncore cacheThis event is deprecated. Refer to new event UNC_CHA_HORZ_RING_AD_IN_USE.RIGHT_ODDevent=0xa7,umask=811unc_h_horz_ring_ak_in_use.left_evenuncore cacheThis event is deprecated. Refer to new event UNC_CHA_HORZ_RING_AK_IN_USE.LEFT_EVENevent=0xa9,umask=111unc_h_horz_ring_ak_in_use.left_odduncore cacheThis event is deprecated. Refer to new event UNC_CHA_HORZ_RING_AK_IN_USE.LEFT_ODDevent=0xa9,umask=211unc_h_horz_ring_ak_in_use.right_evenuncore cacheThis event is deprecated. Refer to new event UNC_CHA_HORZ_RING_AK_IN_USE.RIGHT_EVENevent=0xa9,umask=411unc_h_horz_ring_ak_in_use.right_odduncore cacheThis event is deprecated. Refer to new event UNC_CHA_HORZ_RING_AK_IN_USE.RIGHT_ODDevent=0xa9,umask=811unc_h_horz_ring_bl_in_use.left_evenuncore cacheThis event is deprecated. Refer to new event UNC_CHA_HORZ_RING_BL_IN_USE.LEFT_EVENevent=0xab,umask=111unc_h_horz_ring_bl_in_use.left_odduncore cacheThis event is deprecated. Refer to new event UNC_CHA_HORZ_RING_BL_IN_USE.LEFT_ODDevent=0xab,umask=211unc_h_horz_ring_bl_in_use.right_evenuncore cacheThis event is deprecated. Refer to new event UNC_CHA_HORZ_RING_BL_IN_USE.RIGHT_EVENevent=0xab,umask=411unc_h_horz_ring_bl_in_use.right_odduncore cacheThis event is deprecated. Refer to new event UNC_CHA_HORZ_RING_BL_IN_USE.RIGHT_ODDevent=0xab,umask=811unc_h_horz_ring_iv_in_use.leftuncore cacheThis event is deprecated. Refer to new event UNC_CHA_HORZ_RING_IV_IN_USE.LEFTevent=0xad,umask=111unc_h_horz_ring_iv_in_use.rightuncore cacheThis event is deprecated. Refer to new event UNC_CHA_HORZ_RING_IV_IN_USE.RIGHTevent=0xad,umask=411unc_h_imc_reads_count.normaluncore cacheThis event is deprecated. Refer to new event UNC_CHA_IMC_READS_COUNT.NORMALevent=0x59,umask=111unc_h_imc_reads_count.priorityuncore cacheThis event is deprecated. Refer to new event UNC_CHA_IMC_READS_COUNT.PRIORITYevent=0x59,umask=211unc_h_imc_writes_count.fulluncore cacheThis event is deprecated. 
Refer to new event UNC_CHA_IMC_WRITES_COUNT.FULLevent=0x5b,umask=111unc_h_imc_writes_count.full_miguncore cacheThis event is deprecated. Refer to new event UNC_CHA_IMC_WRITES_COUNT.FULL_MIGevent=0x5b,umask=0x1011unc_h_imc_writes_count.full_priorityuncore cacheThis event is deprecated. Refer to new event UNC_CHA_IMC_WRITES_COUNT.FULL_PRIORITYevent=0x5b,umask=411unc_h_imc_writes_count.partialuncore cacheThis event is deprecated. Refer to new event UNC_CHA_IMC_WRITES_COUNT.PARTIALevent=0x5b,umask=211unc_h_imc_writes_count.partial_miguncore cacheThis event is deprecated. Refer to new event UNC_CHA_IMC_WRITES_COUNT.PARTIAL_MIGevent=0x5b,umask=0x2011unc_h_imc_writes_count.partial_priorityuncore cacheThis event is deprecated. Refer to new event UNC_CHA_IMC_WRITES_COUNT.PARTIAL_PRIORITYevent=0x5b,umask=811unc_h_iodc_alloc.invitomuncore cacheThis event is deprecated. Refer to new event UNC_CHA_IODC_ALLOC.INVITOMevent=0x62,umask=111unc_h_iodc_alloc.iodcfulluncore cacheThis event is deprecated. Refer to new event UNC_CHA_IODC_ALLOC.IODCFULLevent=0x62,umask=211unc_h_iodc_alloc.osbgateduncore cacheThis event is deprecated. Refer to new event UNC_CHA_IODC_ALLOC.OSBGATEDevent=0x62,umask=411unc_h_iodc_dealloc.alluncore cacheThis event is deprecated. Refer to new event UNC_CHA_IODC_DEALLOC.ALLevent=0x63,umask=0x1011unc_h_iodc_dealloc.snpoutuncore cacheThis event is deprecated. Refer to new event UNC_CHA_IODC_DEALLOC.SNPOUTevent=0x63,umask=811unc_h_iodc_dealloc.wbmtoeuncore cacheThis event is deprecated. Refer to new event UNC_CHA_IODC_DEALLOC.WBMTOEevent=0x63,umask=111unc_h_iodc_dealloc.wbmtoiuncore cacheThis event is deprecated. Refer to new event UNC_CHA_IODC_DEALLOC.WBMTOIevent=0x63,umask=211unc_h_iodc_dealloc.wbpushmtoiuncore cacheThis event is deprecated. Refer to new event UNC_CHA_IODC_DEALLOC.WBPUSHMTOIevent=0x63,umask=411unc_h_misc.cv0_pref_missuncore cacheThis event is deprecated. Refer to new event UNC_CHA_MISC.CV0_PREF_MISSevent=0x39,umask=0x2011unc_h_misc.cv0_pref_vicuncore cacheThis event is deprecated. Refer to new event UNC_CHA_MISC.CV0_PREF_VICevent=0x39,umask=0x1011unc_h_misc.rfo_hit_suncore cacheThis event is deprecated. Refer to new event UNC_CHA_MISC.RFO_HIT_Sevent=0x39,umask=811unc_h_misc.rspi_was_fseuncore cacheThis event is deprecated. Refer to new event UNC_CHA_MISC.RSPI_WAS_FSEevent=0x39,umask=111unc_h_misc.wc_aliasinguncore cacheThis event is deprecated. Refer to new event UNC_CHA_MISC.WC_ALIASINGevent=0x39,umask=211unc_h_osbuncore cacheThis event is deprecated. Refer to new event UNC_CHA_OSBevent=0x5511unc_h_read_no_credits.edc0_smi2uncore cacheThis event is deprecated. Refer to new event UNC_CHA_READ_NO_CREDITS.EDC0_SMI2event=0x58,umask=411unc_h_read_no_credits.edc1_smi3uncore cacheThis event is deprecated. Refer to new event UNC_CHA_READ_NO_CREDITS.EDC1_SMI3event=0x58,umask=811unc_h_read_no_credits.edc2_smi4uncore cacheThis event is deprecated. Refer to new event UNC_CHA_READ_NO_CREDITS.EDC2_SMI4event=0x58,umask=0x1011unc_h_read_no_credits.edc3_smi5uncore cacheThis event is deprecated. Refer to new event UNC_CHA_READ_NO_CREDITS.EDC3_SMI5event=0x58,umask=0x2011unc_h_read_no_credits.mc0_smi0uncore cacheThis event is deprecated. Refer to new event UNC_CHA_READ_NO_CREDITS.MC0_SMI0event=0x58,umask=111unc_h_read_no_credits.mc1_smi1uncore cacheThis event is deprecated. Refer to new event UNC_CHA_READ_NO_CREDITS.MC1_SMI1event=0x58,umask=211unc_h_requests.invitoe_localuncore cacheThis event is deprecated. 
Refer to new event UNC_CHA_REQUESTS.INVITOE_LOCALevent=0x50,umask=0x1011unc_h_requests.invitoe_remoteuncore cacheThis event is deprecated. Refer to new event UNC_CHA_REQUESTS.INVITOE_REMOTEevent=0x50,umask=0x2011unc_h_requests.readsuncore cacheread requests from home agentevent=0x50,umask=311unc_h_requests.reads_localuncore cacheread requests from local home agentevent=0x50,umask=111unc_h_requests.reads_remoteuncore cacheread requests from remote home agentevent=0x50,umask=211unc_h_requests.writesuncore cachewrite requests from home agentevent=0x50,umask=0xc11unc_h_requests.writes_localuncore cachewrite requests from local home agentevent=0x50,umask=411unc_h_requests.writes_remoteuncore cachewrite requests from remote home agentevent=0x50,umask=811unc_h_ring_bounces_horz.aduncore cacheThis event is deprecated. Refer to new event UNC_CHA_RING_BOUNCES_HORZ.ADevent=0xa1,umask=111unc_h_ring_bounces_horz.akuncore cacheThis event is deprecated. Refer to new event UNC_CHA_RING_BOUNCES_HORZ.AKevent=0xa1,umask=211unc_h_ring_bounces_horz.bluncore cacheThis event is deprecated. Refer to new event UNC_CHA_RING_BOUNCES_HORZ.BLevent=0xa1,umask=411unc_h_ring_bounces_horz.ivuncore cacheThis event is deprecated. Refer to new event UNC_CHA_RING_BOUNCES_HORZ.IVevent=0xa1,umask=811unc_h_ring_bounces_vert.aduncore cacheThis event is deprecated. Refer to new event UNC_CHA_RING_BOUNCES_VERT.ADevent=0xa0,umask=111unc_h_ring_bounces_vert.akuncore cacheThis event is deprecated. Refer to new event UNC_CHA_RING_BOUNCES_VERT.AKevent=0xa0,umask=211unc_h_ring_bounces_vert.bluncore cacheThis event is deprecated. Refer to new event UNC_CHA_RING_BOUNCES_VERT.BLevent=0xa0,umask=411unc_h_ring_bounces_vert.ivuncore cacheThis event is deprecated. Refer to new event UNC_CHA_RING_BOUNCES_VERT.IVevent=0xa0,umask=811unc_h_ring_sink_starved_horz.aduncore cacheThis event is deprecated. Refer to new event UNC_CHA_RING_SINK_STARVED_HORZ.ADevent=0xa3,umask=111unc_h_ring_sink_starved_horz.akuncore cacheThis event is deprecated. Refer to new event UNC_CHA_RING_SINK_STARVED_HORZ.AKevent=0xa3,umask=211unc_h_ring_sink_starved_horz.ak_ag1uncore cacheThis event is deprecated. Refer to new event UNC_CHA_RING_SINK_STARVED_HORZ.AK_AG1event=0xa3,umask=0x2011unc_h_ring_sink_starved_horz.bluncore cacheThis event is deprecated. Refer to new event UNC_CHA_RING_SINK_STARVED_HORZ.BLevent=0xa3,umask=411unc_h_ring_sink_starved_horz.ivuncore cacheThis event is deprecated. Refer to new event UNC_CHA_RING_SINK_STARVED_HORZ.IVevent=0xa3,umask=811unc_h_ring_sink_starved_vert.aduncore cacheThis event is deprecated. Refer to new event UNC_CHA_RING_SINK_STARVED_VERT.ADevent=0xa2,umask=111unc_h_ring_sink_starved_vert.akuncore cacheThis event is deprecated. Refer to new event UNC_CHA_RING_SINK_STARVED_VERT.AKevent=0xa2,umask=211unc_h_ring_sink_starved_vert.bluncore cacheThis event is deprecated. Refer to new event UNC_CHA_RING_SINK_STARVED_VERT.BLevent=0xa2,umask=411unc_h_ring_sink_starved_vert.ivuncore cacheThis event is deprecated. Refer to new event UNC_CHA_RING_SINK_STARVED_VERT.IVevent=0xa2,umask=811unc_h_rxc_inserts.ipquncore cacheThis event is deprecated. Refer to new event UNC_CHA_RxC_INSERTS.IPQevent=0x13,umask=411unc_h_rxc_inserts.irquncore cacheThis event is deprecated. Refer to new event UNC_CHA_RxC_INSERTS.IRQevent=0x13,umask=111unc_h_rxc_inserts.irq_rejuncore cacheThis event is deprecated. Refer to new event UNC_CHA_RxC_INSERTS.IRQ_REJevent=0x13,umask=211unc_h_rxc_inserts.prquncore cacheThis event is deprecated. 
Further deprecated CHA events, same conventions as above:

unc_h_ring_bounces_horz -> UNC_CHA_RING_BOUNCES_HORZ (event=0xa1): ad=0x1, ak=0x2, bl=0x4, iv=0x8
unc_h_ring_bounces_vert -> UNC_CHA_RING_BOUNCES_VERT (event=0xa0): ad=0x1, ak=0x2, bl=0x4, iv=0x8
unc_h_ring_sink_starved_horz -> UNC_CHA_RING_SINK_STARVED_HORZ (event=0xa3): ad=0x1, ak=0x2, ak_ag1=0x20, bl=0x4, iv=0x8
unc_h_ring_sink_starved_vert -> UNC_CHA_RING_SINK_STARVED_VERT (event=0xa2): ad=0x1, ak=0x2, bl=0x4, iv=0x8
unc_h_rxc_inserts -> UNC_CHA_RxC_INSERTS (event=0x13): irq=0x1, irq_rej=0x2, ipq=0x4, prq=0x10, prq_rej=0x20, rrq=0x40, wbq=0x80
unc_h_rxc_occupancy -> UNC_CHA_RxC_OCCUPANCY (event=0x11): irq=0x1, ipq=0x4, rrq=0x40, wbq=0x80

The RxC reject/retry families share two umask sets.
VN0 set: ad_req_vn0=0x1, ad_rsp_vn0=0x2, bl_rsp_vn0=0x4, bl_wb_vn0=0x8, bl_ncb_vn0=0x10, bl_ncs_vn0=0x20.
Stage-1 set: any*=0x1 (replacement suffix is ANY0), ha=0x2, llc_victim=0x4, sf_victim=0x8, victim=0x10, llc_or_sf_way=0x20, allow_snp=0x40, pa_match=0x80.

unc_h_rxc_ipq0_reject -> UNC_CHA_RxC_IPQ0_REJECT (event=0x22): VN0 set
unc_h_rxc_ipq1_reject -> UNC_CHA_RxC_IPQ1_REJECT (event=0x23): stage-1 set (any_ipq0 -> ANY0)
unc_h_rxc_irq0_reject -> UNC_CHA_RxC_IRQ0_REJECT (event=0x18): VN0 set
unc_h_rxc_irq1_reject -> UNC_CHA_RxC_IRQ1_REJECT (event=0x19): stage-1 set (any_reject_irq0 -> ANY0)
unc_h_rxc_ismq0_reject -> UNC_CHA_RxC_ISMQ0_REJECT (event=0x24): VN0 set
unc_h_rxc_ismq0_retry -> UNC_CHA_RxC_ISMQ0_RETRY (event=0x2c): VN0 set
unc_h_rxc_ismq1_reject -> UNC_CHA_RxC_ISMQ1_REJECT (event=0x25): any_ismq0 -> ANY0 =0x1, ha=0x2
unc_h_rxc_ismq1_retry -> UNC_CHA_RxC_ISMQ1_RETRY (event=0x2d): any -> ANY0 =0x1, ha=0x2
unc_h_rxc_other0_retry -> UNC_CHA_RxC_OTHER0_RETRY (event=0x2e): VN0 set
unc_h_rxc_other1_retry -> UNC_CHA_RxC_OTHER1_RETRY (event=0x2f): stage-1 set (any -> ANY0)
unc_h_rxc_prq0_reject -> UNC_CHA_RxC_PRQ0_REJECT (event=0x20): VN0 set
unc_h_rxc_prq1_reject -> UNC_CHA_RxC_PRQ1_REJECT (event=0x21): stage-1 set (any_prq0 -> ANY0)
unc_h_rxc_req_q0_retry -> UNC_CHA_RxC_REQ_Q0_RETRY (event=0x2a): VN0 set
unc_h_rxc_req_q1_retry -> UNC_CHA_RxC_REQ_Q1_RETRY (event=0x2b): stage-1 set (any -> ANY0)
unc_h_rxc_rrq0_reject -> UNC_CHA_RxC_RRQ0_REJECT (event=0x26): VN0 set
unc_h_rxc_rrq1_reject -> UNC_CHA_RxC_RRQ1_REJECT (event=0x27): stage-1 set (any_rrq0 -> ANY0)
unc_h_rxc_wbq0_reject -> UNC_CHA_RxC_WBQ0_REJECT (event=0x28): VN0 set
unc_h_rxc_wbq1_reject -> UNC_CHA_RxC_WBQ1_REJECT (event=0x29): stage-1 set (any_wbq0 -> ANY0)

unc_h_rxr_busy_starved -> UNC_CHA_RxR_BUSY_STARVED (event=0xb4): ad_bnc=0x1, bl_bnc=0x4, ad_crd=0x10, bl_crd=0x40
unc_h_rxr_bypass -> UNC_CHA_RxR_BYPASS (event=0xb2): ad_bnc=0x1, ak_bnc=0x2, bl_bnc=0x4, iv_bnc=0x8, ad_crd=0x10, bl_crd=0x40
unc_h_rxr_crd_starved -> UNC_CHA_RxR_CRD_STARVED (event=0xb3): ad_bnc=0x1, ak_bnc=0x2, bl_bnc=0x4, iv_bnc=0x8, ad_crd=0x10, bl_crd=0x40, ifv=0x80
unc_h_rxr_inserts -> UNC_CHA_RxR_INSERTS (event=0xb1): ad_bnc=0x1, ak_bnc=0x2, bl_bnc=0x4, iv_bnc=0x8, ad_crd=0x10, bl_crd=0x40
unc_h_rxr_occupancy -> UNC_CHA_RxR_OCCUPANCY (event=0xb0): ad_bnc=0x1, ak_bnc=0x2, bl_bnc=0x4, iv_bnc=0x8, ad_crd=0x10, bl_crd=0x40
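All of these encodings plug into perf's uncore PMU interface. On the common Intel uncore layout, which we assume here, the event code occupies config bits 0-7 and the umask bits 8-15; a sketch of composing a raw config word from the tables, plus a roughly equivalent command line (the PMU instance name, e.g. uncore_cha_0, varies by machine):

    # Compose a raw perf config for an uncore event, assuming the usual
    # Intel uncore format: event in config[7:0], umask in config[15:8].
    def uncore_config(event: int, umask: int = 0) -> int:
        return (event & 0xFF) | ((umask & 0xFF) << 8)

    # UNC_CHA_REQUESTS.READS from the table above: event=0x50, umask=0x3.
    assert uncore_config(0x50, 0x3) == 0x350

    # Roughly equivalent perf invocation (PMU name is machine-dependent):
    #   perf stat -a -e 'uncore_cha_0/event=0x50,umask=0x3/' sleep 1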
unc_h_sf_eviction -> UNC_CHA_SF_EVICTION (event=0x3d): m_state=0x1, e_state=0x2, s_state=0x4
unc_h_snoops_sent -> UNC_CHA_SNOOPS_SENT (event=0x51): (no suffix) -> ALL =0x1, local=0x4, remote=0x8, bcst_loc -> BCST_LOCAL =0x10, bcst_rem -> BCST_REMOTE =0x20, direct_loc -> DIRECT_LOCAL =0x40, direct_rem -> DIRECT_REMOTE =0x80
unc_h_snoop_resp -> UNC_CHA_SNOOP_RESP (event=0x5c): rspi=0x1, rsps=0x2, rspifwd=0x4, rspsfwd=0x8, rsp_wb -> RSP_WBWB =0x10, rsp_fwd_wb=0x20, rspcnflct -> RSPCNFLCTS =0x40, rspfwd=0x80
unc_h_snp_rsp_rcv_local -> UNC_CHA_SNOOP_RESP_LOCAL (event=0x5d): rspi=0x1, rsps=0x2, rspifwd=0x4, rspsfwd=0x8, rsp_wb=0x10, rsp_fwd_wb=0x20, rspcnflct=0x40, rspfwd=0x80

The STALL_NO_TxR_HORZ_CRD families all use the transgress umasks tgr0=0x1, tgr1=0x2, tgr2=0x4, tgr3=0x8, tgr4=0x10, tgr5=0x20:
unc_h_stall_no_txr_horz_crd_ad_ag0 -> UNC_CHA_STALL_NO_TxR_HORZ_CRD_AD_AG0 (event=0xd0)
unc_h_stall_no_txr_horz_crd_ad_ag1 -> UNC_CHA_STALL_NO_TxR_HORZ_CRD_AD_AG1 (event=0xd2)
unc_h_stall_no_txr_horz_crd_bl_ag0 -> UNC_CHA_STALL_NO_TxR_HORZ_CRD_BL_AG0 (event=0xd4)
unc_h_stall_no_txr_horz_crd_bl_ag1 -> UNC_CHA_STALL_NO_TxR_HORZ_CRD_BL_AG1 (event=0xd6)

unc_h_txr_horz_ads_used -> UNC_CHA_TxR_HORZ_ADS_USED (event=0x9d): ad_bnc=0x1, ak_bnc=0x2, bl_bnc=0x4, ad_crd=0x10, bl_crd=0x40
unc_h_txr_horz_bypass -> UNC_CHA_TxR_HORZ_BYPASS (event=0x9f): ad_bnc=0x1, ak_bnc=0x2, bl_bnc=0x4, iv_bnc=0x8, ad_crd=0x10, bl_crd=0x40
unc_h_txr_horz_cycles_full -> UNC_CHA_TxR_HORZ_CYCLES_FULL (event=0x96): ad_bnc=0x1, ak_bnc=0x2, bl_bnc=0x4, iv_bnc=0x8, ad_crd=0x10, bl_crd=0x40
unc_h_txr_horz_cycles_ne -> UNC_CHA_TxR_HORZ_CYCLES_NE (event=0x97): ad_bnc=0x1, ak_bnc=0x2, bl_bnc=0x4, iv_bnc=0x8, ad_crd=0x10, bl_crd=0x40
unc_h_txr_horz_inserts -> UNC_CHA_TxR_HORZ_INSERTS (event=0x95): ad_bnc=0x1, ak_bnc=0x2, bl_bnc=0x4, iv_bnc=0x8, ad_crd=0x10, bl_crd=0x40
unc_h_txr_horz_nack -> UNC_CHA_TxR_HORZ_NACK (event=0x99): ad_bnc=0x1, ak_bnc=0x2, bl_bnc=0x4, iv_bnc=0x8, ad_crd=0x20, bl_crd=0x40
unc_h_txr_horz_occupancy -> UNC_CHA_TxR_HORZ_OCCUPANCY (event=0x94): ad_bnc=0x1, ak_bnc=0x2, bl_bnc=0x4, iv_bnc=0x8, ad_crd=0x10, bl_crd=0x40
unc_h_txr_horz_starved -> UNC_CHA_TxR_HORZ_STARVED (event=0x9b): ad_bnc=0x1, ak_bnc=0x2, bl_bnc=0x4, iv_bnc=0x8
unc_h_txr_vert_ads_used -> UNC_CHA_TxR_VERT_ADS_USED (event=0x9c): ad_ag0=0x1, ak_ag0=0x2, bl_ag0=0x4, ad_ag1=0x10, ak_ag1=0x20, bl_ag1=0x40
unc_h_txr_vert_bypass -> UNC_CHA_TxR_VERT_BYPASS (event=0x9e): ad_ag0=0x1, ak_ag0=0x2, bl_ag0=0x4, iv_ag1 -> IV =0x8, ad_ag1=0x10, ak_ag1=0x20, bl_ag1=0x40
unc_h_txr_vert_cycles_full -> UNC_CHA_TxR_VERT_CYCLES_FULL (event=0x92): ad_ag0=0x1, ak_ag0=0x2, bl_ag0=0x4, iv_ag0 -> IV =0x8, ad_ag1=0x10, ak_ag1=0x20, bl_ag1=0x40
unc_h_txr_vert_cycles_ne -> UNC_CHA_TxR_VERT_CYCLES_NE (event=0x93): ad_ag0=0x1, ak_ag0=0x2, bl_ag0=0x4, iv_ag0 -> IV =0x8, ad_ag1=0x10, ak_ag1=0x20, bl_ag1=0x40
unc_h_txr_vert_inserts -> UNC_CHA_TxR_VERT_INSERTS (event=0x91): ad_ag0=0x1, ak_ag0=0x2, bl_ag0=0x4, iv_ag0 -> IV =0x8, ad_ag1=0x10, ak_ag1=0x20, bl_ag1=0x40
unc_h_txr_vert_nack -> UNC_CHA_TxR_VERT_NACK (event=0x98): ad_ag0=0x1, ak_ag0=0x2, bl_ag0=0x4, iv -> IV =0x8, ad_ag1=0x10, ak_ag1=0x20, bl_ag1=0x40
unc_h_txr_vert_occupancy -> UNC_CHA_TxR_VERT_OCCUPANCY (event=0x90): ad_ag0=0x1, ak_ag0=0x2, bl_ag0=0x4, iv_ag0 -> IV =0x8, ad_ag1=0x10, ak_ag1=0x20, bl_ag1=0x40
unc_h_txr_vert_starved -> UNC_CHA_TxR_VERT_STARVED (event=0x9a): ad_ag0=0x1, ak_ag0=0x2, bl_ag0=0x4, iv -> IV =0x8, ad_ag1=0x10, ak_ag1=0x20, bl_ag1=0x40
unc_h_vert_ring_ad_in_use -> UNC_CHA_VERT_RING_AD_IN_USE (event=0xa6): up_even=0x1, up_odd=0x2, dn_even=0x4, dn_odd=0x8
unc_h_vert_ring_ak_in_use -> UNC_CHA_VERT_RING_AK_IN_USE (event=0xa8): up_even=0x1, up_odd=0x2, dn_even=0x4, dn_odd=0x8
unc_h_vert_ring_bl_in_use -> UNC_CHA_VERT_RING_BL_IN_USE (event=0xaa): up_even=0x1, up_odd=0x2, dn_even=0x4, dn_odd=0x8
unc_h_vert_ring_iv_in_use -> UNC_CHA_VERT_RING_IV_IN_USE (event=0xac): up=0x1, dn=0x4
unc_h_wb_push_mtoi -> UNC_CHA_WB_PUSH_MTOI (event=0x56): llc=0x1, mem=0x2
unc_h_write_no_credits -> UNC_CHA_WRITE_NO_CREDITS (event=0x5a): mc0_smi0=0x1, mc1_smi1=0x2, edc0_smi2=0x4, edc1_smi3=0x8, edc2_smi4=0x10, edc3_smi5=0x20
unc_h_xsnp_resp -> UNC_CHA_XSNP_RESP (event=0x32), umask by snoop source and response type:
  ext_*:   rsp_hitfse=0x21, rsps_fwdfe=0x22, rspi_fwdfe=0x24, rsps_fwdm=0x28, rspi_fwdm=0x30
  core_*:  rsp_hitfse=0x41, rsps_fwdfe=0x42, rspi_fwdfe=0x44, rsps_fwdm=0x48, rspi_fwdm=0x50
  evict_*: rsp_hitfse=0x81, rsps_fwdfe=0x82, rspi_fwdfe=0x84, rsps_fwdm=0x88, rspi_fwdm=0x90
  any_*:   rsp_hitfse=0xe1, rsps_fwdfe=0xe2, rspi_fwdfe=0xe4, rsps_fwdm=0xe8, rspi_fwdm=0xf0
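A pattern worth noting in the UNC_CHA_XSNP_RESP umasks above: they appear to decompose into a snoop-source field (ext=0x20, core=0x40, evict=0x80; any=0xe0 is their OR) combined with one response-type bit (rsp_hitfse=0x1, rsps_fwdfe=0x2, rspi_fwdfe=0x4, rsps_fwdm=0x8, rspi_fwdm=0x10). This is our reading of the table, not documented structure; a small sketch that checks it against the listed values:

    # Apparent bit fields of the UNC_CHA_XSNP_RESP (event=0x32) umasks.
    SRC = {"ext": 0x20, "core": 0x40, "evict": 0x80}
    ANY = SRC["ext"] | SRC["core"] | SRC["evict"]  # the any_* prefix
    RSP = {"rsp_hitfse": 0x01, "rsps_fwdfe": 0x02, "rspi_fwdfe": 0x04,
           "rsps_fwdm": 0x08, "rspi_fwdm": 0x10}

    assert ANY == 0xe0
    assert SRC["core"] | RSP["rsps_fwdm"] == 0x48  # core_rsps_fwdm above
    assert ANY | RSP["rspi_fwdm"] == 0xf0          # any_rspi_fwdm above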
Uncore interconnect (IRP) events:

unc_i_clockticks (event=0x1): IRP clocks.
unc_i_cache_total_occupancy (event=0xf), total write cache occupancy: accumulates the number of reads and writes outstanding in the uncore in each cycle; effectively the sum of the READ_OCCUPANCY and WRITE_OCCUPANCY events. Suffixes: any=0x1 (all requests from any source port), iv_q=0x2 (snoops), mem=0x4 (total IRP occupancy of inbound read and write requests; effectively the sum of read occupancy and write occupancy).
unc_i_coherent_ops (event=0x10): counts the coherency-related operations serviced by the IRP. Suffixes: pcirdcur=0x1 (PCIRdCur), crd=0x2 (CRd), drd=0x4 (DRd), pcidcahint=0x20 (PCIDCAHint), wbmtoi=0x40 (WbMtoI), clflush=0x80 (CLFlush).
unc_i_coherent_ops.rfo (event=0x10, umask=0x8): RFO request issued by the IRP unit to the mesh with the intention of writing a partial cacheline to coherent memory. RFO is a Read For Ownership command that requests ownership of the cacheline and moves data from the mesh to the IRP cache.
unc_i_coherent_ops.pcitom (event=0x10, umask=0x10): PCIITOM request issued by the IRP unit to the mesh with the intention of writing a full cacheline to coherent memory, without an RFO. PCIITOM is a speculative Invalidate to Modified command that requests ownership of the cacheline and does not move data from the mesh to the IRP cache.
unc_i_faf_full (event=0x17): FAF RF full.
unc_i_faf_inserts (event=0x18): inbound read requests to coherent memory, received by the IRP and inserted into the Fire and Forget (FAF) queue, the queue used for processing inbound reads in the IRP.
unc_i_faf_occupancy (event=0x19): occupancy of the IRP FAF queue.
unc_i_faf_transactions (event=0x16): FAF allocation -- sent to ADQ.
unc_i_irp_all (event=0x1e): inbound_inserts=0x1 (all inserts inbound: p2p + faf + cset), outbound_inserts=0x2 (all inserts outbound: BL, AK, snoops).
unc_i_misc0 (event=0x1c), Misc Events - Set 0: fast_req=0x1 (fastpath requests), fast_rej=0x2 (fastpath rejects), 2nd_rd_insert=0x4 (cache inserts of read transactions as secondary), 2nd_wr_insert=0x8 (cache inserts of write transactions as secondary), 2nd_atomic_insert=0x10 (cache inserts of atomic transactions as secondary), fast_xfer=0x20 (fastpath transfers from primary to secondary), pf_ack_hint=0x40 (prefetch ack hints from primary to secondary), unknown=0x80.
unc_i_misc1 (event=0x1d), Misc Events - Set 1: slow_i=0x1 (slow transfer of I line; snoop took cacheline ownership before the write of the data was committed), slow_s=0x2 (slow transfer of S line; secondary received a transfer that did not have sufficient MESI state), slow_e=0x4 (slow transfer of E line; secondary received a transfer that did have sufficient MESI state), slow_m=0x8 (slow transfer of M line; snoop took cacheline ownership before the write of the data was committed), lost_fwd=0x10 (snoop pulled away ownership before a write was committed), sec_rcvd_invld=0x20 (secondary received a transfer that did not have sufficient MESI state), sec_rcvd_vld=0x40 (secondary received a transfer that did have sufficient MESI state).
unc_i_p2p_inserts (event=0x14): P2P requests from the ITC.
unc_i_p2p_occupancy (event=0x15): P2P B & S queue occupancy.
unc_i_p2p_transactions (event=0x13): rd=0x1 (P2P reads), wr=0x2 (P2P writes), msg=0x4 (P2P messages), cmpl=0x8 (P2P completions), rem=0x10 (match if remote only), rem_and_tgt_match=0x20 (match if remote and target matches), loc=0x40 (match if local only), loc_and_tgt_match=0x80 (match if local and target matches).
unc_i_snoop_resp (event=0x12), snoop responses: miss=0x1, hit_i=0x2, hit_es=0x4, hit_m=0x8, snpcode=0x10, snpdata=0x20, snpinv=0x40. Composite umasks count responses to snoops of any type (code, data, invalidate) against the IIO cache: all_miss=0x71 (miss), all_hit_i=0x72 (hit I line), all_hit_es=0x74 (hit E or S line), all_hit_m=0x78 (hit M line), all_hit=0x7e (hit M, E, S or I line).
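The IRP all_* snoop-response umasks follow the same OR-composition: 0x70 selects all three snoop types (snpcode | snpdata | snpinv) and the low bits select the cache state, so all_hit (0x7e) is the three type bits OR'd with hit_i | hit_es | hit_m. A quick check, again just restating the table's arithmetic:

    # unc_i_snoop_resp (event=0x12) umask bits from the table above.
    MISS, HIT_I, HIT_ES, HIT_M = 0x1, 0x2, 0x4, 0x8
    SNPCODE, SNPDATA, SNPINV = 0x10, 0x20, 0x40
    ALL_TYPES = SNPCODE | SNPDATA | SNPINV             # 0x70

    assert ALL_TYPES | MISS == 0x71                    # all_miss
    assert ALL_TYPES | HIT_M == 0x78                   # all_hit_m
    assert ALL_TYPES | HIT_I | HIT_ES | HIT_M == 0x7e  # all_hit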
unc_i_transactions.wr_prefuncore interconnectInbound write (fast path) requests received by the IRPevent=0x11,umask=801Inbound write (fast path) requests to coherent memory, received by the IRP resulting in write ownership requests issued by IRP to the meshunc_i_txc_ak_insertsuncore interconnectAK Egress Allocationsevent=0xb01unc_i_txc_bl_drs_cycles_fulluncore interconnectBL DRS Egress Cycles Fullevent=501unc_i_txc_bl_drs_insertsuncore interconnectBL DRS Egress Insertsevent=201unc_i_txc_bl_drs_occupancyuncore interconnectBL DRS Egress Occupancyevent=801unc_i_txc_bl_ncb_cycles_fulluncore interconnectBL NCB Egress Cycles Fullevent=601unc_i_txc_bl_ncb_insertsuncore interconnectBL NCB Egress Insertsevent=301unc_i_txc_bl_ncb_occupancyuncore interconnectBL NCB Egress Occupancyevent=901unc_i_txc_bl_ncs_cycles_fulluncore interconnectBL NCS Egress Cycles Fullevent=701unc_i_txc_bl_ncs_insertsuncore interconnectBL NCS Egress Insertsevent=401unc_i_txc_bl_ncs_occupancyuncore interconnectBL NCS Egress Occupancyevent=0xa01unc_i_txr2_ad_stall_credit_cyclesuncore interconnectNo AD Egress Credit Stallsevent=0x1a01Counts the number of times when it is not possible to issue a request to the R2PCIe because there are no AD Egress Credits availableunc_i_txr2_bl_stall_credit_cyclesuncore interconnectNo BL Egress Credit Stallsevent=0x1b01Counts the number of times when it is not possible to issue data to the R2PCIe because there are no BL Egress Credits availableunc_i_txs_data_inserts_ncbuncore interconnectOutbound Read Requestsevent=0xd01Counts the number of requests issued to the switch (towards the devices)unc_i_txs_data_inserts_ncsuncore interconnectOutbound Read Requestsevent=0xe01Counts the number of requests issued to the switch (towards the devices)unc_i_txs_request_occupancyuncore interconnectOutbound Request Queue Occupancyevent=0xc01Accumulates the number of outstanding outbound requests from the IRP to the switch (towards the devices).  This can be used in conjunction with the allocations event in order to calculate average latency of outbound requests
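As the occupancy description above notes, pairing unc_i_txs_request_occupancy with the allocation events (unc_i_txs_data_inserts_ncb/_ncs) yields the average latency of outbound requests, an application of Little's law. A minimal sketch with made-up counter values:

# Average outbound request latency (in uncore cycles) derived from an
# occupancy counter and an inserts counter; the values are made up.
occupancy_sum = 1_200_000  # unc_i_txs_request_occupancy
inserts = 150_000          # unc_i_txs_data_inserts_ncb + _ncs
avg_latency_cycles = occupancy_sum / inserts
print(f"average outbound request latency: {avg_latency_cycles:.1f} cycles")
# -> average outbound request latency: 8.0 cycles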
uncore_m2munc_m2m_ag0_ad_crd_acquired.tgr0uncore interconnectCMS Agent0 AD Credits Acquired; For Transgress 0event=0x80,umask=101Number of CMS Agent 0 AD credits acquired in a given cycle, per transgressunc_m2m_ag0_ad_crd_acquired.tgr1uncore interconnectCMS Agent0 AD Credits Acquired; For Transgress 1event=0x80,umask=201Number of CMS Agent 0 AD credits acquired in a given cycle, per transgressunc_m2m_ag0_ad_crd_acquired.tgr2uncore interconnectCMS Agent0 AD Credits Acquired; For Transgress 2event=0x80,umask=401Number of CMS Agent 0 AD credits acquired in a given cycle, per transgressunc_m2m_ag0_ad_crd_acquired.tgr3uncore interconnectCMS Agent0 AD Credits Acquired; For Transgress 3event=0x80,umask=801Number of CMS Agent 0 AD credits acquired in a given cycle, per transgressunc_m2m_ag0_ad_crd_acquired.tgr4uncore interconnectCMS Agent0 AD Credits Acquired; For Transgress 4event=0x80,umask=0x1001Number of CMS Agent 0 AD credits acquired in a given cycle, per transgressunc_m2m_ag0_ad_crd_acquired.tgr5uncore interconnectCMS Agent0 AD Credits Acquired; For Transgress 5event=0x80,umask=0x2001Number of CMS Agent 0 AD credits acquired in a given cycle, per transgressunc_m2m_ag0_ad_crd_occupancy.tgr0uncore interconnectCMS Agent0 AD Credits Occupancy; For Transgress 0event=0x82,umask=101Number of CMS Agent 0 AD credits in use in a given cycle, per transgressunc_m2m_ag0_ad_crd_occupancy.tgr1uncore interconnectCMS Agent0 AD Credits Occupancy; For Transgress 1event=0x82,umask=201Number of CMS Agent 0 AD credits in use in a given cycle, per transgressunc_m2m_ag0_ad_crd_occupancy.tgr2uncore interconnectCMS Agent0 AD Credits Occupancy; For Transgress 2event=0x82,umask=401Number of CMS Agent 0 AD credits in use in a given cycle, per transgressunc_m2m_ag0_ad_crd_occupancy.tgr3uncore interconnectCMS Agent0 AD Credits Occupancy; For Transgress 3event=0x82,umask=801Number of CMS Agent 0 AD credits in use in a given cycle, per transgressunc_m2m_ag0_ad_crd_occupancy.tgr4uncore interconnectCMS Agent0 AD Credits Occupancy; For Transgress 4event=0x82,umask=0x1001Number of CMS Agent 0 AD credits in use in a given cycle, per transgressunc_m2m_ag0_ad_crd_occupancy.tgr5uncore interconnectCMS Agent0 AD Credits Occupancy; For Transgress 5event=0x82,umask=0x2001Number of CMS Agent 0 AD credits in use in a given cycle, per transgressunc_m2m_ag0_bl_crd_acquired.tgr0uncore interconnectCMS Agent0 BL Credits Acquired; For Transgress 0event=0x88,umask=101Number of CMS Agent 0 BL credits acquired in a given cycle, per transgressunc_m2m_ag0_bl_crd_acquired.tgr1uncore interconnectCMS Agent0 BL Credits Acquired; For Transgress 1event=0x88,umask=201Number of CMS Agent 0 BL credits acquired in a given cycle, per transgressunc_m2m_ag0_bl_crd_acquired.tgr2uncore interconnectCMS Agent0 BL Credits Acquired; For Transgress 2event=0x88,umask=401Number of CMS Agent 0 BL credits acquired in a given cycle, per transgressunc_m2m_ag0_bl_crd_acquired.tgr3uncore interconnectCMS Agent0 BL Credits Acquired; For Transgress 3event=0x88,umask=801Number of CMS Agent 0 BL credits acquired in a given cycle, per transgressunc_m2m_ag0_bl_crd_acquired.tgr4uncore interconnectCMS Agent0 BL Credits Acquired; For Transgress 4event=0x88,umask=0x1001Number of CMS Agent 0 BL credits acquired in a given cycle, per transgressunc_m2m_ag0_bl_crd_acquired.tgr5uncore interconnectCMS Agent0 BL Credits Acquired; For Transgress 5event=0x88,umask=0x2001Number of 
CMS Agent 0 BL credits acquired in a given cycle, per transgressunc_m2m_ag0_bl_crd_occupancy.tgr0uncore interconnectCMS Agent0 BL Credits Occupancy; For Transgress 0event=0x8a,umask=101Number of CMS Agent 0 BL credits in use in a given cycle, per transgressunc_m2m_ag0_bl_crd_occupancy.tgr1uncore interconnectCMS Agent0 BL Credits Occupancy; For Transgress 1event=0x8a,umask=201Number of CMS Agent 0 BL credits in use in a given cycle, per transgressunc_m2m_ag0_bl_crd_occupancy.tgr2uncore interconnectCMS Agent0 BL Credits Occupancy; For Transgress 2event=0x8a,umask=401Number of CMS Agent 0 BL credits in use in a given cycle, per transgressunc_m2m_ag0_bl_crd_occupancy.tgr3uncore interconnectCMS Agent0 BL Credits Occupancy; For Transgress 3event=0x8a,umask=801Number of CMS Agent 0 BL credits in use in a given cycle, per transgressunc_m2m_ag0_bl_crd_occupancy.tgr4uncore interconnectCMS Agent0 BL Credits Occupancy; For Transgress 4event=0x8a,umask=0x1001Number of CMS Agent 0 BL credits in use in a given cycle, per transgressunc_m2m_ag0_bl_crd_occupancy.tgr5uncore interconnectCMS Agent0 BL Credits Occupancy; For Transgress 5event=0x8a,umask=0x2001Number of CMS Agent 0 BL credits in use in a given cycle, per transgressunc_m2m_ag1_ad_crd_acquired.tgr0uncore interconnectCMS Agent1 AD Credits Acquired; For Transgress 0event=0x84,umask=101Number of CMS Agent 1 AD credits acquired in a given cycle, per transgressunc_m2m_ag1_ad_crd_acquired.tgr1uncore interconnectCMS Agent1 AD Credits Acquired; For Transgress 1event=0x84,umask=201Number of CMS Agent 1 AD credits acquired in a given cycle, per transgressunc_m2m_ag1_ad_crd_acquired.tgr2uncore interconnectCMS Agent1 AD Credits Acquired; For Transgress 2event=0x84,umask=401Number of CMS Agent 1 AD credits acquired in a given cycle, per transgressunc_m2m_ag1_ad_crd_acquired.tgr3uncore interconnectCMS Agent1 AD Credits Acquired; For Transgress 3event=0x84,umask=801Number of CMS Agent 1 AD credits acquired in a given cycle, per transgressunc_m2m_ag1_ad_crd_acquired.tgr4uncore interconnectCMS Agent1 AD Credits Acquired; For Transgress 4event=0x84,umask=0x1001Number of CMS Agent 1 AD credits acquired in a given cycle, per transgressunc_m2m_ag1_ad_crd_acquired.tgr5uncore interconnectCMS Agent1 AD Credits Acquired; For Transgress 5event=0x84,umask=0x2001Number of CMS Agent 1 AD credits acquired in a given cycle, per transgressunc_m2m_ag1_ad_crd_occupancy.tgr0uncore interconnectCMS Agent1 AD Credits Occupancy; For Transgress 0event=0x86,umask=101Number of CMS Agent 1 AD credits in use in a given cycle, per transgressunc_m2m_ag1_ad_crd_occupancy.tgr1uncore interconnectCMS Agent1 AD Credits Occupancy; For Transgress 1event=0x86,umask=201Number of CMS Agent 1 AD credits in use in a given cycle, per transgressunc_m2m_ag1_ad_crd_occupancy.tgr2uncore interconnectCMS Agent1 AD Credits Occupancy; For Transgress 2event=0x86,umask=401Number of CMS Agent 1 AD credits in use in a given cycle, per transgressunc_m2m_ag1_ad_crd_occupancy.tgr3uncore interconnectCMS Agent1 AD Credits Occupancy; For Transgress 3event=0x86,umask=801Number of CMS Agent 1 AD credits in use in a given cycle, per transgressunc_m2m_ag1_ad_crd_occupancy.tgr4uncore interconnectCMS Agent1 AD Credits Occupancy; For Transgress 4event=0x86,umask=0x1001Number of CMS Agent 1 AD credits in use in a given cycle, per transgressunc_m2m_ag1_ad_crd_occupancy.tgr5uncore interconnectCMS Agent1 AD Credits Occupancy; For Transgress 5event=0x86,umask=0x2001Number of CMS Agent 1 AD credits in use in a given cycle, per 
transgressunc_m2m_ag1_bl_crd_occupancy.tgr0uncore interconnectCMS Agent1 BL Credits Occupancy; For Transgress 0event=0x8e,umask=101Number of CMS Agent 1 BL credits in use in a given cycle, per transgressunc_m2m_ag1_bl_crd_occupancy.tgr1uncore interconnectCMS Agent1 BL Credits Occupancy; For Transgress 1event=0x8e,umask=201Number of CMS Agent 1 BL credits in use in a given cycle, per transgressunc_m2m_ag1_bl_crd_occupancy.tgr2uncore interconnectCMS Agent1 BL Credits Occupancy; For Transgress 2event=0x8e,umask=401Number of CMS Agent 1 BL credits in use in a given cycle, per transgressunc_m2m_ag1_bl_crd_occupancy.tgr3uncore interconnectCMS Agent1 BL Credits Occupancy; For Transgress 3event=0x8e,umask=801Number of CMS Agent 1 BL credits in use in a given cycle, per transgressunc_m2m_ag1_bl_crd_occupancy.tgr4uncore interconnectCMS Agent1 BL Credits Occupancy; For Transgress 4event=0x8e,umask=0x1001Number of CMS Agent 1 BL credits in use in a given cycle, per transgressunc_m2m_ag1_bl_crd_occupancy.tgr5uncore interconnectCMS Agent1 BL Credits Occupancy; For Transgress 5event=0x8e,umask=0x2001Number of CMS Agent 1 BL credits in use in a given cycle, per transgressunc_m2m_ag1_bl_credits_acquired.tgr0uncore interconnectCMS Agent1 BL Credits Acquired; For Transgress 0event=0x8c,umask=101Number of CMS Agent 1 BL credits acquired in a given cycle, per transgressunc_m2m_ag1_bl_credits_acquired.tgr1uncore interconnectCMS Agent1 BL Credits Acquired; For Transgress 1event=0x8c,umask=201Number of CMS Agent 1 BL credits acquired in a given cycle, per transgressunc_m2m_ag1_bl_credits_acquired.tgr2uncore interconnectCMS Agent1 BL Credits Acquired; For Transgress 2event=0x8c,umask=401Number of CMS Agent 1 BL credits acquired in a given cycle, per transgressunc_m2m_ag1_bl_credits_acquired.tgr3uncore interconnectCMS Agent1 BL Credits Acquired; For Transgress 3event=0x8c,umask=801Number of CMS Agent 1 BL credits acquired in a given cycle, per transgressunc_m2m_ag1_bl_credits_acquired.tgr4uncore interconnectCMS Agent1 BL Credits Acquired; For Transgress 4event=0x8c,umask=0x1001Number of CMS Agent 1 BL credits acquired in a given cycle, per transgressunc_m2m_ag1_bl_credits_acquired.tgr5uncore interconnectCMS Agent1 BL Credits Acquired; For Transgress 5event=0x8c,umask=0x2001Number of CMS Agent 1 BL credits acquired in a given cycle, per transgressunc_m2m_bypass_m2m_egress.not_takenuncore interconnectTraffic in which the M2M to iMC Bypass was not takenevent=0x22,umask=201Counts traffic in which the M2M (Mesh to Memory) to iMC (Memory Controller) bypass was not takenunc_m2m_bypass_m2m_egress.takenuncore interconnectM2M to iMC Bypass; Takenevent=0x22,umask=101unc_m2m_bypass_m2m_ingress.not_takenuncore interconnectM2M to iMC Bypass; Not Takenevent=0x21,umask=201unc_m2m_bypass_m2m_ingress.takenuncore interconnectM2M to iMC Bypass; Takenevent=0x21,umask=101unc_m2m_clockticksuncore interconnectCycles - at UCLKevent=001unc_m2m_cms_clockticksuncore interconnectCMS Clockticksevent=0xc001unc_m2m_direct2core_not_taken_dirstateuncore interconnectCycles when direct to core mode (which bypasses the CHA) was disabledevent=0x2401Counts cycles when direct to core mode (which bypasses the CHA) was disabledunc_m2m_direct2core_takenuncore interconnectMessages sent direct to core (bypassing the CHA)event=0x2301Counts when messages were sent direct to core (bypassing the CHA)unc_m2m_direct2core_txn_overrideuncore interconnectNumber of reads in which direct to core transaction were overriddenevent=0x2501Counts reads in which direct to core 
transactions (which would have bypassed the CHA) were overriddenunc_m2m_direct2upi_not_taken_creditsuncore interconnectNumber of reads in which direct to Intel(R) UPI transactions were overriddenevent=0x2801Counts reads in which direct to Intel(R) Ultra Path Interconnect (UPI) transactions (which would have bypassed the CHA) were overriddenunc_m2m_direct2upi_not_taken_dirstateuncore interconnectCycles when direct to Intel(R) UPI was disabledevent=0x2701Counts cycles when the ability to send messages direct to the Intel(R) Ultra Path Interconnect (bypassing the CHA) was disabledunc_m2m_direct2upi_takenuncore interconnectMessages sent direct to the Intel(R) UPIevent=0x2601Counts when messages were sent direct to the Intel(R) Ultra Path Interconnect (bypassing the CHA)unc_m2m_direct2upi_txn_overrideuncore interconnectNumber of reads in which a message sent direct to Intel(R) UPI was overriddenevent=0x2901Counts when a read message that was sent direct to the Intel(R) Ultra Path Interconnect (bypassing the CHA) was overriddenunc_m2m_directory_hit.clean_auncore interconnectDirectory Hit; On NonDirty Line in A Stateevent=0x2a,umask=0x8001unc_m2m_directory_hit.clean_iuncore interconnectDirectory Hit; On NonDirty Line in I Stateevent=0x2a,umask=0x1001unc_m2m_directory_hit.clean_puncore interconnectDirectory Hit; On NonDirty Line in L Stateevent=0x2a,umask=0x4001unc_m2m_directory_hit.clean_suncore interconnectDirectory Hit; On NonDirty Line in S Stateevent=0x2a,umask=0x2001unc_m2m_directory_hit.dirty_auncore interconnectDirectory Hit; On Dirty Line in A Stateevent=0x2a,umask=801unc_m2m_directory_hit.dirty_iuncore interconnectDirectory Hit; On Dirty Line in I Stateevent=0x2a,umask=101unc_m2m_directory_hit.dirty_puncore interconnectDirectory Hit; On Dirty Line in L Stateevent=0x2a,umask=401unc_m2m_directory_hit.dirty_suncore interconnectDirectory Hit; On Dirty Line in S Stateevent=0x2a,umask=201unc_m2m_directory_lookup.anyuncore interconnectMulti-socket cacheline Directory lookups (any state found)event=0x2d,umask=101Counts when the M2M (Mesh to Memory) looks into the multi-socket cacheline Directory state, and found the cacheline marked in Any State (A, I, S or unused)unc_m2m_directory_lookup.state_auncore interconnectMulti-socket cacheline Directory lookups (cacheline found in A state)event=0x2d,umask=801Counts when the M2M (Mesh to Memory) looks into the multi-socket cacheline Directory state, and found the cacheline marked in the A (SnoopAll) state, indicating the cacheline is stored in another socket in any state, and we must snoop the other sockets to make sure we get the latest data.  The data may be stored in any state in the local socketunc_m2m_directory_lookup.state_iuncore interconnectMulti-socket cacheline Directory lookup (cacheline found in I state)event=0x2d,umask=201Counts when the M2M (Mesh to Memory) looks into the multi-socket cacheline Directory state, and found the cacheline marked in the I (Invalid) state indicating the cacheline is not stored in another socket, and so there is no need to snoop the other sockets for the latest data.  The data may be stored in any state in the local socketunc_m2m_directory_lookup.state_suncore interconnectMulti-socket cacheline Directory lookup (cacheline found in S state)event=0x2d,umask=401Counts when the M2M (Mesh to Memory) looks into the multi-socket cacheline Directory state, and found the cacheline marked in the S (Shared) state indicating the cacheline is stored in another socket in the S(hared) state, and so there is no need to snoop the other sockets for the latest data.  The data may be stored in any state in the local socket
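The directory-lookup events above split lookups by the state found, so one useful derived ratio is the share of lookups that found the A (SnoopAll) state and therefore had to snoop the other sockets. A small sketch, with made-up counter values:

# Share of multi-socket directory lookups that forced cross-socket
# snoops (A state found), per the lookup descriptions above.
# The counter values below are made up for illustration.
lookups_any = 2_000_000    # unc_m2m_directory_lookup.any
lookups_state_a = 150_000  # unc_m2m_directory_lookup.state_a
snoop_share = lookups_state_a / lookups_any
print(f"{snoop_share:.1%} of lookups required cross-socket snoops")
# -> 7.5% of lookups required cross-socket snoops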
unc_m2m_directory_miss.clean_auncore interconnectDirectory Miss; On NonDirty Line in A Stateevent=0x2b,umask=0x8001unc_m2m_directory_miss.clean_iuncore interconnectDirectory Miss; On NonDirty Line in I Stateevent=0x2b,umask=0x1001unc_m2m_directory_miss.clean_puncore interconnectDirectory Miss; On NonDirty Line in L Stateevent=0x2b,umask=0x4001unc_m2m_directory_miss.clean_suncore interconnectDirectory Miss; On NonDirty Line in S Stateevent=0x2b,umask=0x2001unc_m2m_directory_miss.dirty_auncore interconnectDirectory Miss; On Dirty Line in A Stateevent=0x2b,umask=801unc_m2m_directory_miss.dirty_iuncore interconnectDirectory Miss; On Dirty Line in I Stateevent=0x2b,umask=101unc_m2m_directory_miss.dirty_puncore interconnectDirectory Miss; On Dirty Line in L Stateevent=0x2b,umask=401unc_m2m_directory_miss.dirty_suncore interconnectDirectory Miss; On Dirty Line in S Stateevent=0x2b,umask=201unc_m2m_directory_update.a2iuncore interconnectMulti-socket cacheline Directory update from A to Ievent=0x2e,umask=0x2001Counts when the M2M (Mesh to Memory) updates the multi-socket cacheline Directory state from A (SnoopAll) to I (Invalid)unc_m2m_directory_update.a2suncore interconnectMulti-socket cacheline Directory update from A to Sevent=0x2e,umask=0x4001Counts when the M2M (Mesh to Memory) updates the multi-socket cacheline Directory state from A (SnoopAll) to S (Shared)unc_m2m_directory_update.anyuncore interconnectMulti-socket cacheline Directory update from/to Any stateevent=0x2e,umask=101Counts when the M2M (Mesh to Memory) updates the multi-socket cacheline Directory to a new stateunc_m2m_directory_update.i2auncore interconnectMulti-socket cacheline Directory update from I to Aevent=0x2e,umask=401Counts when the M2M (Mesh to Memory) updates the multi-socket cacheline Directory state from I (Invalid) to A (SnoopAll)unc_m2m_directory_update.i2suncore interconnectMulti-socket cacheline Directory update from I to Sevent=0x2e,umask=201Counts when the M2M (Mesh to Memory) updates the multi-socket cacheline Directory state from I (Invalid) to S (Shared)unc_m2m_directory_update.s2auncore interconnectMulti-socket cacheline Directory update from S to Aevent=0x2e,umask=0x1001Counts when the M2M (Mesh to Memory) updates the multi-socket cacheline Directory state from S (Shared) to A (SnoopAll)unc_m2m_directory_update.s2iuncore interconnectMulti-socket cacheline Directory update from S to Ievent=0x2e,umask=801Counts when the M2M (Mesh to Memory) updates the multi-socket cacheline Directory state from S (Shared) to I (Invalid)unc_m2m_egress_ordering.iv_snoopgo_dnuncore interconnectEgress Blocking due to Ordering requirements; Downevent=0xae,umask=401Counts number of cycles IV was blocked in the TGR Egress due to SNP/GO Ordering requirementsunc_m2m_egress_ordering.iv_snoopgo_upuncore interconnectEgress Blocking due to Ordering requirements; Upevent=0xae,umask=101Counts number of cycles IV was blocked in the TGR Egress due to SNP/GO Ordering 
requirementsunc_m2m_fast_asserted.horzuncore interconnectFaST wire asserted; Horizontalevent=0xa5,umask=201Counts the number of cycles either the local or incoming distress signals are asserted.  Incoming distress includes up, dn and acrossunc_m2m_fast_asserted.vertuncore interconnectFaST wire asserted; Verticalevent=0xa5,umask=101Counts the number of cycles either the local or incoming distress signals are asserted.  Incoming distress includes up, dn and acrossunc_m2m_horz_ring_ad_in_use.left_evenuncore interconnectHorizontal AD Ring In Use; Left and Evenevent=0xa7,umask=101Counts the number of cycles that the Horizontal AD ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.  We really have two rings -- a clockwise ring and a counter-clockwise ring.  On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring.  On the right side of the ring, this is reversed.  The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring.  In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ringunc_m2m_horz_ring_ad_in_use.left_odduncore interconnectHorizontal AD Ring In Use; Left and Oddevent=0xa7,umask=201Counts the number of cycles that the Horizontal AD ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.  We really have two rings -- a clockwise ring and a counter-clockwise ring.  On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring.  On the right side of the ring, this is reversed.  The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring.  In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ringunc_m2m_horz_ring_ad_in_use.right_evenuncore interconnectHorizontal AD Ring In Use; Right and Evenevent=0xa7,umask=401Counts the number of cycles that the Horizontal AD ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.  We really have two rings -- a clockwise ring and a counter-clockwise ring.  On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring.  On the right side of the ring, this is reversed.  The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring.  In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ringunc_m2m_horz_ring_ad_in_use.right_odduncore interconnectHorizontal AD Ring In Use; Right and Oddevent=0xa7,umask=801Counts the number of cycles that the Horizontal AD ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.  We really have two rings -- a clockwise ring and a counter-clockwise ring.  On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring.  
On the right side of the ring, this is reversed.  The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring.  In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ringunc_m2m_horz_ring_ak_in_use.left_evenuncore interconnectHorizontal AK Ring In Use; Left and Evenevent=0xa9,umask=101Counts the number of cycles that the Horizontal AK ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring.  On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring.  On the right side of the ring, this is reversed.  The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring.  In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ringunc_m2m_horz_ring_ak_in_use.left_odduncore interconnectHorizontal AK Ring In Use; Left and Oddevent=0xa9,umask=201Counts the number of cycles that the Horizontal AK ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring.  On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring.  On the right side of the ring, this is reversed.  The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring.  In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ringunc_m2m_horz_ring_ak_in_use.right_evenuncore interconnectHorizontal AK Ring In Use; Right and Evenevent=0xa9,umask=401Counts the number of cycles that the Horizontal AK ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring.  On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring.  On the right side of the ring, this is reversed.  The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring.  In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ringunc_m2m_horz_ring_ak_in_use.right_odduncore interconnectHorizontal AK Ring In Use; Right and Oddevent=0xa9,umask=801Counts the number of cycles that the Horizontal AK ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring.  On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring.  On the right side of the ring, this is reversed.  The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring.  
In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ringunc_m2m_horz_ring_bl_in_use.left_evenuncore interconnectHorizontal BL Ring in Use; Left and Evenevent=0xab,umask=101Counts the number of cycles that the Horizontal BL ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from  the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring.  On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring.  On the right side of the ring, this is reversed.  The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring.  In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ringunc_m2m_horz_ring_bl_in_use.left_odduncore interconnectHorizontal BL Ring in Use; Left and Oddevent=0xab,umask=201Counts the number of cycles that the Horizontal BL ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from  the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring.  On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring.  On the right side of the ring, this is reversed.  The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring.  In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ringunc_m2m_horz_ring_bl_in_use.right_evenuncore interconnectHorizontal BL Ring in Use; Right and Evenevent=0xab,umask=401Counts the number of cycles that the Horizontal BL ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from  the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring.  On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring.  On the right side of the ring, this is reversed.  The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring.  In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ringunc_m2m_horz_ring_bl_in_use.right_odduncore interconnectHorizontal BL Ring in Use; Right and Oddevent=0xab,umask=801Counts the number of cycles that the Horizontal BL ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from  the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring.  On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring.  On the right side of the ring, this is reversed.  The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring.  
In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ringunc_m2m_horz_ring_iv_in_use.leftuncore interconnectHorizontal IV Ring in Use; Leftevent=0xad,umask=101Counts the number of cycles that the Horizontal IV ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.  There is only 1 IV ring.  Therefore, if one wants to monitor the Even ring, they should select both UP_EVEN and DN_EVEN.  To monitor the Odd ring, they should select both UP_ODD and DN_ODDunc_m2m_horz_ring_iv_in_use.rightuncore interconnectHorizontal IV Ring in Use; Rightevent=0xad,umask=401Counts the number of cycles that the Horizontal IV ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.  There is only 1 IV ring.  Therefore, if one wants to monitor the Even ring, they should select both UP_EVEN and DN_EVEN.  To monitor the Odd ring, they should select both UP_ODD and DN_ODDunc_m2m_imc_reads.alluncore interconnectReads to iMC issuedevent=0x37,umask=401Counts when the M2M (Mesh to Memory) issues reads to the iMC (Memory Controller)unc_m2m_imc_reads.from_transgressuncore interconnectM2M Reads Issued to iMC; All, regardless of priorityevent=0x37,umask=0x1001unc_m2m_imc_reads.isochuncore interconnectM2M Reads Issued to iMC; Critical Priorityevent=0x37,umask=201unc_m2m_imc_reads.normaluncore interconnectReads to iMC issued at Normal Priority (Non-Isochronous)event=0x37,umask=101Counts when the M2M (Mesh to Memory) issues reads to the iMC (Memory Controller).  It only counts  normal priority non-isochronous readsunc_m2m_imc_reads.to_pmmuncore interconnectRead requests to Intel(R) Optane(TM) DC persistent memory issued to the iMC from M2Mevent=0x37,umask=801M2M Reads Issued to iMC; All, regardless of priorityunc_m2m_imc_writes.alluncore interconnectWrites to iMC issuedevent=0x38,umask=0x1001Counts when the M2M (Mesh to Memory) issues writes to the iMC (Memory Controller)unc_m2m_imc_writes.from_transgressuncore interconnectM2M Writes Issued to iMC; All, regardless of priorityevent=0x38,umask=0x4001unc_m2m_imc_writes.fulluncore interconnectM2M Writes Issued to iMC; Full Line Non-ISOCHevent=0x38,umask=101unc_m2m_imc_writes.full_isochuncore interconnectM2M Writes Issued to iMC; ISOCH Full Lineevent=0x38,umask=401unc_m2m_imc_writes.niuncore interconnectM2M Writes Issued to iMC; All, regardless of priorityevent=0x38,umask=0x8001unc_m2m_imc_writes.partialuncore interconnectPartial Non-Isochronous writes to the iMCevent=0x38,umask=201Counts when the M2M (Mesh to Memory) issues partial writes to the iMC (Memory Controller).  
It only counts normal priority non-isochronous writesunc_m2m_imc_writes.partial_isochuncore interconnectM2M Writes Issued to iMC; ISOCH Partialevent=0x38,umask=801unc_m2m_imc_writes.to_pmmuncore interconnectWrite requests to Intel(R) Optane(TM) DC persistent memory issued to the iMC from M2Mevent=0x38,umask=0x2001M2M Writes Issued to iMC; All, regardless of priorityunc_m2m_pkt_match.mcuncore interconnectNumber of Packet Header Matches; MC Matchevent=0x4c,umask=201unc_m2m_pkt_match.meshuncore interconnectNumber of Packet Header Matches; Mesh Matchevent=0x4c,umask=101unc_m2m_pmm_rpq_cycles_reg_credits.chn0uncore interconnectM2M->iMC RPQ Cycles w/Credits - Regular; Channel 0event=0x4f,umask=101unc_m2m_pmm_rpq_cycles_reg_credits.chn1uncore interconnectM2M->iMC RPQ Cycles w/Credits - Regular; Channel 1event=0x4f,umask=201unc_m2m_pmm_rpq_cycles_reg_credits.chn2uncore interconnectM2M->iMC RPQ Cycles w/Credits - Regular; Channel 2event=0x4f,umask=401unc_m2m_pmm_wpq_cycles_reg_credits.chn0uncore interconnectM2M->iMC WPQ Cycles w/Credits - Regular; Channel 0event=0x51,umask=101unc_m2m_pmm_wpq_cycles_reg_credits.chn1uncore interconnectM2M->iMC WPQ Cycles w/Credits - Regular; Channel 1event=0x51,umask=201unc_m2m_pmm_wpq_cycles_reg_credits.chn2uncore interconnectM2M->iMC WPQ Cycles w/Credits - Regular; Channel 2event=0x51,umask=401unc_m2m_prefcam_cycles_fulluncore interconnectPrefetch CAM Cycles Fullevent=0x5301unc_m2m_prefcam_cycles_neuncore interconnectPrefetch CAM Cycles Not Emptyevent=0x5401unc_m2m_prefcam_demand_promotionsuncore interconnectPrefetch requests that got turned into a demand requestevent=0x5601Counts when the M2M (Mesh to Memory) promotes an outstanding request in the prefetch queue due to a subsequent demand read request that entered the M2M with the same address.  Explanatory Side Note: The Prefetch queue is made of CAM (Content Addressable Memory)unc_m2m_prefcam_insertsuncore interconnectInserts into the Memory Controller Prefetch Queueevent=0x5701Counts when the M2M (Mesh to Memory) receives a prefetch request and inserts it into its outstanding prefetch queue.  Explanatory Side Note: the prefetch queue is made from CAM: Content Addressable Memoryunc_m2m_prefcam_occupancyuncore interconnectPrefetch CAM Occupancyevent=0x5501
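The promotion behaviour described above, where a demand read matching the address of an outstanding prefetch upgrades that prefetch, amounts to an address-keyed (content-addressable) lookup. A toy model of that mechanism; the class and method names are invented for illustration and do not correspond to anything in the event table:

# Toy model of the prefetch CAM promotion described above: the queue is
# content-addressable by address, and a demand read that hits an
# outstanding prefetch promotes it (counted by demand_promotions).
class PrefetchCAM:
    def __init__(self):
        self.entries = {}  # address -> already promoted to demand?

    def insert_prefetch(self, addr):
        self.entries.setdefault(addr, False)

    def demand_read(self, addr):
        # Promote a matching, not-yet-promoted prefetch entry.
        if addr in self.entries and not self.entries[addr]:
            self.entries[addr] = True
            return True   # one promotion event
        return False

cam = PrefetchCAM()
cam.insert_prefetch(0x1000)
assert cam.demand_read(0x1000) is True   # promotion
assert cam.demand_read(0x2000) is False  # no outstanding prefetch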
unc_m2m_ring_bounces_horz.aduncore interconnectMessages that bounced on the Horizontal Ring.; ADevent=0xa1,umask=101Number of cycles incoming messages from the Horizontal ring that were bounced, by ring typeunc_m2m_ring_bounces_horz.akuncore interconnectMessages that bounced on the Horizontal Ring.; AKevent=0xa1,umask=201Number of cycles incoming messages from the Horizontal ring that were bounced, by ring typeunc_m2m_ring_bounces_horz.bluncore interconnectMessages that bounced on the Horizontal Ring.; BLevent=0xa1,umask=401Number of cycles incoming messages from the Horizontal ring that were bounced, by ring typeunc_m2m_ring_bounces_horz.ivuncore interconnectMessages that bounced on the Horizontal Ring.; IVevent=0xa1,umask=801Number of cycles incoming messages from the Horizontal ring that were bounced, by ring typeunc_m2m_ring_bounces_vert.aduncore interconnectMessages that bounced on the Vertical Ring.; ADevent=0xa0,umask=101Number of cycles incoming messages from the Vertical ring that were bounced, by ring typeunc_m2m_ring_bounces_vert.akuncore interconnectMessages that bounced on the Vertical Ring.; Acknowledgements to coreevent=0xa0,umask=201Number of cycles incoming messages from the Vertical ring that were bounced, by ring typeunc_m2m_ring_bounces_vert.bluncore interconnectMessages that bounced on the Vertical Ring.; Data Responses to coreevent=0xa0,umask=401Number of cycles incoming messages from the Vertical ring that were bounced, by ring typeunc_m2m_ring_bounces_vert.ivuncore interconnectMessages that bounced on the Vertical Ring.; Snoops of processor's cacheevent=0xa0,umask=801Number of cycles incoming messages from the Vertical ring that were bounced, by ring typeunc_m2m_ring_sink_starved_horz.aduncore interconnectSink Starvation on Horizontal Ring; ADevent=0xa3,umask=101unc_m2m_ring_sink_starved_horz.akuncore interconnectSink Starvation on Horizontal Ring; AKevent=0xa3,umask=201unc_m2m_ring_sink_starved_horz.ak_ag1uncore interconnectSink Starvation on Horizontal Ring; Acknowledgements to Agent 1event=0xa3,umask=0x2001unc_m2m_ring_sink_starved_horz.bluncore interconnectSink Starvation on Horizontal Ring; BLevent=0xa3,umask=401unc_m2m_ring_sink_starved_horz.ivuncore interconnectSink Starvation on Horizontal Ring; IVevent=0xa3,umask=801unc_m2m_ring_sink_starved_vert.aduncore interconnectSink Starvation on Vertical Ring; ADevent=0xa2,umask=101unc_m2m_ring_sink_starved_vert.akuncore interconnectSink Starvation on Vertical Ring; Acknowledgements to coreevent=0xa2,umask=201unc_m2m_ring_sink_starved_vert.bluncore interconnectSink Starvation on Vertical Ring; Data Responses to coreevent=0xa2,umask=401unc_m2m_ring_sink_starved_vert.ivuncore interconnectSink Starvation on Vertical Ring; Snoops of processor's cacheevent=0xa2,umask=801unc_m2m_ring_src_thrtluncore interconnectSource Throttleevent=0xa401unc_m2m_rpq_cycles_no_spec_credits.chn0uncore interconnectThis event is deprecated. Refer to new event UNC_M2M_RPQ_CYCLES_SPEC_CREDITS.CHN0event=0x44,umask=111unc_m2m_rpq_cycles_no_spec_credits.chn1uncore interconnectThis event is deprecated. Refer to new event UNC_M2M_RPQ_CYCLES_SPEC_CREDITS.CHN1event=0x44,umask=211unc_m2m_rpq_cycles_no_spec_credits.chn2uncore interconnectThis event is deprecated. 
Refer to new event UNC_M2M_RPQ_CYCLES_SPEC_CREDITS.CHN2event=0x44,umask=411unc_m2m_rpq_cycles_reg_credits.chn0uncore interconnectM2M to iMC RPQ Cycles w/Credits - Regular; Channel 0event=0x43,umask=101unc_m2m_rpq_cycles_reg_credits.chn1uncore interconnectM2M to iMC RPQ Cycles w/Credits - Regular; Channel 1event=0x43,umask=201unc_m2m_rpq_cycles_reg_credits.chn2uncore interconnectM2M to iMC RPQ Cycles w/Credits - Regular; Channel 2event=0x43,umask=401unc_m2m_rpq_cycles_spec_credits.chn0uncore interconnectM2M to iMC RPQ Cycles w/Credits - Special; Channel 0event=0x44,umask=101unc_m2m_rpq_cycles_spec_credits.chn1uncore interconnectM2M to iMC RPQ Cycles w/Credits - Special; Channel 1event=0x44,umask=201unc_m2m_rpq_cycles_spec_credits.chn2uncore interconnectM2M to iMC RPQ Cycles w/Credits - Special; Channel 2event=0x44,umask=401unc_m2m_rxc_ad_cycles_fulluncore interconnectAD Ingress (from CMS) Fullevent=401unc_m2m_rxc_ad_cycles_neuncore interconnectAD Ingress (from CMS) Not Emptyevent=301unc_m2m_rxc_ad_insertsuncore interconnectAD Ingress (from CMS) Queue Insertsevent=101Counts when a new entry is Received(RxC) and then added to the AD (Address Ring) Ingress Queue from the CMS (Common Mesh Stop).  This is generally used for readsunc_m2m_rxc_ad_occupancyuncore interconnectAD Ingress (from CMS) Occupancyevent=201unc_m2m_rxc_bl_cycles_fulluncore interconnectBL Ingress (from CMS) Fullevent=801unc_m2m_rxc_bl_cycles_neuncore interconnectBL Ingress (from CMS) Not Emptyevent=701unc_m2m_rxc_bl_insertsuncore interconnectBL Ingress (from CMS) Allocationsevent=501unc_m2m_rxc_bl_occupancyuncore interconnectBL Ingress (from CMS) Occupancyevent=601unc_m2m_rxr_busy_starved.ad_bncuncore interconnectTransgress Injection Starvation; AD - Bounceevent=0xb4,umask=101Counts cycles under injection starvation mode.  This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time.  In this case, because a message from the other queue has higher priorityunc_m2m_rxr_busy_starved.ad_crduncore interconnectTransgress Injection Starvation; AD - Creditevent=0xb4,umask=0x1001Counts cycles under injection starvation mode.  This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time.  In this case, because a message from the other queue has higher priorityunc_m2m_rxr_busy_starved.bl_bncuncore interconnectTransgress Injection Starvation; BL - Bounceevent=0xb4,umask=401Counts cycles under injection starvation mode.  This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time.  In this case, because a message from the other queue has higher priorityunc_m2m_rxr_busy_starved.bl_crduncore interconnectTransgress Injection Starvation; BL - Creditevent=0xb4,umask=0x4001Counts cycles under injection starvation mode.  This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time.  
In this case, because a message from the other queue has higher priorityunc_m2m_rxr_bypass.ad_bncuncore interconnectTransgress Ingress Bypass; AD - Bounceevent=0xb2,umask=101Number of packets bypassing the CMS Ingressunc_m2m_rxr_bypass.ad_crduncore interconnectTransgress Ingress Bypass; AD - Creditevent=0xb2,umask=0x1001Number of packets bypassing the CMS Ingressunc_m2m_rxr_bypass.ak_bncuncore interconnectTransgress Ingress Bypass; AK - Bounceevent=0xb2,umask=201Number of packets bypassing the CMS Ingressunc_m2m_rxr_bypass.bl_bncuncore interconnectTransgress Ingress Bypass; BL - Bounceevent=0xb2,umask=401Number of packets bypassing the CMS Ingressunc_m2m_rxr_bypass.bl_crduncore interconnectTransgress Ingress Bypass; BL - Creditevent=0xb2,umask=0x4001Number of packets bypassing the CMS Ingressunc_m2m_rxr_bypass.iv_bncuncore interconnectTransgress Ingress Bypass; IV - Bounceevent=0xb2,umask=801Number of packets bypassing the CMS Ingressunc_m2m_rxr_crd_starved.ad_bncuncore interconnectTransgress Injection Starvation; AD - Bounceevent=0xb3,umask=101Counts cycles under injection starvation mode.  This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time.  In this case, the Ingress is unable to forward to the Egress due to a lack of creditunc_m2m_rxr_crd_starved.ad_crduncore interconnectTransgress Injection Starvation; AD - Creditevent=0xb3,umask=0x1001Counts cycles under injection starvation mode.  This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time.  In this case, the Ingress is unable to forward to the Egress due to a lack of creditunc_m2m_rxr_crd_starved.ak_bncuncore interconnectTransgress Injection Starvation; AK - Bounceevent=0xb3,umask=201Counts cycles under injection starvation mode.  This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time.  In this case, the Ingress is unable to forward to the Egress due to a lack of creditunc_m2m_rxr_crd_starved.bl_bncuncore interconnectTransgress Injection Starvation; BL - Bounceevent=0xb3,umask=401Counts cycles under injection starvation mode.  This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time.  In this case, the Ingress is unable to forward to the Egress due to a lack of creditunc_m2m_rxr_crd_starved.bl_crduncore interconnectTransgress Injection Starvation; BL - Creditevent=0xb3,umask=0x4001Counts cycles under injection starvation mode.  This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time.  In this case, the Ingress is unable to forward to the Egress due to a lack of creditunc_m2m_rxr_crd_starved.ifvuncore interconnectTransgress Injection Starvation; IFV - Creditevent=0xb3,umask=0x8001Counts cycles under injection starvation mode.  This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time.  In this case, the Ingress is unable to forward to the Egress due to a lack of creditunc_m2m_rxr_crd_starved.iv_bncuncore interconnectTransgress Injection Starvation; IV - Bounceevent=0xb3,umask=801Counts cycles under injection starvation mode.  This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time.  
In this case, the Ingress is unable to forward to the Egress due to a lack of creditunc_m2m_rxr_inserts.ad_bncuncore interconnectTransgress Ingress Allocations; AD - Bounceevent=0xb1,umask=101Number of allocations into the CMS Ingress  The Ingress is used to queue up requests received from the meshunc_m2m_rxr_inserts.ad_crduncore interconnectTransgress Ingress Allocations; AD - Creditevent=0xb1,umask=0x1001Number of allocations into the CMS Ingress  The Ingress is used to queue up requests received from the meshunc_m2m_rxr_inserts.ak_bncuncore interconnectTransgress Ingress Allocations; AK - Bounceevent=0xb1,umask=201Number of allocations into the CMS Ingress  The Ingress is used to queue up requests received from the meshunc_m2m_rxr_inserts.bl_bncuncore interconnectTransgress Ingress Allocations; BL - Bounceevent=0xb1,umask=401Number of allocations into the CMS Ingress  The Ingress is used to queue up requests received from the meshunc_m2m_rxr_inserts.bl_crduncore interconnectTransgress Ingress Allocations; BL - Creditevent=0xb1,umask=0x4001Number of allocations into the CMS Ingress  The Ingress is used to queue up requests received from the meshunc_m2m_rxr_inserts.iv_bncuncore interconnectTransgress Ingress Allocations; IV - Bounceevent=0xb1,umask=801Number of allocations into the CMS Ingress  The Ingress is used to queue up requests received from the meshunc_m2m_rxr_occupancy.ad_bncuncore interconnectTransgress Ingress Occupancy; AD - Bounceevent=0xb0,umask=101Occupancy event for the Ingress buffers in the CMS  The Ingress is used to queue up requests received from the meshunc_m2m_rxr_occupancy.ad_crduncore interconnectTransgress Ingress Occupancy; AD - Creditevent=0xb0,umask=0x1001Occupancy event for the Ingress buffers in the CMS  The Ingress is used to queue up requests received from the meshunc_m2m_rxr_occupancy.ak_bncuncore interconnectTransgress Ingress Occupancy; AK - Bounceevent=0xb0,umask=201Occupancy event for the Ingress buffers in the CMS  The Ingress is used to queue up requests received from the meshunc_m2m_rxr_occupancy.bl_bncuncore interconnectTransgress Ingress Occupancy; BL - Bounceevent=0xb0,umask=401Occupancy event for the Ingress buffers in the CMS  The Ingress is used to queue up requests received from the meshunc_m2m_rxr_occupancy.bl_crduncore interconnectTransgress Ingress Occupancy; BL - Creditevent=0xb0,umask=0x4001Occupancy event for the Ingress buffers in the CMS  The Ingress is used to queue up requests received from the meshunc_m2m_rxr_occupancy.iv_bncuncore interconnectTransgress Ingress Occupancy; IV - Bounceevent=0xb0,umask=801Occupancy event for the Ingress buffers in the CMS  The Ingress is used to queue up requests received from the meshunc_m2m_stall_no_txr_horz_crd_ad_ag0.tgr0uncore interconnectStall on No AD Agent0 Transgress Credits; For Transgress 0event=0xd0,umask=101Number of cycles the AD Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgressunc_m2m_stall_no_txr_horz_crd_ad_ag0.tgr1uncore interconnectStall on No AD Agent0 Transgress Credits; For Transgress 1event=0xd0,umask=201Number of cycles the AD Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgressunc_m2m_stall_no_txr_horz_crd_ad_ag0.tgr2uncore interconnectStall on No AD Agent0 Transgress Credits; For Transgress 2event=0xd0,umask=401Number of cycles the AD Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgressunc_m2m_stall_no_txr_horz_crd_ad_ag0.tgr3uncore interconnectStall 
on No AD Agent0 Transgress Credits; For Transgress 3event=0xd0,umask=801Number of cycles the AD Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgressunc_m2m_stall_no_txr_horz_crd_ad_ag0.tgr4uncore interconnectStall on No AD Agent0 Transgress Credits; For Transgress 4event=0xd0,umask=0x1001Number of cycles the AD Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgressunc_m2m_stall_no_txr_horz_crd_ad_ag0.tgr5uncore interconnectStall on No AD Agent0 Transgress Credits; For Transgress 5event=0xd0,umask=0x2001Number of cycles the AD Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgressunc_m2m_stall_no_txr_horz_crd_ad_ag1.tgr0uncore interconnectStall on No AD Agent1 Transgress Credits; For Transgress 0event=0xd2,umask=101Number of cycles the AD Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgressunc_m2m_stall_no_txr_horz_crd_ad_ag1.tgr1uncore interconnectStall on No AD Agent1 Transgress Credits; For Transgress 1event=0xd2,umask=201Number of cycles the AD Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgressunc_m2m_stall_no_txr_horz_crd_ad_ag1.tgr2uncore interconnectStall on No AD Agent1 Transgress Credits; For Transgress 2event=0xd2,umask=401Number of cycles the AD Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgressunc_m2m_stall_no_txr_horz_crd_ad_ag1.tgr3uncore interconnectStall on No AD Agent1 Transgress Credits; For Transgress 3event=0xd2,umask=801Number of cycles the AD Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgressunc_m2m_stall_no_txr_horz_crd_ad_ag1.tgr4uncore interconnectStall on No AD Agent1 Transgress Credits; For Transgress 4event=0xd2,umask=0x1001Number of cycles the AD Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgressunc_m2m_stall_no_txr_horz_crd_ad_ag1.tgr5uncore interconnectStall on No AD Agent1 Transgress Credits; For Transgress 5event=0xd2,umask=0x2001Number of cycles the AD Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgressunc_m2m_stall_no_txr_horz_crd_bl_ag0.tgr0uncore interconnectStall on No BL Agent0 Transgress Credits; For Transgress 0event=0xd4,umask=101Number of cycles the BL Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgressunc_m2m_stall_no_txr_horz_crd_bl_ag0.tgr1uncore interconnectStall on No BL Agent0 Transgress Credits; For Transgress 1event=0xd4,umask=201Number of cycles the BL Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgressunc_m2m_stall_no_txr_horz_crd_bl_ag0.tgr2uncore interconnectStall on No BL Agent0 Transgress Credits; For Transgress 2event=0xd4,umask=401Number of cycles the BL Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgressunc_m2m_stall_no_txr_horz_crd_bl_ag0.tgr3uncore interconnectStall on No BL Agent0 Transgress Credits; For Transgress 3event=0xd4,umask=801Number of cycles the BL Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgressunc_m2m_stall_no_txr_horz_crd_bl_ag0.tgr4uncore interconnectStall on No BL Agent0 Transgress Credits; For Transgress 4event=0xd4,umask=0x1001Number of cycles the BL Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per 
transgressunc_m2m_stall_no_txr_horz_crd_bl_ag0.tgr5uncore interconnectStall on No BL Agent0 Transgress Credits; For Transgress 5event=0xd4,umask=0x2001Number of cycles the BL Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgressunc_m2m_stall_no_txr_horz_crd_bl_ag1.tgr0uncore interconnectStall on No BL Agent1 Transgress Credits; For Transgress 0event=0xd6,umask=101Number of cycles the BL Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgressunc_m2m_stall_no_txr_horz_crd_bl_ag1.tgr1uncore interconnectStall on No BL Agent1 Transgress Credits; For Transgress 1event=0xd6,umask=201Number of cycles the BL Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgressunc_m2m_stall_no_txr_horz_crd_bl_ag1.tgr2uncore interconnectStall on No BL Agent1 Transgress Credits; For Transgress 2event=0xd6,umask=401Number of cycles the BL Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgressunc_m2m_stall_no_txr_horz_crd_bl_ag1.tgr3uncore interconnectStall on No BL Agent1 Transgress Credits; For Transgress 3event=0xd6,umask=801Number of cycles the BL Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgressunc_m2m_stall_no_txr_horz_crd_bl_ag1.tgr4uncore interconnectStall on No BL Agent1 Transgress Credits; For Transgress 4event=0xd6,umask=0x1001Number of cycles the BL Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgressunc_m2m_stall_no_txr_horz_crd_bl_ag1.tgr5uncore interconnectStall on No BL Agent1 Transgress Credits; For Transgress 5event=0xd6,umask=0x2001Number of cycles the BL Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgressunc_m2m_tag_hit.nm_rd_hit_cleanuncore interconnectClean line read hits(Regular and RFO) to Near Memory(DRAM cache) in Memory Mode and regular reads to DRAM in 1LMevent=0x2c,umask=101Tag Hit; Read Hit from NearMem, Clean Lineunc_m2m_tag_hit.nm_rd_hit_dirtyuncore interconnectDirty line read hits(Regular and RFO) to Near Memory(DRAM cache) in Memory Modeevent=0x2c,umask=201Tag Hit; Read Hit from NearMem, Dirty  Lineunc_m2m_tag_hit.nm_ufill_hit_cleanuncore interconnectClean line underfill read hits to Near Memory(DRAM cache) in Memory Modeevent=0x2c,umask=401Tag Hit; Underfill Rd Hit from NearMem, Clean Lineunc_m2m_tag_hit.nm_ufill_hit_dirtyuncore interconnectDirty line underfill read hits to Near Memory(DRAM cache) in Memory Modeevent=0x2c,umask=801Tag Hit; Underfill Rd Hit from NearMem, Dirty  Lineunc_m2m_tgr_ad_creditsuncore interconnectNumber AD Ingress Creditsevent=0x4101unc_m2m_tgr_bl_creditsuncore interconnectNumber BL Ingress Creditsevent=0x4201unc_m2m_tracker_cycles_full.ch0uncore interconnectTracker Cycles Full; Channel 0event=0x45,umask=101unc_m2m_tracker_cycles_full.ch1uncore interconnectTracker Cycles Full; Channel 1event=0x45,umask=201unc_m2m_tracker_cycles_full.ch2uncore interconnectTracker Cycles Full; Channel 2event=0x45,umask=401unc_m2m_tracker_cycles_ne.ch0uncore interconnectTracker Cycles Not Empty; Channel 0event=0x46,umask=101unc_m2m_tracker_cycles_ne.ch1uncore interconnectTracker Cycles Not Empty; Channel 1event=0x46,umask=201unc_m2m_tracker_cycles_ne.ch2uncore interconnectTracker Cycles Not Empty; Channel 2event=0x46,umask=401unc_m2m_tracker_inserts.ch0uncore interconnectTracker Inserts; Channel 0event=0x49,umask=101unc_m2m_tracker_inserts.ch1uncore interconnectTracker Inserts; Channel 
1event=0x49,umask=201unc_m2m_tracker_inserts.ch2uncore interconnectTracker Inserts; Channel 2event=0x49,umask=401unc_m2m_tracker_occupancy.ch0uncore interconnectTracker Occupancy; Channel 0event=0x47,umask=101unc_m2m_tracker_occupancy.ch1uncore interconnectTracker Occupancy; Channel 1event=0x47,umask=201unc_m2m_tracker_occupancy.ch2uncore interconnectTracker Occupancy; Channel 2event=0x47,umask=401unc_m2m_tracker_pending_occupancyuncore interconnectData Pending Occupancyevent=0x4801unc_m2m_txc_ad_credits_acquireduncore interconnectAD Egress (to CMS) Credit Acquiredevent=0xd01unc_m2m_txc_ad_credit_occupancyuncore interconnectAD Egress (to CMS) Credits Occupancyevent=0xe01unc_m2m_txc_ad_cycles_fulluncore interconnectAD Egress (to CMS) Fullevent=0xc01unc_m2m_txc_ad_cycles_neuncore interconnectAD Egress (to CMS) Not Emptyevent=0xb01unc_m2m_txc_ad_insertsuncore interconnectAD Egress (to CMS) Allocationsevent=901unc_m2m_txc_ad_no_credit_cyclesuncore interconnectCycles with No AD Egress (to CMS) Creditsevent=0xf01unc_m2m_txc_ad_no_credit_stalleduncore interconnectCycles Stalled with No AD Egress (to CMS) Creditsevent=0x1001unc_m2m_txc_ad_occupancyuncore interconnectAD Egress (to CMS) Occupancyevent=0xa01unc_m2m_txc_ak.crd_cbouncore interconnectOutbound Ring Transactions on AK; CRD Transactions to Cboevent=0x39,umask=201unc_m2m_txc_ak.ndruncore interconnectOutbound Ring Transactions on AK; NDR Transactionsevent=0x39,umask=101unc_m2m_txc_ak_credits_acquired.cms0uncore interconnectAK Egress (to CMS) Credit Acquired; Common Mesh Stop - Near Sideevent=0x1d,umask=101unc_m2m_txc_ak_credits_acquired.cms1uncore interconnectAK Egress (to CMS) Credit Acquired; Common Mesh Stop - Far Sideevent=0x1d,umask=201unc_m2m_txc_ak_credit_occupancy.cms0uncore interconnectAK Egress (to CMS) Credits Occupancy; Common Mesh Stop - Near Sideevent=0x1e,umask=101unc_m2m_txc_ak_credit_occupancy.cms1uncore interconnectAK Egress (to CMS) Credits Occupancy; Common Mesh Stop - Far Sideevent=0x1e,umask=201unc_m2m_txc_ak_cycles_full.alluncore interconnectAK Egress (to CMS) Full; Allevent=0x14,umask=301unc_m2m_txc_ak_cycles_full.cms0uncore interconnectAK Egress (to CMS) Full; Common Mesh Stop - Near Sideevent=0x14,umask=101unc_m2m_txc_ak_cycles_full.cms1uncore interconnectAK Egress (to CMS) Full; Common Mesh Stop - Far Sideevent=0x14,umask=201unc_m2m_txc_ak_cycles_full.rdcrd0uncore interconnectAK Egress (to CMS) Full; Read Credit Requestevent=0x14,umask=801unc_m2m_txc_ak_cycles_full.rdcrd1uncore interconnectAK Egress (to CMS) Full; Read Credit Requestevent=0x14,umask=0x8801unc_m2m_txc_ak_cycles_full.wrcmp0uncore interconnectAK Egress (to CMS) Full; Write Compare Requestevent=0x14,umask=0x2001unc_m2m_txc_ak_cycles_full.wrcmp1uncore interconnectAK Egress (to CMS) Full; Write Compare Requestevent=0x14,umask=0xa001unc_m2m_txc_ak_cycles_full.wrcrd0uncore interconnectAK Egress (to CMS) Full; Write Credit Requestevent=0x14,umask=0x1001unc_m2m_txc_ak_cycles_full.wrcrd1uncore interconnectAK Egress (to CMS) Full; Write Credit Requestevent=0x14,umask=0x9001unc_m2m_txc_ak_cycles_ne.alluncore interconnectAK Egress (to CMS) Not Empty; Allevent=0x13,umask=301unc_m2m_txc_ak_cycles_ne.cms0uncore interconnectAK Egress (to CMS) Not Empty; Common Mesh Stop - Near Sideevent=0x13,umask=101unc_m2m_txc_ak_cycles_ne.cms1uncore interconnectAK Egress (to CMS) Not Empty; Common Mesh Stop - Far Sideevent=0x13,umask=201unc_m2m_txc_ak_cycles_ne.rdcrduncore interconnectAK Egress (to CMS) Not Empty; Read Credit 
unc_m2m_txc_ak_inserts -- AK Egress (to CMS) Allocations (event=0x11)
  .all umask=0x03, .cms0 umask=0x01 (Near Side), .cms1 umask=0x02 (Far Side), .pref_rd_cam_hit umask=0x40 (Prefetch Read Cam Hit), .rdcrd umask=0x08 (Read Credit Request), .wrcmp umask=0x20 (Write Compare Request), .wrcrd umask=0x10 (Write Credit Request)

unc_m2m_txc_ak_no_credit_cycles -- Cycles with No AK Egress (to CMS) Credits (event=0x1f)
  .cms0 umask=0x01 (Near Side), .cms1 umask=0x02 (Far Side)

unc_m2m_txc_ak_no_credit_stalled -- Cycles Stalled with No AK Egress (to CMS) Credits (event=0x20)
  .cms0 umask=0x01 (Near Side), .cms1 umask=0x02 (Far Side)

unc_m2m_txc_ak_occupancy -- AK Egress (to CMS) Occupancy (event=0x12)
  .all umask=0x03, .cms0 umask=0x01 (Near Side), .cms1 umask=0x02 (Far Side), .rdcrd umask=0x08 (Read Credit Request), .wrcmp umask=0x20 (Write Compare Request), .wrcrd umask=0x10 (Write Credit Request)

unc_m2m_txc_ak_sideband -- AK Egress (to CMS) Sideband (event=0x6b)
  .rd umask=0x01, .wr umask=0x02

unc_m2m_txc_bl -- Outbound DRS Ring Transactions to Cache (event=0x40)
  .drs_cache umask=0x01 (Data to Cache), .drs_core umask=0x02 (Data to Core), .drs_upi umask=0x04 (Data to QPI)

unc_m2m_txc_bl_credits_acquired -- BL Egress (to CMS) Credit Acquired (event=0x19)
  .cms0 umask=0x01 (Near Side), .cms1 umask=0x02 (Far Side)

unc_m2m_txc_bl_credit_occupancy -- BL Egress (to CMS) Credits Occupancy (event=0x1a)
  .cms0 umask=0x01 (Near Side), .cms1 umask=0x02 (Far Side)

unc_m2m_txc_bl_cycles_full -- BL Egress (to CMS) Full (event=0x18)
  .all umask=0x03, .cms0 umask=0x01 (Near Side), .cms1 umask=0x02 (Far Side)

unc_m2m_txc_bl_cycles_ne -- BL Egress (to CMS) Not Empty (event=0x17)
  .all umask=0x03, .cms0 umask=0x01 (Near Side), .cms1 umask=0x02 (Far Side)

unc_m2m_txc_bl_inserts -- BL Egress (to CMS) Allocations (event=0x15)
  .all umask=0x03, .cms0 umask=0x01 (Near Side), .cms1 umask=0x02 (Far Side)

unc_m2m_txc_bl_no_credit_cycles -- Cycles with No BL Egress (to CMS) Credits (event=0x1b)
  .cms0 umask=0x01 (Near Side), .cms1 umask=0x02 (Far Side)

unc_m2m_txc_bl_no_credit_stalled -- Cycles Stalled with No BL Egress (to CMS) Credits (event=0x1c)
  .cms0 umask=0x01 (Near Side), .cms1 umask=0x02 (Far Side)

unc_m2m_txc_bl_occupancy -- BL Egress (to CMS) Occupancy (event=0x16)
  .all umask=0x03, .cms0 umask=0x01 (Near Side), .cms1 umask=0x02 (Far Side)

unc_m2m_txr_horz_ads_used -- CMS Horizontal ADS Used (event=0x9d)
  .ad_bnc umask=0x01 (AD - Bounce), .ad_crd umask=0x10 (AD - Credit), .ak_bnc umask=0x02 (AK - Bounce), .bl_bnc umask=0x04 (BL - Bounce), .bl_crd umask=0x40 (BL - Credit)
  Number of packets using the Horizontal Anti-Deadlock Slot, broken down by ring type and CMS Agent.

unc_m2m_txr_horz_bypass -- CMS Horizontal Bypass Used (event=0x9f)
  .ad_bnc umask=0x01 (AD - Bounce), .ad_crd umask=0x10 (AD - Credit), .ak_bnc umask=0x02 (AK - Bounce), .bl_bnc umask=0x04 (BL - Bounce), .bl_crd umask=0x40 (BL - Credit), .iv_bnc umask=0x08 (IV - Bounce)
  Number of packets bypassing the Horizontal Egress, broken down by ring type and CMS Agent.
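Each record above reduces to an event/umask pair, which is exactly what a perf raw-event specifier needs. A sketch of how such a specifier could be assembled; the PMU instance name "uncore_m2m_0" is an assumption about how the M2M PMU is exposed under /sys/bus/event_source/devices and may differ per platform:

# Sketch: turn an (event, umask) pair from the tables above into a
# perf raw-event specifier.  The PMU instance name "uncore_m2m_0" is
# an assumption (uncore PMUs are enumerated per socket/instance);
# adjust for the target machine.

def perf_event_spec(pmu: str, event: int, umask: int) -> str:
    """Format an uncore event as a perf event specifier string."""
    return f"{pmu}/event={event:#x},umask={umask:#x}/"

# Example: count BL egress allocations (UNC_M2M_TXC_BL_INSERTS.ALL).
spec = perf_event_spec("uncore_m2m_0", 0x15, 0x3)
print("perf stat -a -e", spec, "-- sleep 1")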
unc_m2m_txr_horz_cycles_full -- Cycles CMS Horizontal Egress Queue is Full (event=0x96)
  .ad_bnc umask=0x01 (AD - Bounce), .ad_crd umask=0x10 (AD - Credit), .ak_bnc umask=0x02 (AK - Bounce), .bl_bnc umask=0x04 (BL - Bounce), .bl_crd umask=0x40 (BL - Credit), .iv_bnc umask=0x08 (IV - Bounce)
  Cycles the Transgress buffers in the Common Mesh Stop are Full. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.

unc_m2m_txr_horz_cycles_ne -- Cycles CMS Horizontal Egress Queue is Not Empty (event=0x97)
  .ad_bnc umask=0x01 (AD - Bounce), .ad_crd umask=0x10 (AD - Credit), .ak_bnc umask=0x02 (AK - Bounce), .bl_bnc umask=0x04 (BL - Bounce), .bl_crd umask=0x40 (BL - Credit), .iv_bnc umask=0x08 (IV - Bounce)
  Cycles the Transgress buffers in the Common Mesh Stop are Not Empty. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.

unc_m2m_txr_horz_inserts -- CMS Horizontal Egress Inserts (event=0x95)
  .ad_bnc umask=0x01 (AD - Bounce), .ad_crd umask=0x10 (AD - Credit), .ak_bnc umask=0x02 (AK - Bounce), .bl_bnc umask=0x04 (BL - Bounce), .bl_crd umask=0x40 (BL - Credit), .iv_bnc umask=0x08 (IV - Bounce)
  Number of allocations into the Transgress buffers in the Common Mesh Stop. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.

unc_m2m_txr_horz_nack -- CMS Horizontal Egress NACKs (event=0x99)
  .ad_bnc umask=0x01 (AD - Bounce), .ad_crd umask=0x20 (AD - Credit), .ak_bnc umask=0x02 (AK - Bounce), .bl_bnc umask=0x04 (BL - Bounce), .bl_crd umask=0x40 (BL - Credit), .iv_bnc umask=0x08 (IV - Bounce)
  Counts the number of Egress packets NACK'ed onto the Horizontal Ring.

unc_m2m_txr_horz_occupancy -- CMS Horizontal Egress Occupancy (event=0x94)
  .ad_bnc umask=0x01 (AD - Bounce), .ad_crd umask=0x10 (AD - Credit), .ak_bnc umask=0x02 (AK - Bounce), .bl_bnc umask=0x04 (BL - Bounce), .bl_crd umask=0x40 (BL - Credit), .iv_bnc umask=0x08 (IV - Bounce)
  Occupancy event for the Transgress buffers in the Common Mesh Stop. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.

unc_m2m_txr_horz_starved -- CMS Horizontal Egress Injection Starvation (event=0x9b)
  .ad_bnc umask=0x01 (AD - Bounce), .ak_bnc umask=0x02 (AK - Bounce), .bl_bnc umask=0x04 (BL - Bounce), .iv_bnc umask=0x08 (IV - Bounce)
  Counts injection starvation, triggered when the CMS Transgress buffer cannot send a transaction onto the Horizontal ring for a long period of time.
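The full/not-empty/occupancy trio for the horizontal Transgress buffers combines into the usual queue-health ratios: average depth while the queue holds anything, and the share of busy cycles spent completely full. A sketch with placeholder counts:

# Sketch: derived congestion metrics for the horizontal Transgress
# (TXR_HORZ) queue from the CYCLES_FULL / CYCLES_NE / OCCUPANCY
# events above.  All counter values are hypothetical placeholders.

full_cycles = 12_000      # UNC_M2M_TXR_HORZ_CYCLES_FULL.BL_BNC
ne_cycles   = 480_000     # UNC_M2M_TXR_HORZ_CYCLES_NE.BL_BNC
occupancy   = 1_900_000   # UNC_M2M_TXR_HORZ_OCCUPANCY.BL_BNC

# Average queue depth over the cycles the queue held anything.
avg_depth_when_busy = occupancy / ne_cycles if ne_cycles else 0.0
# Share of busy cycles spent completely full (back-pressure indicator).
full_share = full_cycles / ne_cycles if ne_cycles else 0.0

print(f"avg depth while non-empty: {avg_depth_when_busy:.2f} entries")
print(f"fraction of busy cycles full: {full_share:.1%}")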
unc_m2m_txr_vert_ads_used -- CMS Vertical ADS Used (event=0x9c)
  .ad_ag0 umask=0x01 (AD - Agent 0), .ad_ag1 umask=0x10 (AD - Agent 1), .ak_ag0 umask=0x02 (AK - Agent 0), .ak_ag1 umask=0x20 (AK - Agent 1), .bl_ag0 umask=0x04 (BL - Agent 0), .bl_ag1 umask=0x40 (BL - Agent 1)
  Number of packets using the Vertical Anti-Deadlock Slot, broken down by ring type and CMS Agent.

unc_m2m_txr_vert_bypass -- CMS Vertical Bypass Used (event=0x9e)
  .ad_ag0 umask=0x01 (AD - Agent 0), .ad_ag1 umask=0x10 (AD - Agent 1), .ak_ag0 umask=0x02 (AK - Agent 0), .ak_ag1 umask=0x20 (AK - Agent 1), .bl_ag0 umask=0x04 (BL - Agent 0), .bl_ag1 umask=0x40 (BL - Agent 1), .iv umask=0x08 (IV)
  Number of packets bypassing the Vertical Egress, broken down by ring type and CMS Agent.

unc_m2m_txr_vert_cycles_full -- Cycles CMS Vertical Egress Queue Is Full (event=0x92)
  .ad_ag0 umask=0x01 (AD - Agent 0), .ad_ag1 umask=0x10 (AD - Agent 1), .ak_ag0 umask=0x02 (AK - Agent 0), .ak_ag1 umask=0x20 (AK - Agent 1), .bl_ag0 umask=0x04 (BL - Agent 0), .bl_ag1 umask=0x40 (BL - Agent 1), .iv umask=0x08 (IV)
  Number of cycles the Common Mesh Stop Egress was Full. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. Traffic by umask: Agent 0 AD carries outbound requests, snoop requests, and snoop responses; Agent 1 AD is commonly outbound requests; Agent 0 AK is commonly credit returns and GO responses; Agent 0 BL commonly sends data from the cache to various destinations; Agent 1 BL commonly transfers writeback data to the cache; IV (from Agent 0) is commonly snoops to the cores.

unc_m2m_txr_vert_cycles_ne -- Cycles CMS Vertical Egress Queue Is Not Empty (event=0x93)
  .ad_ag0 umask=0x01 (AD - Agent 0), .ad_ag1 umask=0x10 (AD - Agent 1), .ak_ag0 umask=0x02 (AK - Agent 0), .ak_ag1 umask=0x20 (AK - Agent 1), .bl_ag0 umask=0x04 (BL - Agent 0), .bl_ag1 umask=0x40 (BL - Agent 1), .iv umask=0x08 (IV)
  Number of cycles the Common Mesh Stop Egress was Not Empty. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. Traffic by umask as listed under unc_m2m_txr_vert_cycles_full.

unc_m2m_txr_vert_inserts -- CMS Vert Egress Allocations (event=0x91)
  .ad_ag0 umask=0x01 (AD - Agent 0), .ad_ag1 umask=0x10 (AD - Agent 1), .ak_ag0 umask=0x02 (AK - Agent 0), .ak_ag1 umask=0x20 (AK - Agent 1), .bl_ag0 umask=0x04 (BL - Agent 0), .bl_ag1 umask=0x40 (BL - Agent 1), .iv umask=0x08 (IV)
  Number of allocations into the Common Mesh Stop Egress. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. Traffic by umask as listed under unc_m2m_txr_vert_cycles_full.

unc_m2m_txr_vert_nack -- CMS Vertical Egress NACKs (event=0x98)
  .ad_ag0 umask=0x01 (AD - Agent 0), .ad_ag1 umask=0x10 (AD - Agent 1), .ak_ag0 umask=0x02 (AK - Agent 0), .ak_ag1 umask=0x20 (AK - Agent 1), .bl_ag0 umask=0x04 (BL - Agent 0), .bl_ag1 umask=0x40 (BL - Agent 1), .iv umask=0x08 (IV)
  Counts the number of Egress packets NACK'ed onto the Vertical Ring.

unc_m2m_txr_vert_occupancy -- CMS Vert Egress Occupancy (event=0x90)
  .ad_ag0 umask=0x01 (AD - Agent 0), .ad_ag1 umask=0x10 (AD - Agent 1), .ak_ag0 umask=0x02 (AK - Agent 0), .ak_ag1 umask=0x20 (AK - Agent 1), .bl_ag0 umask=0x04 (BL - Agent 0), .bl_ag1 umask=0x40 (BL - Agent 1), .iv umask=0x08 (IV)
  Occupancy event for the Egress buffers in the Common Mesh Stop. The egress is used to queue up requests destined for the Vertical Ring on the Mesh. Traffic by umask as listed under unc_m2m_txr_vert_cycles_full.

unc_m2m_txr_vert_starved -- CMS Vertical Egress Injection Starvation (event=0x9a)
  .ad_ag0 umask=0x01 (AD - Agent 0), .ad_ag1 umask=0x10 (AD - Agent 1), .ak_ag0 umask=0x02 (AK - Agent 0), .ak_ag1 umask=0x20 (AK - Agent 1), .bl_ag0 umask=0x04 (BL - Agent 0), .bl_ag1 umask=0x40 (BL - Agent 1), .iv umask=0x08 (IV)
  Counts injection starvation, triggered when the CMS Egress cannot send a transaction onto the Vertical ring for a long period of time.
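Since the bypass and allocation events are broken down the same way, the share of traffic that skipped the vertical egress queue falls out directly. A sketch with placeholder counts, assuming a bypassed packet does not also allocate into the egress (that relationship is not stated in the event descriptions):

# Sketch: vertical egress bypass rate from TXR_VERT_BYPASS and
# TXR_VERT_INSERTS, assuming a bypassed packet does not also allocate
# into the egress queue.  Placeholder counts, not measurements.

bypasses = 30_000    # UNC_M2M_TXR_VERT_BYPASS.AD_AG0
inserts  = 170_000   # UNC_M2M_TXR_VERT_INSERTS.AD_AG0

total = bypasses + inserts
print(f"bypass rate: {bypasses / total:.1%}" if total else "no traffic")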
unc_m2m_vert_ring_ad_in_use -- Vertical AD Ring In Use (event=0xa6)
  .up_even umask=0x01 (Up and Even), .up_odd umask=0x02 (Up and Odd), .dn_even umask=0x04 (Down and Even), .dn_odd umask=0x08 (Down and Odd)
  Counts the number of cycles that the Vertical AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. There are really two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring; on the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the second half are on the right side. In other words (for example), in a 4c part, CBo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.

unc_m2m_vert_ring_ak_in_use -- Vertical AK Ring In Use (event=0xa8)
  .up_even umask=0x01 (Up and Even), .up_odd umask=0x02 (Up and Odd), .dn_even umask=0x04 (Down and Even), .dn_odd umask=0x08 (Down and Odd)
  Counts the number of cycles that the Vertical AK ring is being used at this ring stop, under the same counting rules and ring-topology notes as unc_m2m_vert_ring_ad_in_use.

unc_m2m_vert_ring_bl_in_use -- Vertical BL Ring in Use (event=0xaa)
  .up_even umask=0x01 (Up and Even), .up_odd umask=0x02 (Up and Odd), .dn_even umask=0x04 (Down and Even), .dn_odd umask=0x08 (Down and Odd)
  Counts the number of cycles that the Vertical BL ring is being used at this ring stop, under the same counting rules and ring-topology notes as unc_m2m_vert_ring_ad_in_use.

unc_m2m_vert_ring_iv_in_use -- Vertical IV Ring in Use (event=0xac)
  .up umask=0x01 (Up), .dn umask=0x04 (Down)
  Counts the number of cycles that the Vertical IV ring is being used at this ring stop, under the same counting rules as above. There is only one IV ring; therefore, to monitor the Even ring, select both UP_EVEN and DN_EVEN, and to monitor the Odd ring, select both UP_ODD and DN_ODD.
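Because there is a single IV ring, total IV utilization at this stop can come from one counter by OR-ing the UP and DN umasks rather than programming two counters. A sketch; as before, the PMU instance name is an assumption:

# Sketch: monitor total vertical IV ring usage by OR-ing the UP (0x1)
# and DN (0x4) umasks of UNC_M2M_VERT_RING_IV_IN_USE into a single
# counter.  PMU instance name "uncore_m2m_0" is an assumption.

UP, DN = 0x1, 0x4
umask = UP | DN   # 0x5: cycles the ring is used in either direction
print(f"perf stat -a -e uncore_m2m_0/event=0xac,umask={umask:#x}/ -- sleep 1")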
unc_m2m_wpq_cycles_no_reg_credits -- Deprecated (event=0x4d)
  .chn0 umask=0x01, .chn1 umask=0x02, .chn2 umask=0x04
  These events are deprecated; refer to the new events UNC_M2M_WPQ_CYCLES_REG_CREDITS.CHN0/CHN1/CHN2.

unc_m2m_wpq_cycles_reg_credits -- M2M->iMC WPQ Cycles w/Credits - Regular (event=0x4d)
  .chn0 umask=0x01, .chn1 umask=0x02, .chn2 umask=0x04 (Channel 0-2)

unc_m2m_wpq_cycles_spec_credits -- M2M->iMC WPQ Cycles w/Credits - Special (event=0x4e)
  .chn0 umask=0x01, .chn1 umask=0x02, .chn2 umask=0x04 (Channel 0-2)

unc_m2m_write_tracker_cycles_full -- Write Tracker Cycles Full (event=0x4a)
  .ch0 umask=0x01, .ch1 umask=0x02, .ch2 umask=0x04 (Channel 0-2)

unc_m2m_write_tracker_cycles_ne -- Write Tracker Cycles Not Empty (event=0x4b)
  .ch0 umask=0x01, .ch1 umask=0x02, .ch2 umask=0x04 (Channel 0-2)

unc_m2m_write_tracker_inserts -- Write Tracker Inserts (event=0x61)
  .ch0 umask=0x01, .ch1 umask=0x02, .ch2 umask=0x04 (Channel 0-2)

unc_m2m_write_tracker_occupancy -- Write Tracker Occupancy (event=0x60)
  .ch0 umask=0x01, .ch1 umask=0x02, .ch2 umask=0x04 (Channel 0-2)
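The deprecated WPQ events above share their encoding (event=0x4d) with their replacements, so tooling only needs a name-level redirect. A sketch of such a mapping; the replacement names come straight from the deprecation notes:

# Sketch: redirect the deprecated M2M WPQ event names to their
# replacements, as stated in the deprecation notes above.  Both sets
# share the same encoding (event=0x4d), so only the name changes.

DEPRECATED = {
    "unc_m2m_wpq_cycles_no_reg_credits.chn0": "unc_m2m_wpq_cycles_reg_credits.chn0",
    "unc_m2m_wpq_cycles_no_reg_credits.chn1": "unc_m2m_wpq_cycles_reg_credits.chn1",
    "unc_m2m_wpq_cycles_no_reg_credits.chn2": "unc_m2m_wpq_cycles_reg_credits.chn2",
}

def resolve(name: str) -> str:
    """Return the current name for a possibly-deprecated event."""
    return DEPRECATED.get(name, name)

print(resolve("unc_m2m_wpq_cycles_no_reg_credits.chn1"))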
uncore_m3upi

unc_m3upi_ag0_ad_crd_acquired -- CMS Agent0 AD Credits Acquired (event=0x80)
  .tgr0 umask=0x01, .tgr1 umask=0x02, .tgr2 umask=0x04, .tgr3 umask=0x08, .tgr4 umask=0x10, .tgr5 umask=0x20 (For Transgress 0-5)
  Number of CMS Agent 0 AD credits acquired in a given cycle, per transgress.

unc_m3upi_ag0_ad_crd_occupancy -- CMS Agent0 AD Credits Occupancy (event=0x82)
  .tgr0 umask=0x01, .tgr1 umask=0x02, .tgr2 umask=0x04, .tgr3 umask=0x08, .tgr4 umask=0x10, .tgr5 umask=0x20 (For Transgress 0-5)
  Number of CMS Agent 0 AD credits in use in a given cycle, per transgress.

unc_m3upi_ag0_bl_crd_acquired -- CMS Agent0 BL Credits Acquired (event=0x88)
  .tgr0 umask=0x01, .tgr1 umask=0x02, .tgr2 umask=0x04, .tgr3 umask=0x08, .tgr4 umask=0x10, .tgr5 umask=0x20 (For Transgress 0-5)
  Number of CMS Agent 0 BL credits acquired in a given cycle, per transgress.

unc_m3upi_ag0_bl_crd_occupancy -- CMS Agent0 BL Credits Occupancy (event=0x8a)
  .tgr0 umask=0x01, .tgr1 umask=0x02, .tgr2 umask=0x04, .tgr3 umask=0x08, .tgr4 umask=0x10, .tgr5 umask=0x20 (For Transgress 0-5)
  Number of CMS Agent 0 BL credits in use in a given cycle, per transgress.

unc_m3upi_ag1_ad_crd_acquired -- CMS Agent1 AD Credits Acquired (event=0x84)
  .tgr0 umask=0x01, .tgr1 umask=0x02, .tgr2 umask=0x04, .tgr3 umask=0x08, .tgr4 umask=0x10, .tgr5 umask=0x20 (For Transgress 0-5)
  Number of CMS Agent 1 AD credits acquired in a given cycle, per transgress.

unc_m3upi_ag1_ad_crd_occupancy -- CMS Agent1 AD Credits Occupancy (event=0x86)
  .tgr0 umask=0x01, .tgr1 umask=0x02, .tgr2 umask=0x04, .tgr3 umask=0x08, .tgr4 umask=0x10, .tgr5 umask=0x20 (For Transgress 0-5)
  Number of CMS Agent 1 AD credits in use in a given cycle, per transgress.

unc_m3upi_ag1_bl_crd_occupancy -- CMS Agent1 BL Credits Occupancy (event=0x8e)
  .tgr0 umask=0x01, .tgr1 umask=0x02, .tgr2 umask=0x04, .tgr3 umask=0x08, .tgr4 umask=0x10, .tgr5 umask=0x20 (For Transgress 0-5)
  Number of CMS Agent 1 BL credits in use in a given cycle, per transgress.

unc_m3upi_ag1_bl_credits_acquired -- CMS Agent1 BL Credits Acquired (event=0x8c)
  .tgr0 umask=0x01, .tgr1 umask=0x02, .tgr2 umask=0x04, .tgr3 umask=0x08, .tgr4 umask=0x10, .tgr5 umask=0x20 (For Transgress 0-5)
  Number of CMS Agent 1 BL credits acquired in a given cycle, per transgress.

unc_m3upi_cha_ad_credits_empty -- CBox AD Credits Empty (event=0x22)
  .req umask=0x04 (Requests), .snp umask=0x08 (Snoops), .vna umask=0x01 (VNA Messages), .wb umask=0x02 (Writebacks)
  No credits available to send to Cbox on the AD Ring (covers higher CBoxes).

unc_m3upi_clockticks -- Number of uclks in domain (event=0x1)
  Counts the number of uclks in the M3 uclk domain. This could be slightly different than the count in the Ubox because of enable/freeze delays; however, because the M3 is close to the Ubox, they generally should not diverge by more than a handful of cycles.

unc_m3upi_cms_clockticks -- CMS Clockticks (event=0xc0)

unc_m3upi_d2c_sent -- D2C Sent (event=0x2b)
  Counts cases where BL sends direct to core.

unc_m3upi_d2u_sent -- D2U Sent (event=0x2a)
  Cases where SMI3 sends a D2U command.

unc_m3upi_egress_ordering -- Egress Blocking due to Ordering requirements (event=0xae)
  .iv_snoopgo_up umask=0x01 (Up), .iv_snoopgo_dn umask=0x04 (Down)
  Counts the number of cycles IV was blocked in the TGR Egress due to SNP/GO ordering requirements.

unc_m3upi_fast_asserted -- FaST wire asserted (event=0xa5)
  .vert umask=0x01 (Vertical), .horz umask=0x02 (Horizontal)
  Counts the number of cycles either the local or incoming distress signals are asserted. Incoming distress includes up, dn and across.
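The per-transgress credit occupancy counters pair naturally with unc_m3upi_clockticks: occupancy divided by uclk cycles gives the average number of credits held per cycle. A sketch with placeholder counts:

# Sketch: average CMS Agent 0 AD credits held per uclk cycle, from
# UNC_M3UPI_AG0_AD_CRD_OCCUPANCY.TGRn and UNC_M3UPI_CLOCKTICKS.
# Counter values are hypothetical placeholders.

crd_occupancy = 2_400_000   # sum over the TGRn umasks of interest
clockticks    = 1_200_000   # UNC_M3UPI_CLOCKTICKS (event=0x1)

avg_credits_in_use = crd_occupancy / clockticks if clockticks else 0.0
print(f"avg AD credits in use: {avg_credits_in_use:.2f}")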
unc_m3upi_horz_ring_ad_in_use -- Horizontal AD Ring In Use
  event=0xa7; .left_even umask=0x01, .left_odd umask=0x02, .right_even umask=0x04, .right_odd umask=0x08

unc_m3upi_horz_ring_ak_in_use -- Horizontal AK Ring In Use
  event=0xa9; .left_even umask=0x01, .left_odd umask=0x02, .right_even umask=0x04, .right_odd umask=0x08

unc_m3upi_horz_ring_bl_in_use -- Horizontal BL Ring in Use
  event=0xab; .left_even umask=0x01, .left_odd umask=0x02, .right_even umask=0x04, .right_odd umask=0x08

  Shared description (AD/AK/BL): Counts the number of cycles that the corresponding Horizontal ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.  There are really two rings -- a clockwise ring and a counter-clockwise ring.  On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring; on the right side of the ring, this is reversed.  The first half of the CBos are on the left side of the ring and the second half are on the right side.  In other words (for example), in a 4-core part, Cbo 0 UP AD is NOT the same ring as Cbo 2 UP AD because they are on opposite sides of the ring.
unc_m3upi_horz_ring_iv_in_use -- Horizontal IV Ring in Use
  event=0xad; .left umask=0x01, .right umask=0x04
  Counts the number of cycles that the Horizontal IV ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.  There is only one IV ring; therefore, to monitor the Even ring, select both UP_EVEN and DN_EVEN, and to monitor the Odd ring, select both UP_ODD and DN_ODD.

unc_m3upi_m2_bl_credits_empty -- M2 BL Credits Empty
  event=0x23
  .iio0_iio1_ncb umask=0x01  (IIO0 and IIO1 share the same ring destination; 1 VN0 credit only)
  .iio2_ncb      umask=0x02  (IIO2)
  .iio3_ncb      umask=0x04  (IIO3)
  .iio4_ncb      umask=0x08  (IIO4)
  .iio5_ncb      umask=0x10  (IIO5)
  .ncs           umask=0x20  (all IIO targets for NCS are in a single mask; ORs them together)
  .ncs_sel       umask=0x40  (selected M2p BL NCS credits)
  No VN0 and VNA credits available to send to M2.
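Since each *_in_use sub-event counts the cycles its slice of the ring was busy at this stop, a rough utilization figure can sum the four umasks and normalize by clockticks. A minimal sketch, pure arithmetic with counts supplied by the caller; treating the four sub-events as four independently observable ring slices is my reading of the description, not something the table states:

def horz_ring_utilization(left_even: int, left_odd: int,
                          right_even: int, right_odd: int,
                          clockticks: int) -> float:
    """Average busy fraction across the four observable ring slices
    (UNC_M3UPI_HORZ_RING_{AD,AK,BL}_IN_USE.* over UNC_M3UPI_CLOCKTICKS)."""
    if clockticks == 0:
        return 0.0
    return (left_even + left_odd + right_even + right_odd) / (4 * clockticks)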
unc_m3upi_multi_slot_rcvd -- Multi Slot Flit Received
  event=0x3e; .ad_slot0 umask=0x01, .ad_slot1 umask=0x02, .ad_slot2 umask=0x04, .bl_slot0 umask=0x08, .ak_slot0 umask=0x10, .ak_slot2 umask=0x20
  Multi-slot flit received - S0, S1 and/or S2 populated (the AK S0/S1 masks can be used for AK allocations).

unc_m3upi_ring_bounces_horz -- Messages that bounced on the Horizontal Ring
  event=0xa1; .ad umask=0x01, .ak umask=0x02, .bl umask=0x04, .iv umask=0x08
  Number of cycles incoming messages from the Horizontal ring were bounced, by ring type.

unc_m3upi_ring_bounces_vert -- Messages that bounced on the Vertical Ring
  event=0xa0; .ad (AD) umask=0x01, .ak (Acknowledgements to core) umask=0x02, .bl (Data Responses to core) umask=0x04, .iv (Snoops of processor's cache) umask=0x08
  Number of cycles incoming messages from the Vertical ring were bounced, by ring type.

unc_m3upi_ring_sink_starved_horz -- Sink Starvation on Horizontal Ring
  event=0xa3; .ad umask=0x01, .ak umask=0x02, .bl umask=0x04, .iv umask=0x08, .ak_ag1 (Acknowledgements to Agent 1) umask=0x20
unc_m3upi_ring_sink_starved_vert -- Sink Starvation on Vertical Ring
  event=0xa2; .ad (AD) umask=0x01, .ak (Acknowledgements to core) umask=0x02, .bl (Data Responses to core) umask=0x04, .iv (Snoops of processor's cache) umask=0x08

unc_m3upi_ring_src_thrtl -- Source Throttle
  event=0xa4

The VN0/VN1 ingress events that follow share one message-class umask encoding:
  .ad_req umask=0x01  Home (REQ) messages on AD.  REQ is generally used to send requests, request responses, and snoop responses.
  .ad_snp umask=0x02  Snoop (SNP) messages on AD.  SNP is used for outgoing snoops.
  .ad_rsp umask=0x04  Response (RSP) messages on AD.  RSP packets are used to transmit a variety of protocol flits, including grants and completions (CMP).
  .bl_rsp umask=0x08  Response (RSP) messages on BL (same variety of protocol flits as on AD).
  .bl_wb  umask=0x10  Data Response (WB) messages on BL.  WB is generally used to transmit data with coherency; for example, remote reads and writes, or cache-to-cache transfers, will transmit their data using WB.
  .bl_ncb umask=0x20  Non-Coherent Broadcast (NCB) messages on BL.  NCB is generally used to transmit data without coherency; for example, non-coherent read data returns.
  .bl_ncs umask=0x40  Non-Coherent Standard (NCS) messages on BL.

unc_m3upi_rxc_arb_lost_vn0 -- Lost Arb for VN0
  event=0x4b, message-class umasks as above
  VN0 message requested arbitration but lost it.

unc_m3upi_rxc_arb_lost_vn1 -- Lost Arb for VN1
  event=0x4c, message-class umasks as above
  VN1 message requested arbitration but lost it.

unc_m3upi_rxc_arb_misc -- Arb Miscellaneous
  event=0x4d
  .par_bias_vn0      umask=0x01  VN0/VN1 arbiter gave a second, consecutive win to VN0, delaying a VN1 win, because VN0 offered parallel AD/BL.
  .par_bias_vn1      umask=0x02  VN0/VN1 arbiter gave a second, consecutive win to VN1, delaying a VN0 win, because VN1 offered parallel AD/BL.
  .no_prog_ad_vn0    umask=0x04  Arbitration stage made no progress on pending AD VN0 messages because the slotting stage cannot accept a new message.
  .no_prog_ad_vn1    umask=0x08  Same, for AD VN1.
  .no_prog_bl_vn0    umask=0x10  Same, for BL VN0.
  .no_prog_bl_vn1    umask=0x20  Same, for BL VN1.
  .adbl_parallel_win umask=0x40  AD and BL messages won arbitration concurrently / in parallel.

unc_m3upi_rxc_arb_noad_req_vn0 -- Can't Arb for VN0
  event=0x49, message-class umasks as above
  VN0 message was not able to request arbitration while some other message won arbitration.
unc_m3upi_rxc_arb_noad_req_vn1 -- Can't Arb for VN1
  event=0x4a, message-class umasks as above
  VN1 message was not able to request arbitration while some other message won arbitration.

unc_m3upi_rxc_arb_nocred_vn0 -- No Credits to Arb for VN0
  event=0x47, message-class umasks as above
  VN0 message is blocked from requesting arbitration due to lack of remote UPI credits.

unc_m3upi_rxc_arb_nocred_vn1 -- No Credits to Arb for VN1
  event=0x48, message-class umasks as above
  VN1 message is blocked from requesting arbitration due to lack of remote UPI credits.
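The no-credit and lost-arbitration events above invite two obvious ratios. A minimal sketch under stated assumptions: the counts are supplied by the caller (e.g. from perf stat), and expressing starvation as a fraction of uclks is my framing, not the table's:

def vn0_credit_starvation(nocred_cycles: int, clockticks: int) -> float:
    """Fraction of uclks a VN0 message class was blocked from even
    requesting arbitration for lack of remote UPI credits
    (UNC_M3UPI_RxC_ARB_NOCRED_VN0.<class> / UNC_M3UPI_CLOCKTICKS)."""
    return nocred_cycles / clockticks if clockticks else 0.0

def vn0_arb_loss_share(lost: int, attempts: int) -> float:
    """Share of arbitration attempts a VN0 class lost
    (UNC_M3UPI_RxC_ARB_LOST_VN0.<class> over total attempts; the table
    names no total-attempts event, so the caller must derive one)."""
    return lost / attempts if attempts else 0.0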
unc_m3upi_rxc_bypassed -- Ingress Queue Bypasses
  event=0x40
  Number of times a message is bypassed around the Ingress Queue:
  .ad_s0_idle    umask=0x01  AD takes the bypass to slot 0 of an independent flit while the pipeline is idle.
  .ad_s0_bl_arb  umask=0x02  AD takes the bypass to slot 0 of an independent flit while a BL message is in arbitration.
  .ad_s1_bl_slot umask=0x04  AD takes the bypass to flit slot 1 while merging with a BL message in the same flit.
  .ad_s2_bl_slot umask=0x08  AD takes the bypass to flit slot 2 while merging with a BL message in the same flit.

unc_m3upi_rxc_collision_vn0 -- VN0 message lost contest for flit
  event=0x50, message-class umasks as above
  Count cases where Ingress VN0 packets lost the contest for Flit Slot 0.

unc_m3upi_rxc_collision_vn1 -- VN1 message lost contest for flit
  event=0x51, message-class umasks as above
  Count cases where Ingress VN1 packets lost the contest for Flit Slot 0.

unc_m3upi_rxc_crd_misc -- Miscellaneous Credit Events
  event=0x60
  .any_bgf_fifo   umask=0x01  At least one packet (flit) is in the BGF (FIFO only).
  .any_bgf_path   umask=0x02  At least one packet (flit) is in the BGF path (i.e. pipe to FIFO).
  .no_d2k_for_arb umask=0x04  A VN0 or VN1 BL RSP message was blocked from requesting arbitration due to lack of D2K CMP credits.
unc_m3upi_rxc_crd_occ -- Credit Occupancy
  event=0x61
  .vna_in_use    umask=0x01  Remote UPI VNA credit occupancy (number of credits in use), accumulated across all cycles.
  .flits_in_fifo umask=0x02  Occupancy of the M3UPI ingress -> UPI link layer BGF; packets (flits) in the FIFO.
  .flits_in_path umask=0x04  Occupancy of the M3UPI ingress -> UPI link layer BGF; packets (flits) in the path (i.e. pipe to FIFO, or FIFO).
  .txq_crd       umask=0x08  Link layer transmit queue credit occupancy (credits in use), accumulated across all cycles.
  .d2k_crd       umask=0x10  D2K completion FIFO credit occupancy (credits in use), accumulated across all cycles.
  .p1p_total     umask=0x20  Count of BL messages in pump-1-pending state, in the marker table and in the FIFO.
  .p1p_fifo      umask=0x40  Count of BL messages in pump-1-pending state, in the completion FIFO only.

unc_m3upi_rxc_cycles_ne_vn0 -- VN0 Ingress (from CMS) Queue - Cycles Not Empty
  event=0x43, message-class umasks as above
  Counts the number of cycles when the UPI Ingress is not empty.  This tracks one of the three rings that are used by the UPI agent.  It can be used in conjunction with the UPI Ingress Occupancy Accumulator event to calculate average queue occupancy.  Multiple ingress buffers can be tracked at a given time using multiple counters.

unc_m3upi_rxc_cycles_ne_vn1 -- VN1 Ingress (from CMS) Queue - Cycles Not Empty
  event=0x44, message-class umasks as above
  Counts the number of cycles when the UPI VN1 Ingress is not empty.  This tracks one of the three rings that are used by the UPI agent.  It can be used in conjunction with the UPI VN1 Ingress Occupancy Accumulator event to calculate average queue occupancy.  Multiple ingress buffers can be tracked at a given time using multiple counters.
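The descriptions above spell out the two standard derivations (occupancy accumulator over not-empty cycles for average occupancy; occupancy accumulator over allocations, i.e. Little's law, for average latency). A minimal sketch, pure arithmetic with counts supplied by the caller:

def avg_queue_occupancy(occupancy_accum: int, cycles_not_empty: int) -> float:
    """Average entries in the ingress queue while non-empty:
    UNC_M3UPI_RxC_OCCUPANCY_VNx.<class> / UNC_M3UPI_RxC_CYCLES_NE_VNx.<class>."""
    return occupancy_accum / cycles_not_empty if cycles_not_empty else 0.0

def avg_queue_latency_uclks(occupancy_accum: int, inserts: int) -> float:
    """Little's law: occupancy accumulator over allocations
    (UNC_M3UPI_RxC_OCCUPANCY_VNx.<class> / UNC_M3UPI_RxC_INSERTS_VNx.<class>)
    gives the average uclks a message spends queued."""
    return occupancy_accum / inserts if inserts else 0.0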
unc_m3upi_rxc_flits_data_not_sent -- Data Flit Not Sent
  event=0x57; .all (All) umask=0x01, .no_bgf (No BGF Credits) umask=0x02, .no_txq (No TxQ Credits) umask=0x04
  Data flit is ready for transmission but could not be sent.

unc_m3upi_rxc_flits_gen_bl -- Generating BL Data Flit Sequence
  event=0x59
  .p0_wait       umask=0x01  Waiting for data pump 0.
  .p1_wait       umask=0x02  Waiting for data pump 1.
  .p1p_to_limbo  umask=0x04  A BL message finished but is in limbo and moved to pump-1-pending logic.
  .p1p_busy      umask=0x08  Pump-1-pending logic is tracking at least one message.
  .p1p_at_limit  umask=0x10  Pump-1-pending logic is at capacity (pending table plus completion FIFO at limit).
  .p1p_hold_p0   umask=0x20  Pump-1-pending logic is at or near capacity, such that pump-0-only BL messages are getting stalled in the slotting stage.
  .p1p_fifo_full umask=0x40  Pump-1-pending completion FIFO is full.

unc_m3upi_rxc_flits_misc -- UNC_M3UPI_RxC_FLITS_MISC
  event=0x5a

unc_m3upi_rxc_flits_sent -- Sent Header Flit
  event=0x56
  .1_msg     umask=0x01  One message in flit; VNA or non-VNA flit.
  .2_msgs    umask=0x02  Two messages in flit; VNA flit.
  .3_msgs    umask=0x04  Three messages in flit; VNA flit.
  .1_msg_vnx umask=0x08  One message in flit; non-VNA flit.
  .slots_1   umask=0x10
  .slots_2   umask=0x20
  .slots_3   umask=0x40

unc_m3upi_rxc_flits_slot_bl -- Slotting BL Message Into Header Flit
  event=0x58
  .all                   umask=0x01  All.
  .need_data             umask=0x02  BL message requires a data flit sequence.
  .p0_wait               umask=0x04  Waiting for header pump 0.
  .p1_wait               umask=0x08  Waiting for header pump 1.
  .p1_not_req            umask=0x10  Header pump 1 is not required for the flit.
  .p1_not_req_but_bubble umask=0x20  Header pump 1 is not required for the flit, but flit transmission is delayed.
  .p1_not_req_not_avail  umask=0x40  Header pump 1 is not required for the flit and not available.

unc_m3upi_rxc_flit_gen_hdr1 -- Flit Gen - Header 1
  event=0x53
  Events related to Header Flit Generation - Set 1:
  .accum         umask=0x01  Header flit slotting control state machine is in any accumulate state; a multi-message flit may be assembled over multiple cycles.
  .accum_read    umask=0x02  (Accumulate Ready) State machine is in the accum_ready state; the flit is ready to send but transmission is blocked; more messages may be slotted into the flit.
  .accum_wasted  umask=0x04  Flit is being assembled over multiple cycles, but no additional message is slotted into the flit in the current cycle; the accumulate cycle is wasted.
  .ahead_blocked umask=0x08  (Run-Ahead - Blocked) Header flit slotting entered the run-ahead state; a new header flit is started while transmission of the prior, fully assembled flit is blocked.
  .ahead_msg     umask=0x10  (Run-Ahead - Message) Slotting is in run-ahead to start a new flit, and a message is actually slotted into the new flit.
  .par           umask=0x20  (Parallel Ok) New header flit construction may proceed in parallel with a data flit sequence.
  .par_msg       umask=0x40  (Parallel Message) A message is slotted into the header flit in parallel with a data flit sequence.
  .par_flit      umask=0x80  (Parallel Flit Finished) The header flit finished assembly in parallel with a data flit sequence.

unc_m3upi_rxc_flit_gen_hdr2 -- Flit Gen - Header 2
  event=0x54
  Events related to Header Flit Generation - Set 2:
  .rmstall       umask=0x01  Rate-matching stall injected.
  .rmstall_nomsg umask=0x02  Rate-matching stall injected, but no additional message slotted during the stall cycle.

unc_m3upi_rxc_flit_not_sent -- Header Not Sent
  event=0x55
  Header flit is ready for transmission but could not be sent:
  .all           umask=0x01  All.
  .no_bgf_crd    umask=0x02  No BGF credits available.
  .no_txq_crd    umask=0x04  No TxQ credits available.
  .no_bgf_no_msg umask=0x08  No BGF credits available; no additional message slotted into the flit.
  .no_txq_no_msg umask=0x10  No TxQ credits available; no additional message slotted into the flit.
  .one_taken     umask=0x20  Sending a header flit with only one slot taken (two slots free).
  .two_taken     umask=0x40  Sending a header flit with only two slots taken (one slot free).
  .three_taken   umask=0x80  Sending a header flit with three slots taken (no slots free).
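The .1_msg/.2_msgs/.3_msgs breakdown of sent header flits supports a simple packing-efficiency metric. A minimal sketch; the weighting of flits by message count is the obvious reading of those umasks, not something the table states:

def avg_msgs_per_header_flit(one_msg: int, two_msgs: int, three_msgs: int) -> float:
    """Messages carried per sent header flit, from
    UNC_M3UPI_RxC_FLITS_SENT.{1_MSG,2_MSGS,3_MSGS}."""
    flits = one_msg + two_msgs + three_msgs
    return (one_msg + 2 * two_msgs + 3 * three_msgs) / flits if flits else 0.0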
transmission but could not be sent; No BGF credits available; no additional message slotted into flit

Message classes referenced throughout the tables below:
  REQ  Home messages on AD; generally used to send requests, request responses, and snoop responses.
  SNP  Snoop messages on AD; used for outgoing snoops.
  RSP  Response messages (on AD or BL); RSP packets transmit a variety of protocol flits, including grants and completions (CMP).
  WB   Data Response messages on BL; generally used to transmit data with coherency, e.g. remote reads and writes, or cache-to-cache transfers.
  NCB  Non-Coherent Broadcast messages on BL; generally used to transmit data without coherency, e.g. non-coherent read data returns.
  NCS  Non-Coherent Standard messages on BL.

unc_m3upi_rxc_flit_not_sent.* (uncore interconnect, event=0x55) -- Header Not Sent
  .no_txq_crd     umask=4     header flit ready for transmission but not sent: no TxQ credits available
  .no_txq_no_msg  umask=0x10  header flit ready but not sent: no TxQ credits available and no additional message slotted into the flit
  .one_taken      umask=0x20  header flit sent with only one slot taken (two slots free)
  .three_taken    umask=0x80  header flit sent with three slots taken (no slots free)
  .two_taken      umask=0x40  header flit sent with only two slots taken (one slot free)

unc_m3upi_rxc_held.* (uncore interconnect, event=0x52) -- Message Held
  .cant_slot_ad      umask=0x40  some AD message could not be slotted (logical OR of all AD events under INGR_SLOT_CANT_MC_VN{0,1})
  .cant_slot_bl      umask=0x80  some BL message could not be slotted (logical OR of all BL events under INGR_SLOT_CANT_MC_VN{0,1})
  .parallel_ad_lost  umask=0x10  some AD message lost the contest for slot 0 (logical OR of all AD events under INGR_SLOT_LOST_MC_VN{0,1})
  .parallel_attempt  umask=4     AD and BL messages attempted to slot into the same flit in parallel
  .parallel_bl_lost  umask=0x20  some BL message lost the contest for slot 0 (logical OR of all BL events under INGR_SLOT_LOST_MC_VN{0,1})
  .parallel_success  umask=8     AD and BL messages were actually slotted into the same flit in parallel
  .vn0               umask=1     VN0 message(s) that could not be slotted into the last VN0 flit are held in the slotting stage while a VN1 flit is processed
  .vn1               umask=2     VN1 message(s) that could not be slotted into the last VN1 flit are held in the slotting stage while a VN0 flit is processed

unc_m3upi_rxc_inserts_vn0.* (uncore interconnect, event=0x41) -- VN0 Ingress (from CMS) Queue - Inserts
Counts the number of allocations into the UPI Ingress. This tracks one of the three rings used by the UPI agent; it can be combined with the UPI Ingress Occupancy Accumulator event to calculate average queue latency. Multiple ingress buffers can be tracked at a given time using multiple counters.
  .ad_req  umask=1     REQ on AD
  .ad_rsp  umask=4     RSP on AD
  .ad_snp  umask=2     SNP on AD
  .bl_ncb  umask=0x20  NCB on BL
  .bl_ncs  umask=0x40  NCS on BL
  .bl_rsp  umask=8     RSP on BL
  .bl_wb   umask=0x10  WB on BL

unc_m3upi_rxc_inserts_vn1.* (uncore interconnect, event=0x42) -- VN1 Ingress (from CMS) Queue - Inserts
As above, for the VN1 Ingress: counts allocations into the UPI VN1 Ingress; combine with the UPI VN1 Ingress Occupancy Accumulator event to calculate average queue latency.
  .ad_req  umask=1     REQ on AD
  .ad_rsp  umask=4     RSP on AD
  .ad_snp  umask=2     SNP on AD
  .bl_ncb  umask=0x20  NCB on BL
  .bl_ncs  umask=0x40  NCS on BL
  .bl_rsp  umask=8     RSP on BL
  .bl_wb   umask=0x10  WB on BL
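These encodings can be exercised directly from the command line. A minimal sketch, assuming a Linux host with the perf tool and assuming this box is exposed as a PMU named uncore_m3upi_0 (the PMU name and instance number are assumptions; check /sys/bus/event_source/devices on the target machine):

    import subprocess

    # Count VN0 ingress REQ-on-AD allocations system-wide for one second.
    # Encoding taken from the table above: event=0x41, umask=1.
    # "uncore_m3upi_0" is an assumed PMU name; adjust to what sysfs reports.
    subprocess.run(
        ["perf", "stat", "-a",
         "-e", "uncore_m3upi_0/event=0x41,umask=0x1/",
         "--", "sleep", "1"],
        check=True,
    )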
unc_m3upi_rxc_occupancy_vn0.* (uncore interconnect, event=0x45) -- VN0 Ingress (from CMS) Queue - Occupancy
Accumulates the occupancy of the given UPI VN0 Ingress queue in each cycle. This tracks one of the three ring Ingress buffers; use it with the UPI VN0 Ingress Not Empty event to calculate average occupancy, or with the UPI VN0 Ingress Allocations event to calculate average queuing latency.
  .ad_req  umask=1     REQ on AD
  .ad_rsp  umask=4     RSP on AD
  .ad_snp  umask=2     SNP on AD
  .bl_ncb  umask=0x20  NCB on BL
  .bl_ncs  umask=0x40  NCS on BL
  .bl_rsp  umask=8     RSP on BL
  .bl_wb   umask=0x10  WB on BL

unc_m3upi_rxc_occupancy_vn1.* (uncore interconnect, event=0x46) -- VN1 Ingress (from CMS) Queue - Occupancy
Same accumulation, for the UPI VN1 Ingress queue; use with the UPI VN1 Ingress Not Empty and UPI VN1 Ingress Allocations events.
  .ad_req  umask=1     REQ on AD
  .ad_rsp  umask=4     RSP on AD
  .ad_snp  umask=2     SNP on AD
  .bl_ncb  umask=0x20  NCB on BL
  .bl_ncs  umask=0x40  NCS on BL
  .bl_rsp  umask=8     RSP on BL
  .bl_wb   umask=0x10  WB on BL
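The inserts/occupancy pairing called out in these descriptions is a Little's-law calculation: the occupancy event accumulates queue depth every cycle, so dividing it by inserts yields average queuing latency in uncore cycles, and dividing it by not-empty cycles yields average depth while busy. A minimal sketch with hypothetical counter readings (the Not Empty companion event is referenced by the descriptions but not listed in this excerpt):

    # All counter values below are hypothetical, sampled over one interval.
    occupancy_sum    = 1_200_000  # unc_m3upi_rxc_occupancy_vn0.ad_req (event=0x45,umask=1)
    inserts          = 40_000     # unc_m3upi_rxc_inserts_vn0.ad_req   (event=0x41,umask=1)
    not_empty_cycles = 300_000    # assumed "Ingress Not Empty" companion event

    avg_latency = occupancy_sum / inserts           # avg cycles a REQ spends queued
    avg_depth   = occupancy_sum / not_empty_cycles  # avg entries while non-empty
    print(f"avg queue latency: {avg_latency:.1f} uncore cycles")
    print(f"avg depth while busy: {avg_depth:.2f} entries")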
unc_m3upi_rxc_packing_miss_vn0.* (uncore interconnect, event=0x4e) -- VN0 message can't slot into flit
Counts cases where the Ingress had packets to send but did not have time to pack them into the flit before it was sent to the Agent, leaving a slot NULL that could have been used.
  .ad_req  umask=1     REQ on AD
  .ad_rsp  umask=4     RSP on AD
  .ad_snp  umask=2     SNP on AD
  .bl_ncb  umask=0x20  NCB on BL
  .bl_ncs  umask=0x40  NCS on BL
  .bl_rsp  umask=8     RSP on BL
  .bl_wb   umask=0x10  WB on BL

unc_m3upi_rxc_packing_miss_vn1.* (uncore interconnect, event=0x4f) -- VN1 message can't slot into flit
Same description, for VN1; same umask layout:
  .ad_req umask=1, .ad_snp umask=2, .ad_rsp umask=4, .bl_rsp umask=8, .bl_wb umask=0x10, .bl_ncb umask=0x20, .bl_ncs umask=0x40

unc_m3upi_rxc_smi3_pftch.* (uncore interconnect, event=0x62) -- SMI3 Prefetch Messages
  .arrived    umask=1     arrived
  .arb_lost   umask=2     lost arbitration
  .slotted    umask=4     slotted
  .drop_old   umask=8     dropped - old
  .drop_wrap  umask=0x10  dropped - wrap: overwritten by a new message while the prefetch queue was full

unc_m3upi_rxc_vna_crd.* (uncore interconnect, event=0x5b) -- Remote VNA Credits
  .used        umask=1     number of remote VNA credits consumed per cycle
  .corrected   umask=2     number of remote VNA credits corrected (local return) per cycle
  .lt1         umask=4     remote VNA credit level is less than 1 (i.e. no VNA credits available)
  .lt4         umask=8     remote VNA credit level is less than 4; BL (or AD requiring 4 VNA) cannot arb on VNA
  .lt5         umask=0x10  remote VNA credit level is less than 5; parallel AD/BL arb on VNA not possible
  .any_in_use  umask=0x20  at least one remote VNA credit is in use

unc_m3upi_rxr_busy_starved.* (uncore interconnect, event=0xb4) -- Transgress Injection Starvation (higher priority)
Counts cycles under injection starvation mode, triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time; in this case, because a message from the other queue has higher priority.
  .ad_bnc  umask=1     AD - Bounce
  .ad_crd  umask=0x10  AD - Credit
  .bl_bnc  umask=4     BL - Bounce
  .bl_crd  umask=0x40  BL - Credit
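Because the lt1/lt4/lt5 events count cycles spent below a credit threshold, each one divided by elapsed uncore cycles gives the fraction of time the link was constrained at that level. A minimal sketch, assuming a clockticks event for the same box supplies the denominator (such an event is not listed in this excerpt):

    # Hypothetical cycle counts over one interval (event=0x5b umasks from above).
    lt1    = 50_000     # <1 credit: nothing can arb on VNA
    lt4    = 180_000    # <4 credits: BL (or AD needing 4 VNA) cannot arb
    lt5    = 260_000    # <5 credits: no parallel AD/BL arb
    cycles = 1_000_000  # assumed M3UPI clockticks over the same interval

    for label, n in (("<1", lt1), ("<4", lt4), ("<5", lt5)):
        print(f"VNA level {label}: {100.0 * n / cycles:.1f}% of cycles")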
unc_m3upi_rxr_bypass.* (uncore interconnect, event=0xb2) -- Transgress Ingress Bypass
Number of packets bypassing the CMS Ingress.
  .ad_bnc  umask=1     AD - Bounce
  .ad_crd  umask=0x10  AD - Credit
  .ak_bnc  umask=2     AK - Bounce
  .bl_bnc  umask=4     BL - Bounce
  .bl_crd  umask=0x40  BL - Credit
  .iv_bnc  umask=8     IV - Bounce

unc_m3upi_rxr_crd_starved.* (uncore interconnect, event=0xb3) -- Transgress Injection Starvation (no credit)
Counts cycles under injection starvation mode, triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time; in this case, the Ingress is unable to forward to the Egress due to a lack of credit.
  .ad_bnc  umask=1     AD - Bounce
  .ad_crd  umask=0x10  AD - Credit
  .ak_bnc  umask=2     AK - Bounce
  .bl_bnc  umask=4     BL - Bounce
  .bl_crd  umask=0x40  BL - Credit
  .ifv     umask=0x80  IFV - Credit
  .iv_bnc  umask=8     IV - Bounce

unc_m3upi_rxr_inserts.* (uncore interconnect, event=0xb1) -- Transgress Ingress Allocations
Number of allocations into the CMS Ingress, which queues up requests received from the mesh.
  .ad_bnc umask=1, .ak_bnc umask=2, .bl_bnc umask=4, .iv_bnc umask=8, .ad_crd umask=0x10, .bl_crd umask=0x40

unc_m3upi_rxr_occupancy.* (uncore interconnect, event=0xb0) -- Transgress Ingress Occupancy
Occupancy event for the Ingress buffers in the CMS, which queue up requests received from the mesh.
  .ad_bnc umask=1, .ak_bnc umask=2, .bl_bnc umask=4, .iv_bnc umask=8, .ad_crd umask=0x10, .bl_crd umask=0x40
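The two starvation events split the same symptom by cause: 0xb3 counts cycles the Ingress could not forward for lack of Egress credit, while 0xb4 counts cycles it lost to the higher-priority queue. Comparing them per ring shows whether backpressure or arbitration dominates. A minimal sketch with hypothetical counts:

    # Hypothetical per-ring cycle counts from the 0xb3/0xb4 tables above.
    crd_starved  = {"ad_bnc": 9_000, "bl_bnc": 4_000}  # event=0xb3
    busy_starved = {"ad_bnc": 1_500, "bl_bnc": 7_500}  # event=0xb4

    for ring, c in crd_starved.items():
        b = busy_starved[ring]
        cause = "credit backpressure" if c > b else "arbitration priority"
        print(f"{ring}: crd={c} busy={b} -> dominated by {cause}")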
unc_m3upi_stall_no_txr_horz_crd_ad_ag0.* (uncore interconnect, event=0xd0) -- Stall on No AD Agent0 Transgress Credits
Number of cycles the AD Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.
  .tgr0 umask=1, .tgr1 umask=2, .tgr2 umask=4, .tgr3 umask=8, .tgr4 umask=0x10, .tgr5 umask=0x20

unc_m3upi_stall_no_txr_horz_crd_ad_ag1.* (uncore interconnect, event=0xd2) -- Stall on No AD Agent1 Transgress Credits
Same description, for the AD Agent 1 Egress Buffer; same .tgr0-.tgr5 umask layout.

unc_m3upi_stall_no_txr_horz_crd_bl_ag0.* (uncore interconnect, event=0xd4) -- Stall on No BL Agent0 Transgress Credits
Same description, for the BL Agent 0 Egress Buffer; same .tgr0-.tgr5 umask layout.

unc_m3upi_stall_no_txr_horz_crd_bl_ag1.* (uncore interconnect, event=0xd6) -- Stall on No BL Agent1 Transgress Credits
Same description, for the BL Agent 1 Egress Buffer; same .tgr0-.tgr5 umask layout.

unc_m3upi_txc_ad_arb_fail.* (uncore interconnect, event=0x30) -- Failed ARB for AD
AD arb but no win: arb request asserted but not won.
  .vn0_req umask=1, .vn0_snp umask=2, .vn0_rsp umask=4, .vn0_wb umask=8,
  .vn1_req umask=0x10, .vn1_snp umask=0x20, .vn1_rsp umask=0x40, .vn1_wb umask=0x80
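Each stall event dedicates one umask bit to one transgress, tgr0 through tgr5. Assuming the usual uncore convention that umask bits OR together (worth verifying on the target part before relying on it), all six can be summed in a single counter with umask=0x3f. A sketch that builds the event strings ("uncore_m3upi_0" is an assumed PMU name):

    # tgr0..tgr5 map to umask bits 0..5, per the 0xd0/0xd2/0xd4/0xd6 tables.
    per_tgr = {f"tgr{i}": 1 << i for i in range(6)}

    combined = 0
    for bit in per_tgr.values():
        combined |= bit  # 0x3f: all six transgresses at once

    for name, bit in per_tgr.items():
        print(f"uncore_m3upi_0/event=0xd0,umask={bit:#x}/  # AD Agent0 stall, {name}")
    print(f"uncore_m3upi_0/event=0xd0,umask={combined:#x}/  # all transgresses combined")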
unc_m3upi_txc_ad_flq_bypass.* (uncore interconnect, event=0x2c) -- AD FlowQ Bypass
Counts cases when the AD flowQ is bypassed (S0, S1 and S2 indicate which slot was bypassed, S0 having the highest priority and S2 the least).
  .ad_slot0 umask=1, .ad_slot1 umask=2, .ad_slot2 umask=4, .bl_early_rsp umask=8

unc_m3upi_txc_ad_flq_cycles_ne.* (uncore interconnect, event=0x27) -- AD Flow Q Not Empty
Number of cycles the AD Egress queue is Not Empty.
  .vn0_req umask=1, .vn0_snp umask=2, .vn0_rsp umask=4, .vn0_wb umask=8,
  .vn1_req umask=0x10, .vn1_snp umask=0x20, .vn1_rsp umask=0x40, .vn1_wb umask=0x80

unc_m3upi_txc_ad_flq_inserts.* (uncore interconnect, event=0x2d) -- AD Flow Q Inserts
Counts the number of allocations into the QPI FlowQ. This can be used with the QPI FlowQ Occupancy Accumulator event to calculate average queue latency. Only a single FlowQ queue can be tracked at any given time; it is not possible to filter based on direction or polarity.
  .vn0_req umask=1, .vn0_snp umask=2, .vn0_rsp umask=4, .vn0_wb umask=8,
  .vn1_req umask=0x10, .vn1_snp umask=0x20, .vn1_rsp umask=0x40

unc_m3upi_txc_ad_flq_occupancy.* (uncore interconnect, event=0x1c) -- AD Flow Q Occupancy
  .vn0_req umask=1, .vn0_snp umask=2, .vn0_rsp umask=4, .vn0_wb umask=8,
  .vn1_req umask=0x10, .vn1_snp umask=0x20, .vn1_rsp umask=0x40
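With 0x2c counting AD flowQ bypasses and 0x2d counting flowQ allocations, their ratio estimates how often AD traffic skips the queue; treating bypassed messages as not also inserted is an assumption about how the hardware counts. A minimal sketch:

    bypass  = 12_000  # sum over unc_m3upi_txc_ad_flq_bypass umasks (event=0x2c), hypothetical
    inserts = 48_000  # sum over unc_m3upi_txc_ad_flq_inserts umasks (event=0x2d), hypothetical

    rate = 100.0 * bypass / (bypass + inserts)
    print(f"AD flowQ bypass rate: {rate:.1f}% of AD messages")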
unc_m3upi_txc_ad_snpf_grp1_vn1.* (uncore interconnect, event=0x3c) -- Number of Snoop Targets
Snpfanout target counts and non-idle cycles; together these can be used to calculate average snpfanout latency.
  .vn0_peer_upi0  umask=1     VN0 SnpF to peer UPI0
  .vn0_peer_upi1  umask=2     VN0 SnpF to peer UPI1
  .vn0_cha        umask=4     VN0 SnpF to CHA
  .vn1_peer_upi0  umask=8     VN1 SnpF to peer UPI0
  .vn1_peer_upi1  umask=0x10  VN1 SnpF to peer UPI1
  .vn1_cha        umask=0x20  VN1 SnpF to CHA
  .vn0_non_idle   umask=0x40  non-idle cycles issuing VN0 SnpF
  .vn1_non_idle   umask=0x80  non-idle cycles issuing VN1 SnpF

unc_m3upi_txc_ad_snpf_grp2_vn1.* (uncore interconnect, event=0x3d) -- Snoop Arbitration
Outcome of SnpF-pending arbitration:
  .vn0_snpfp_nonsnp  umask=1  FlowQ txn issued while SnpF pending on VN0
  .vn1_snpfp_nonsnp  umask=2  FlowQ txn issued while SnpF pending on VN1
  .vn0_snpfp_vn2snp  umask=4  FlowQ VN0 SnpF issued while SnpF pending on VN1
  .vn1_snpfp_vn0snp  umask=8  FlowQ VN1 SnpF issued while SnpF pending on VN0

unc_m3upi_txc_ad_spec_arb_crd_avail.* (uncore interconnect, event=0x34) -- Speculative ARB for AD - Credit Available
AD speculative arb request with the prior cycle's credit check complete and credit available.
  .vn0_req umask=1, .vn0_snp umask=2, .vn0_wb umask=8, .vn1_req umask=0x10, .vn1_snp umask=0x20, .vn1_wb umask=0x80

unc_m3upi_txc_ad_spec_arb_new_msg.* (uncore interconnect, event=0x33) -- Speculative ARB for AD - New Message
AD speculative arb request due to a new message arriving on a specific channel (MC/VN).
  .vn0_req umask=1, .vn0_snp umask=2, .vn0_wb umask=8, .vn1_req umask=0x10, .vn1_snp umask=0x20, .vn1_wb umask=0x80

unc_m3upi_txc_ad_spec_arb_no_other_pend.* (uncore interconnect, event=0x32) -- Speculative ARB for AD - No Credit
AD speculative arb request asserted because no other channel was active (it has a valid entry but no credits to send).
  .vn0_req umask=1, .vn0_snp umask=2, .vn0_rsp umask=4, .vn0_wb umask=8,
  .vn1_req umask=0x10, .vn1_snp umask=0x20, .vn1_rsp umask=0x40, .vn1_wb umask=0x80

unc_m3upi_txc_ak_flq_inserts (uncore interconnect, event=0x2f) -- AK Flow Q Inserts
unc_m3upi_txc_ak_flq_occupancy (uncore interconnect, event=0x1e) -- AK Flow Q Occupancy
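As the 0x3c descriptions note, target counts plus non-idle cycles are the inputs for snpfanout analysis; the simplest derived number is average SnpF targets per non-idle cycle. A minimal sketch with hypothetical VN0 counts:

    cha       = 30_000  # .vn0_cha       (event=0x3c,umask=4)
    peer_upi0 = 12_000  # .vn0_peer_upi0 (umask=1)
    peer_upi1 = 11_000  # .vn0_peer_upi1 (umask=2)
    non_idle  = 25_000  # .vn0_non_idle  (umask=0x40)

    avg_targets = (cha + peer_upi0 + peer_upi1) / non_idle
    print(f"avg VN0 SnpF targets per non-idle cycle: {avg_targets:.2f}")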
unc_m3upi_txc_bl_arb_fail.* (uncore interconnect, event=0x35) -- Failed ARB for BL
BL arb but no win: arb request asserted but not won.
  .vn0_rsp umask=1, .vn0_wb umask=2, .vn0_ncb umask=4, .vn0_ncs umask=8,
  .vn1_rsp umask=0x10, .vn1_wb umask=0x20, .vn1_ncb umask=0x40, .vn1_ncs umask=0x80

unc_m3upi_txc_bl_flq_cycles_ne.* (uncore interconnect, event=0x28) -- BL Flow Q Not Empty
Number of cycles the BL Egress queue is Not Empty.
  .vn0_req umask=1, .vn0_snp umask=2, .vn0_rsp umask=4, .vn0_wb umask=8,
  .vn1_req umask=0x10, .vn1_snp umask=0x20, .vn1_rsp umask=0x40, .vn1_wb umask=0x80

unc_m3upi_txc_bl_flq_inserts.* (uncore interconnect, event=0x2e) -- BL Flow Q Inserts
Counts the number of allocations into the QPI FlowQ; same usage notes as the AD Flow Q Inserts above (single FlowQ tracked at a time, no direction/polarity filtering).
  .vn0_ncb umask=1, .vn0_ncs umask=2, .vn0_wb umask=4, .vn0_rsp umask=8,
  .vn1_ncb umask=0x10, .vn1_ncs umask=0x20, .vn1_wb umask=0x40, .vn1_rsp umask=0x80

unc_m3upi_txc_bl_flq_occupancy.* (uncore interconnect, event=0x1d) -- BL Flow Q Occupancy
  .vn0_rsp umask=1, .vn0_wb umask=2, .vn0_ncb umask=4, .vn0_ncs umask=8,
  .vn1_rsp umask=0x10, .vn1_wb umask=0x20, .vn1_ncb umask=0x40, .vn1_ncs umask=0x80

unc_m3upi_txc_bl_spec_arb_new_msg.* (uncore interconnect, event=0x38) -- Speculative ARB for BL - New Message
BL speculative arb request due to a new message arriving on a specific channel (MC/VN).
  .vn0_wb umask=1, .vn0_ncb umask=2, .vn0_ncs umask=8, .vn1_wb umask=0x10, .vn1_ncb umask=0x20, .vn1_ncs umask=0x80

unc_m3upi_txc_bl_spec_arb_no_other_pend.* (uncore interconnect, event=0x37) -- Speculative ARB for BL - No Credit
BL speculative arb request asserted because no other channel was active (it has a valid entry but no credits to send).
  .vn0_rsp umask=1, .vn0_wb umask=2, .vn0_ncb umask=4, .vn0_ncs umask=8,
  .vn1_rsp umask=0x10, .vn1_wb umask=0x20, .vn1_ncb umask=0x40, .vn1_ncs umask=0x80
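Because every failed-ARB event shares one event code with one umask bit per VN/message class, a breakdown is a plain sum over per-umask counts. A minimal sketch over the BL table above (all counts hypothetical):

    bl_arb_fail = {  # event=0x35, hypothetical per-umask counts
        "vn0_rsp": 500, "vn0_wb": 800, "vn0_ncb": 120, "vn0_ncs": 60,
        "vn1_rsp": 300, "vn1_wb": 450, "vn1_ncb": 90,  "vn1_ncs": 40,
    }
    total = sum(bl_arb_fail.values())
    for name, n in sorted(bl_arb_fail.items(), key=lambda kv: -kv[1]):
        print(f"{name}: {n} ({100.0 * n / total:.1f}% of failed BL arbs)")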
unc_m3upi_txr_horz_bypass (uncore interconnect, event=0x9f): CMS Horizontal Bypass Used
  umask: ad_bnc=0x01, ak_bnc=0x02, bl_bnc=0x04, iv_bnc=0x08, ad_crd=0x10, bl_crd=0x40
  Number of packets bypassing the Horizontal Egress, broken down by ring type and CMS Agent.

unc_m3upi_txr_horz_cycles_full (uncore interconnect, event=0x96): Cycles CMS Horizontal Egress Queue is Full
  umask: ad_bnc=0x01, ak_bnc=0x02, bl_bnc=0x04, iv_bnc=0x08, ad_crd=0x10, bl_crd=0x40
  Cycles the Transgress buffers in the Common Mesh Stop are Full. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.

unc_m3upi_txr_horz_cycles_ne (uncore interconnect, event=0x97): Cycles CMS Horizontal Egress Queue is Not Empty
  umask: ad_bnc=0x01, ak_bnc=0x02, bl_bnc=0x04, iv_bnc=0x08, ad_crd=0x10, bl_crd=0x40
  Cycles the Transgress buffers in the Common Mesh Stop are Not Empty. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.
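Since every variant of these CMS events shares one event code and differs only in umask, the raw perf event strings can be generated mechanically. A hypothetical Python helper, assuming a PMU named uncore_m3upi_0 (the PMU name is an assumption; verify it under /sys/bus/event_source/devices/ on the target machine):

# Build perf raw-event strings for each umask variant of the
# "Horizontal Egress Queue is Full" event (event=0x96).
PMU = "uncore_m3upi_0"  # assumed PMU name; check sysfs on the target
EVENT = 0x96
UMASKS = {"ad_bnc": 0x01, "ak_bnc": 0x02, "bl_bnc": 0x04,
          "iv_bnc": 0x08, "ad_crd": 0x10, "bl_crd": 0x40}

for name, umask in UMASKS.items():
    print(f"{name}: {PMU}/event={EVENT:#x},umask={umask:#x}/")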
unc_m3upi_txr_horz_inserts (uncore interconnect, event=0x95): CMS Horizontal Egress Inserts
  umask: ad_bnc=0x01, ak_bnc=0x02, bl_bnc=0x04, iv_bnc=0x08, ad_crd=0x10, bl_crd=0x40
  Number of allocations into the Transgress buffers in the Common Mesh Stop. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.

unc_m3upi_txr_horz_nack (uncore interconnect, event=0x99): CMS Horizontal Egress NACKs
  umask: ad_bnc=0x01, ak_bnc=0x02, bl_bnc=0x04, iv_bnc=0x08, ad_crd=0x20, bl_crd=0x40
  Counts the number of Egress packets NACK'ed on to the Horizontal Ring.
unc_m3upi_txr_horz_occupancy (uncore interconnect, event=0x94): CMS Horizontal Egress Occupancy
  umask: ad_bnc=0x01, ak_bnc=0x02, bl_bnc=0x04, iv_bnc=0x08, ad_crd=0x10, bl_crd=0x40
  Occupancy event for the Transgress buffers in the Common Mesh Stop. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.

unc_m3upi_txr_horz_starved (uncore interconnect, event=0x9b): CMS Horizontal Egress Injection Starvation
  umask: ad_bnc=0x01, ak_bnc=0x02, bl_bnc=0x04, iv_bnc=0x08
  Counts injection starvation, triggered when the CMS Transgress buffer cannot send a transaction onto the Horizontal ring for a long period of time.
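Occupancy and not-empty cycles combine into an average-depth metric: occupancy accumulated per cycle, divided by the cycles in which the queue held anything, gives the mean egress depth while active. A sketch with placeholder values:

# Average horizontal-egress depth while non-empty, for one umask class:
#   avg_depth = occupancy_sum (event 0x94) / cycles_not_empty (event 0x97)
occupancy_sum = 9_000_000       # placeholder occupancy-accumulator reading
cycles_not_empty = 3_000_000    # placeholder not-empty cycle count

avg_depth = occupancy_sum / cycles_not_empty if cycles_not_empty else 0.0
print(f"average egress depth while active: {avg_depth:.2f} entries")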
unc_m3upi_txr_vert_ads_used (uncore interconnect, event=0x9c): CMS Vertical ADS Used
  umask: ad_ag0=0x01 (AD - Agent 0), ak_ag0=0x02, bl_ag0=0x04, ad_ag1=0x10, ak_ag1=0x20, bl_ag1=0x40
  Number of packets using the Vertical Anti-Deadlock Slot, broken down by ring type and CMS Agent.

unc_m3upi_txr_vert_bypass (uncore interconnect, event=0x9e): CMS Vertical Bypass Used
  umask: ad_ag0=0x01, ak_ag0=0x02, bl_ag0=0x04, iv=0x08, ad_ag1=0x10, ak_ag1=0x20, bl_ag1=0x40
  Number of packets bypassing the Vertical Egress, broken down by ring type and CMS Agent.

unc_m3upi_txr_vert_cycles_full (uncore interconnect, event=0x92): Cycles CMS Vertical Egress Queue Is Full
  umask: ad_ag0=0x01, ak_ag0=0x02, bl_ag0=0x04, iv=0x08, ad_ag1=0x10, ak_ag1=0x20, bl_ag1=0x40
  Number of cycles the Common Mesh Stop Egress was Full. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. Per-umask ring traffic: ad_ag0 counts ring transactions from Agent 0 destined for the AD ring (for example outbound requests, snoop requests, and snoop responses); ad_ag1, Agent 1 to the AD ring (commonly outbound requests); ak_ag0, Agent 0 to the AK ring (commonly credit returns and GO responses); ak_ag1, Agent 1 to the AK ring; bl_ag0, Agent 0 to the BL ring (commonly data sent from the cache to various destinations); bl_ag1, Agent 1 to the BL ring (commonly writeback data to the cache); iv, Agent 0 to the IV ring (commonly snoops to the cores).
unc_m3upi_txr_vert_cycles_ne (uncore interconnect, event=0x93): Cycles CMS Vertical Egress Queue Is Not Empty
  umask: ad_ag0=0x01, ak_ag0=0x02, bl_ag0=0x04, iv=0x08, ad_ag1=0x10, ak_ag1=0x20, bl_ag1=0x40
  Number of cycles the Common Mesh Stop Egress was Not Empty. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. Per-umask ring traffic breakdown as for unc_m3upi_txr_vert_cycles_full above.
unc_m3upi_txr_vert_inserts (uncore interconnect, event=0x91): CMS Vert Egress Allocations
  umask: ad_ag0=0x01, ak_ag0=0x02, bl_ag0=0x04, iv=0x08, ad_ag1=0x10, ak_ag1=0x20, bl_ag1=0x40
  Number of allocations into the Common Mesh Stop Egress. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. Per-umask ring traffic breakdown as for unc_m3upi_txr_vert_cycles_full above.
unc_m3upi_txr_vert_nack (uncore interconnect, event=0x98): CMS Vertical Egress NACKs
  umask: ad_ag0=0x01, ak_ag0=0x02, bl_ag0=0x04, iv=0x08, ad_ag1=0x10, ak_ag1=0x20, bl_ag1=0x40
  Counts the number of Egress packets NACK'ed on to the Vertical Ring.

unc_m3upi_txr_vert_occupancy (uncore interconnect, event=0x90): CMS Vert Egress Occupancy
  umask: ad_ag0=0x01, ak_ag0=0x02, bl_ag0=0x04, iv=0x08, ad_ag1=0x10, ak_ag1=0x20, bl_ag1=0x40
  Occupancy event for the Egress buffers in the Common Mesh Stop. The egress is used to queue up requests destined for the Vertical Ring on the Mesh. Per-umask ring traffic breakdown as for unc_m3upi_txr_vert_cycles_full above.
unc_m3upi_txr_vert_starved (uncore interconnect, event=0x9a): CMS Vertical Egress Injection Starvation
  umask: ad_ag0=0x01, ak_ag0=0x02, bl_ag0=0x04, iv=0x08, ad_ag1=0x10, ak_ag1=0x20, bl_ag1=0x40
  Counts injection starvation, triggered when the CMS Egress cannot send a transaction onto the Vertical ring for a long period of time.
unc_m3upi_upi_peer_ad_credits_empty (uncore interconnect, event=0x20): UPI0 AD Credits Empty
  umask: vna=0x01, vn0_req=0x02, vn0_snp=0x04, vn0_rsp=0x08, vn1_req=0x10, vn1_snp=0x20, vn1_rsp=0x40
  No credits available to send to UPIs on the AD Ring.

unc_m3upi_upi_peer_bl_credits_empty (uncore interconnect, event=0x21): UPI0 BL Credits Empty
  umask: vna=0x01, vn0_rsp=0x02, vn0_ncs_ncb=0x04, vn0_wb=0x08, vn1_rsp=0x10, vn1_ncs_ncb=0x20, vn1_wb=0x40
  No credits available to send to UPI on the BL Ring (differs between non-SMI and SMI mode).

unc_m3upi_upi_prefetch_spawn (uncore interconnect, event=0x29): Prefetches generated by the flow control queue of the M3UPI unit
  Counts cases where the flow control queue that sits between the Intel(R) Ultra Path Interconnect (UPI) and the mesh spawns a prefetch to the iMC (Memory Controller).

unc_m3upi_vert_ring_ad_in_use (uncore interconnect, event=0xa6): Vertical AD Ring In Use
  umask: up_even=0x01 (Up and Even), up_odd=0x02 (Up and Odd), dn_even=0x04 (Down and Even), dn_odd=0x08 (Down and Odd)
  Counts the number of cycles that the Vertical AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. There are really two rings: a clockwise ring and a counter-clockwise ring. On the left side of the ring the UP direction is on the clockwise ring and DN is on the counter-clockwise ring; on the right side this is reversed. The first half of the CBos are on the left side of the ring and the second half are on the right side, so (for example) in a 4-core part CBo 0 UP AD is NOT the same ring as CBo 2 UP AD, because they are on opposite sides of the ring.
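A credits-empty count such as the two events above is most useful as a fraction of elapsed uncore cycles. A sketch, with the caveat that the cycle denominator is an assumption; use whatever fixed clocktick counter the M3UPI PMU on the target exposes:

# Fraction of cycles with no VN0 REQ credit toward the UPI AD ring:
#   ratio = credits-empty cycles (event 0x20, umask 0x02) / uncore clockticks
ad_credits_empty_vn0_req = 250_000   # placeholder counter reading
m3upi_clockticks = 5_000_000         # assumed fixed cycle counter on the same PMU

pct = 100.0 * ad_credits_empty_vn0_req / m3upi_clockticks
print(f"VN0 REQ starved of AD credits: {pct:.2f}% of cycles")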
unc_m3upi_vert_ring_ak_in_use (uncore interconnect, event=0xa8): Vertical AK Ring In Use
  umask: up_even=0x01, up_odd=0x02, dn_even=0x04, dn_odd=0x08
  Counts the number of cycles that the Vertical AK ring is being used at this ring stop, with the same inclusion rules and two-ring (clockwise/counter-clockwise) topology notes as unc_m3upi_vert_ring_ad_in_use above.
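Because the in-use events split by direction and polarity, total ring utilisation at a stop is the sum of all four umasks over elapsed cycles. A sketch with placeholder values (the cycle denominator is again an assumed fixed uncore counter):

# Vertical AD ring utilisation at this ring stop (event 0xa6):
up_even, up_odd, dn_even, dn_odd = 100_000, 90_000, 80_000, 70_000
cycles = 5_000_000  # assumed fixed uncore cycle count

busy = (up_even + up_odd + dn_even + dn_odd) / cycles
print(f"vertical AD ring busy: {100.0 * busy:.2f}% of cycles")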
unc_m3upi_vert_ring_bl_in_use (uncore interconnect, event=0xaa): Vertical BL Ring in Use
  umask: up_even=0x01, up_odd=0x02, dn_even=0x04, dn_odd=0x08
  Counts the number of cycles that the Vertical BL ring is being used at this ring stop, with the same inclusion rules and two-ring topology notes as unc_m3upi_vert_ring_ad_in_use above.
unc_m3upi_vert_ring_iv_in_use (uncore interconnect, event=0xac): Vertical IV Ring in Use
  umask: up=0x01 (Up), dn=0x04 (Down)
  Counts the number of cycles that the Vertical IV ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. There is only one IV ring; therefore, to monitor the Even ring select both UP_EVEN and DN_EVEN, and to monitor the Odd ring select both UP_ODD and DN_ODD.
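The even/odd guidance above translates directly into event lists: to watch one physical ring you programme the matching Up and Down umasks together. A hypothetical helper (PMU name assumed, as before):

# Monitor the "Even" physical ring on the vertical AD ring (event 0xa6)
# by pairing UP_EVEN (umask 0x1) with DN_EVEN (umask 0x4).
PMU = "uncore_m3upi_0"  # assumed PMU name
events = [f"{PMU}/event=0xa6,umask={u:#x}/" for u in (0x01, 0x04)]
print(",".join(events))  # pass to a perf-stat style event list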
unc_m3upi_vn0_credits_used (uncore interconnect, event=0x5c): VN0 Credit Used
  umask: req=0x01, snp=0x02, rsp=0x04, wb=0x08, ncb=0x10, ncs=0x20
  Number of times a VN0 credit was used on the DRS message channel. For a request to be transferred across UPI, it must be guaranteed a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN0: VNA is a shared pool used to achieve high performance, while the VN0 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit and fall back to VN0 if that fails. Note that a single VN0 credit holds access to potentially multiple flit buffers: a transfer that uses VNA could use 9 flit buffers and in that case uses 9 credits, while a transfer on VN0 only counts a single credit even though it may use multiple buffers. Message classes: REQ (Home messages on AD; generally requests, request responses, and snoop responses), SNP (Snoops on AD; outgoing snoops), RSP (Responses on AD or BL; a variety of protocol flits including grants and completions, CMP), WB (Data Responses on BL; generally data with coherency, for example remote reads and writes or cache-to-cache transfers), NCB (Non-Coherent Broadcast on BL; generally data without coherency, for example non-coherent read data returns), NCS (Non-Coherent Standard on BL).
unc_m3upi_vn0_no_credits (uncore interconnect, event=0x5e): VN0 No Credits
  umask: req=0x01, snp=0x02, rsp=0x04, wb=0x08, ncb=0x10, ncs=0x20
  Number of cycles there were no VN0 credits available, broken down by the same message classes as unc_m3upi_vn0_credits_used above.
unc_m3upi_vn1_credits_used (uncore interconnect, event=0x5d): VN1 Credit Used
  umask: req=0x01, snp=0x02, rsp=0x04, wb=0x08, ncb=0x10, ncs=0x20
  Number of times a VN1 credit was used. Credit-pool behavior is as described for unc_m3upi_vn0_credits_used above, with VN1 in place of VN0: requests first attempt to acquire a VNA credit and fall back to VN1 if that fails, and a transfer on VN1 only counts a single credit even though it may use multiple flit buffers. Message class breakdown as above.
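The credit accounting described above is easy to get wrong when comparing VNA and VN0/VN1 counts, since the pools are not charged symmetrically. A worked example of the 9-flit case from the description:

# Per the description: a VNA transfer consumes one credit per flit buffer,
# while a VN0/VN1 transfer consumes exactly one credit regardless of length.
flits_per_transfer = 9
vna_transfers, vn0_transfers = 1_000, 50

vna_credits = vna_transfers * flits_per_transfer  # 9 credits per transfer
vn0_credits = vn0_transfers                       # 1 credit per transfer
print(f"VNA credits consumed: {vna_credits}, VN0 credits consumed: {vn0_credits}")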
unc_m3upi_vn1_no_credits (uncore interconnect, event=0x5f): VN1 No Credits
  umask: req=0x01, snp=0x02, rsp=0x04, wb=0x08, ncb=0x10, ncs=0x20
  Number of cycles there were no VN1 credits available, broken down by the same message classes as unc_m3upi_vn0_credits_used above.

unc_nounit_txc_bl.drs_upi (uncore interconnect, event=0x40, umask=0x04): deprecated
  This event is deprecated. Refer to the new event UNC_M2M_TxC_BL.DRS_UPI.

unc_upi_clockticks (uncore_upi, event=0x1): Clocks of the Intel(R) Ultra Path Interconnect (UPI)
  Counts clockticks of the fixed-frequency clock controlling the Intel(R) Ultra Path Interconnect (UPI). This clock runs at 1/8th the GT/s speed of the UPI link; for example, a 9.6 GT/s link has a fixed frequency of 1.2 GHz.

unc_upi_direct_attempts (uncore_upi, event=0x12): Data Response packets that go direct
  umask: d2c=0x01 (direct to core), d2u=0x02 (direct to Intel(R) UPI)
  Counts Data Response (DRS) packets that attempted to go direct to core (d2c) or direct to the Intel(R) Ultra Path Interconnect (d2u), bypassing the CHA. The d2k variant (event=0x12, umask=0x02) is deprecated; refer to the new event UNC_UPI_DIRECT_ATTEMPTS.D2U.
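The fixed-clock relationship in the clockticks description gives a direct way to turn tick counts into wall time. A small sketch of that arithmetic:

# UPI fixed clock = link speed (GT/s) / 8, so a 9.6 GT/s link ticks at 1.2 GHz.
def upi_seconds(clockticks, link_gts=9.6):
    """Convert a UNC_UPI_CLOCKTICKS reading into elapsed seconds."""
    return clockticks / (link_gts / 8 * 1e9)

print(upi_seconds(1_200_000_000))  # -> 1.0 second on a 9.6 GT/s link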
unc_upi_flowq_no_vna_crd (uncore_upi, event=0x18): UNC_UPI_FLOWQ_NO_VNA_CRD
  umask: ad_vna_eq0=0x01, ad_vna_eq1=0x02, ad_vna_eq2=0x04, bl_vna_eq0=0x08, ak_vna_eq0=0x10, ak_vna_eq1=0x20, ak_vna_eq2=0x40, ak_vna_eq3=0x80

unc_upi_l1_power_cycles (uncore_upi, event=0x21): Cycles Intel(R) UPI is in L1 power mode (shutdown)
  Counts cycles when the Intel(R) Ultra Path Interconnect (UPI) is in L1 power mode. L1 is a mode that totally shuts down the UPI link. Link power states are per link and per direction, so for example the Tx direction could be in one state while Rx is in another; this event only counts when both directions are shut down.

unc_upi_m3_byp_blocked (uncore_upi, event=0x14): UNC_UPI_M3_BYP_BLOCKED
  umask: flowq_ad_vna_le2=0x01, flowq_bl_vna_eq0=0x02, flowq_ak_vna_le3=0x04, bgf_crd=0x08, gv_block=0x10

unc_upi_m3_crd_return_blocked (uncore_upi, event=0x16): UNC_UPI_M3_CRD_RETURN_BLOCKED

unc_upi_m3_rxq_blocked (uncore_upi, event=0x15): UNC_UPI_M3_RXQ_BLOCKED
  umask: flowq_ad_vna_le2=0x01, flowq_ad_vna_btw_2_thresh=0x02, flowq_bl_vna_eq0=0x04, flowq_bl_vna_btw_0_thresh=0x08, flowq_ak_vna_le3=0x10, bgf_crd=0x20, gv_block=0x40

unc_upi_phy_init_cycles (uncore_upi, event=0x20): Cycles where the phy is not in L0, L0c, L0p, or L1

unc_upi_power_l1_nack (uncore_upi, event=0x23): L1 Req Nack
  Counts the number of times a link sends/receives a LinkReqNAck. When a UPI link wants to change power state, the Tx side initiates a request to the Rx side requesting the change; the request can be accepted or denied. If the Rx side replies with an Ack the power mode changes; if it replies with a NAck, no change takes place. This can be filtered on Rx versus Tx: an Rx LinkReqNAck refers to receiving an NAck (meaning this agent's Tx originally requested the power change), while a Tx LinkReqNAck refers to sending this command (meaning the peer agent's Tx originally requested the power change and this agent rejected it).
If it replies with NAck, no change will take place.  This can be filtered based on Rx and Tx.  An Rx LinkReqNAck refers to receiving an NAck (meaning this agent's Tx originally requested the power change).  A Tx LinkReqNAck refers to sending this command (meaning the peer agent's Tx originally requested the power change and this agent accepted it)unc_upi_power_l1_requncore interconnectL1 Req (same as L1 Ack)event=0x2201Counts the number of times a link sends/receives a LinkReqAck.  When the UPI links would like to change power state, the Tx side initiates a request to the Rx side requesting to change states.  This request can either be accepted or denied.  If the Rx side replies with an Ack, the power mode will change.  If it replies with NAck, no change will take place.  This can be filtered based on Rx and Tx.  An Rx LinkReqAck refers to receiving an Ack (meaning this agent's Tx originally requested the power change).  A Tx LinkReqAck refers to sending this command (meaning the peer agent's Tx originally requested the power change and this agent accepted it)unc_upi_req_slot2_from_m3.ackuncore interconnectUNC_UPI_REQ_SLOT2_FROM_M3.ACKevent=0x46,umask=801unc_upi_req_slot2_from_m3.vn0uncore interconnectUNC_UPI_REQ_SLOT2_FROM_M3.VN0event=0x46,umask=201unc_upi_req_slot2_from_m3.vn1uncore interconnectUNC_UPI_REQ_SLOT2_FROM_M3.VN1event=0x46,umask=401unc_upi_req_slot2_from_m3.vnauncore interconnectUNC_UPI_REQ_SLOT2_FROM_M3.VNAevent=0x46,umask=101unc_upi_rxl0p_power_cyclesuncore interconnectCycles the Rx of the Intel(R) UPI is in L0p power modeevent=0x2501Counts cycles when the receive side (Rx) of the Intel(R) Ultra Path Interconnect(UPI) is in L0p power mode. L0p is a mode where we disable 60% of the UPI lanes, decreasing our bandwidth in order to save powerunc_upi_rxl0_power_cyclesuncore interconnectCycles in L0. Receive sideevent=0x2401Number of UPI qfclk cycles spent in L0 power mode in the Link Layer.  L0 is the default mode which provides the highest performance with the most power.  Use edge detect to count the number of instances that the link entered L0.  Link power states are per link and per direction, so for example the Tx direction could be in one state while Rx was in another.  
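
The L0p cycle counts above read naturally as a fraction of unc_upi_clockticks. A small illustrative helper, assuming both counts were collected on the same UPI link over the same interval (the event names appear only as dictionary keys here):

    # Fraction of fixed-clock cycles the Rx side spent in L0p
    # (reduced-lane power mode), per the descriptions above.
    def l0p_rx_residency(counts: dict) -> float:
        return counts["unc_upi_rxl0p_power_cycles"] / counts["unc_upi_clockticks"]

    sample = {"unc_upi_rxl0p_power_cycles": 250_000, "unc_upi_clockticks": 1_000_000}
    print(f"Rx L0p residency: {l0p_rx_residency(sample):.0%}")  # 25%
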
The phy layer sometimes leaves L0 for training, which will not be captured by this eventunc_upi_rxl_basic_hdr_match.ncbuncore interconnectMatches on Receive path of a UPI Port; Non-Coherent Bypassevent=5,umask=0xe01Match Message Class - NCBunc_upi_rxl_basic_hdr_match.ncb_opcuncore interconnectMatches on Receive path of a UPI Port; Non-Coherent Bypassevent=5,umask=0x10e01Match Message Class - NCBunc_upi_rxl_basic_hdr_match.ncsuncore interconnectMatches on Receive path of a UPI Port; Non-Coherent Standardevent=5,umask=0xf01Match Message Class - NCSunc_upi_rxl_basic_hdr_match.ncs_opcuncore interconnectMatches on Receive path of a UPI Port; Non-Coherent Standardevent=5,umask=0x10f01Match Message Class - NCSunc_upi_rxl_basic_hdr_match.requncore interconnectMatches on Receive path of a UPI Port; Requestevent=5,umask=801REQ Message Classunc_upi_rxl_basic_hdr_match.req_opcuncore interconnectMatches on Receive path of a UPI Port; Request Opcodeevent=5,umask=0x10801Match REQ Opcodes - Specified in Umask[7:4]unc_upi_rxl_basic_hdr_match.rspcnfltuncore interconnectMatches on Receive path of a UPI Port; Response - Conflictevent=5,umask=0x1aa01unc_upi_rxl_basic_hdr_match.rspiuncore interconnectMatches on Receive path of a UPI Port; Response - Invalidevent=5,umask=0x12a01unc_upi_rxl_basic_hdr_match.rsp_datauncore interconnectMatches on Receive path of a UPI Port; Response - Dataevent=5,umask=0xc01Match Message Class -WBunc_upi_rxl_basic_hdr_match.rsp_data_opcuncore interconnectMatches on Receive path of a UPI Port; Response - Dataevent=5,umask=0x10c01Match Message Class -WBunc_upi_rxl_basic_hdr_match.rsp_nodatauncore interconnectMatches on Receive path of a UPI Port; Response - No Dataevent=5,umask=0xa01Match Message Class - RSPunc_upi_rxl_basic_hdr_match.rsp_nodata_opcuncore interconnectMatches on Receive path of a UPI Port; Response - No Dataevent=5,umask=0x10a01Match Message Class - RSPunc_upi_rxl_basic_hdr_match.snpuncore interconnectMatches on Receive path of a UPI Port; Snoopevent=5,umask=901SNP Message Classunc_upi_rxl_basic_hdr_match.snp_opcuncore interconnectMatches on Receive path of a UPI Port; Snoop Opcodeevent=5,umask=0x10901Match SNP Opcodes - Specified in Umask[7:4]unc_upi_rxl_basic_hdr_match.wbuncore interconnectMatches on Receive path of a UPI Port; Writebackevent=5,umask=0xd01Match Message Class -WBunc_upi_rxl_basic_hdr_match.wb_opcuncore interconnectMatches on Receive path of a UPI Port; Writebackevent=5,umask=0x10d01Match Message Class -WBunc_upi_rxl_bypassed.slot0uncore interconnectFLITs received which bypassed the Slot0 Receive Bufferevent=0x31,umask=101Counts incoming FLITs (FLow control unITs) which bypassed the slot0 RxQ buffer (Receive Queue) and passed directly to the Egress.  This is a latency optimization, and should generally be the common case.  If this value is less than the number of FLITs transferred, it implies that there was queueing getting onto the ring, and thus the transactions saw higher latencyunc_upi_rxl_bypassed.slot1uncore interconnectFLITs received which bypassed the Slot1 Receive Bufferevent=0x31,umask=201Counts incoming FLITs (FLow control unITs) which bypassed the slot1 RxQ buffer (Receive Queue) and passed directly across the BGF and into the Egress.  This is a latency optimization, and should generally be the common case.  
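
The bypass descriptions above suggest comparing bypassed FLITs against total FLITs received, since a bypass fraction well below 1.0 implies queueing onto the ring. A hypothetical sketch of that comparison (pairing unc_upi_rxl_bypassed.slotN with the corresponding per-slot FLIT count is my assumption, not something the listing states):

    # Per-slot Rx bypass fraction: bypassed FLITs / FLITs received.
    # Values well below 1.0 suggest FLITs queued in the RxQ instead of
    # bypassing, i.e. added latency getting onto the ring.
    def rx_bypass_fraction(bypassed: int, received: int) -> float:
        return bypassed / received if received else 0.0

    print(rx_bypass_fraction(bypassed=9_800, received=10_000))  # 0.98
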
If this value is less than the number of FLITs transferred, it implies that there was queueing getting onto the ring, and thus the transactions saw higher latencyunc_upi_rxl_bypassed.slot2uncore interconnectFLITs received which bypassed the Slot2 Receive Bufferevent=0x31,umask=401Counts incoming FLITs (FLow control unITs) which bypassed the slot2 RxQ buffer (Receive Queue) and passed directly to the Egress.  This is a latency optimization, and should generally be the common case.  If this value is less than the number of FLITs transferred, it implies that there was queueing getting onto the ring, and thus the transactions saw higher latencyunc_upi_rxl_credits_consumed_vn0uncore interconnectVN0 Credit Consumedevent=0x3901Counts the number of times that an RxQ VN0 credit was consumed (i.e. message uses a VN0 credit for the Rx Buffer).  This includes packets that went through the RxQ and those that were bypassedunc_upi_rxl_credits_consumed_vn1uncore interconnectVN1 Credit Consumedevent=0x3a01Counts the number of times that an RxQ VN1 credit was consumed (i.e. message uses a VN1 credit for the Rx Buffer).  This includes packets that went through the RxQ and those that were bypassedunc_upi_rxl_credits_consumed_vnauncore interconnectVNA Credit Consumedevent=0x3801Counts the number of times that an RxQ VNA credit was consumed (i.e. message uses a VNA credit for the Rx Buffer).  This includes packets that went through the RxQ and those that were bypassedunc_upi_rxl_flits.all_datauncore interconnectValid data FLITs received from any slotevent=3,umask=0xf01Counts valid data FLITs (80 bit FLow control unITs: 64bits of data) received from any of the 3 Intel(R) Ultra Path Interconnect (UPI) Receive Queue slots on this UPI unitunc_upi_rxl_flits.all_nulluncore interconnectNull FLITs received from any slotevent=3,umask=0x2701Counts null FLITs (80 bit FLow control unITs) received from any of the 3 Intel(R) Ultra Path Interconnect (UPI) Receive Queue slots on this UPI unitunc_upi_rxl_flits.datauncore interconnectValid Flits Received; Dataevent=3,umask=801Shows legal flit time (hides impact of L0p and L0c).; Count Data Flits (which consume all slots), but how much to count is based on Slot0-2 mask, so count can be 0-3 depending on which slots are enabled for counting.unc_upi_rxl_flits.idleuncore interconnectValid Flits Received; Idleevent=3,umask=0x4701Shows legal flit time (hides impact of L0p and L0c)unc_upi_rxl_flits.llcrduncore interconnectValid Flits Received; LLCRD Not Emptyevent=3,umask=0x1001Shows legal flit time (hides impact of L0p and L0c).; Enables counting of LLCRD (with non-zero payload). This only applies to slot 2 since LLCRD is only allowed in slot 2unc_upi_rxl_flits.llctrluncore interconnectValid Flits Received; LLCTRLevent=3,umask=0x4001Shows legal flit time (hides impact of L0p and L0c).; Equivalent to an idle packet.  Enables counting of slot 0 LLCTRL messagesunc_upi_rxl_flits.non_datauncore interconnectProtocol header and credit FLITs received from any slotevent=3,umask=0x9701Counts protocol header and credit FLITs  (80 bit FLow control unITs) received from any of the 3 UPI slots on this UPI unitunc_upi_rxl_flits.nulluncore interconnectThis event is deprecated. 
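
The three credit-consumed events above partition Rx buffer usage by virtual network, so summing them and taking shares shows how much traffic rides on the shared VNA pool versus the dedicated VN0/VN1 pools. An illustrative sketch:

    # Share of Rx buffer credits consumed per virtual network,
    # from the three credits-consumed counts above.
    def credit_shares(vn0: int, vn1: int, vna: int) -> dict:
        total = vn0 + vn1 + vna
        return {"VN0": vn0 / total, "VN1": vn1 / total, "VNA": vna / total}

    print(credit_shares(vn0=1_000, vn1=500, vna=8_500))
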
Refer to new event UNC_UPI_RxL_FLITS.ALL_NULLevent=3,umask=0x2011unc_upi_rxl_flits.prothdruncore interconnectValid Flits Received; Protocol Headerevent=3,umask=0x8001Shows legal flit time (hides impact of L0p and L0c).; Enables count of protocol headers in slot 0,1,2 (depending on slot uMask bits)unc_upi_rxl_flits.prot_hdruncore interconnectThis event is deprecated. Refer to new event UNC_UPI_RxL_FLITS.PROTHDRevent=3,umask=0x8011unc_upi_rxl_flits.slot0uncore interconnectValid Flits Received; Slot 0event=3,umask=101Shows legal flit time (hides impact of L0p and L0c).; Count Slot 0 - Other mask bits determine types of headers to countunc_upi_rxl_flits.slot1uncore interconnectValid Flits Received; Slot 1event=3,umask=201Shows legal flit time (hides impact of L0p and L0c).; Count Slot 1 - Other mask bits determine types of headers to countunc_upi_rxl_flits.slot2uncore interconnectValid Flits Received; Slot 2event=3,umask=401Shows legal flit time (hides impact of L0p and L0c).; Count Slot 2 - Other mask bits determine types of headers to countunc_upi_rxl_hdr_match.ncbuncore interconnectThis event is deprecated. Refer to new event UNC_UPI_RxL_BASIC_HDR_MATCH.NCBevent=5,umask=0xc11unc_upi_rxl_hdr_match.ncsuncore interconnectThis event is deprecated. Refer to new event UNC_UPI_RxL_BASIC_HDR_MATCH.NCSevent=5,umask=0xd11unc_upi_rxl_hdr_match.requncore interconnectThis event is deprecated. Refer to new event UNC_UPI_RxL_BASIC_HDR_MATCH.REQevent=5,umask=811unc_upi_rxl_hdr_match.rspuncore interconnectThis event is deprecated. Refer to new event UNC_UPI_RxL_BASIC_HDR_MATCH.RSP_DATAevent=5,umask=0xa11unc_upi_rxl_hdr_match.snpuncore interconnectThis event is deprecated. Refer to new event UNC_UPI_RxL_BASIC_HDR_MATCH.SNPevent=5,umask=911unc_upi_rxl_hdr_match.wbuncore interconnectThis event is deprecated. Refer to new event UNC_UPI_RxL_BASIC_HDR_MATCH.WBevent=5,umask=0xb11unc_upi_rxl_inserts.slot0uncore interconnectRxQ Flit Buffer Allocations; Slot 0event=0x30,umask=101Number of allocations into the UPI Rx Flit Buffer.  Generally, when data is transmitted across UPI, it will bypass the RxQ and pass directly to the ring interface.  If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency.  This event can be used in conjunction with the Flit Buffer Occupancy event in order to calculate the average flit buffer lifetimeunc_upi_rxl_inserts.slot1uncore interconnectRxQ Flit Buffer Allocations; Slot 1event=0x30,umask=201Number of allocations into the UPI Rx Flit Buffer.  Generally, when data is transmitted across UPI, it will bypass the RxQ and pass directly to the ring interface.  If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency.  This event can be used in conjunction with the Flit Buffer Occupancy event in order to calculate the average flit buffer lifetimeunc_upi_rxl_inserts.slot2uncore interconnectRxQ Flit Buffer Allocations; Slot 2event=0x30,umask=401Number of allocations into the UPI Rx Flit Buffer.  Generally, when data is transmitted across UPI, it will bypass the RxQ and pass directly to the ring interface.  If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency.  
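
The inserts descriptions above explicitly pair with the Flit Buffer Occupancy event to derive the average flit-buffer lifetime; that is Little's law, accumulated occupancy divided by allocations. A sketch under that reading:

    # Average RxQ flit-buffer lifetime in cycles (Little's law):
    # occupancy accumulated per cycle / number of allocations,
    # as the event descriptions above suggest.
    def avg_buffer_lifetime(occupancy_sum: int, inserts: int) -> float:
        return occupancy_sum / inserts if inserts else 0.0

    print(avg_buffer_lifetime(occupancy_sum=120_000, inserts=10_000))  # 12.0 cycles
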
This event can be used in conjunction with the Flit Buffer Occupancy event in order to calculate the average flit buffer lifetimeunc_upi_rxl_occupancy.slot0uncore interconnectRxQ Occupancy - All Packets; Slot 0event=0x32,umask=101Accumulates the number of elements in the UPI RxQ in each cycle.  Generally, when data is transmitted across UPI, it will bypass the RxQ and pass directly to the ring interface.  If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency.  This event can be used in conjunction with the Flit Buffer Not Empty event to calculate average occupancy, or with the Flit Buffer Allocations event to track average lifetimeunc_upi_rxl_occupancy.slot1uncore interconnectRxQ Occupancy - All Packets; Slot 1event=0x32,umask=201Accumulates the number of elements in the UPI RxQ in each cycle.  Generally, when data is transmitted across UPI, it will bypass the RxQ and pass directly to the ring interface.  If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency.  This event can be used in conjunction with the Flit Buffer Not Empty event to calculate average occupancy, or with the Flit Buffer Allocations event to track average lifetimeunc_upi_rxl_occupancy.slot2uncore interconnectRxQ Occupancy - All Packets; Slot 2event=0x32,umask=401Accumulates the number of elements in the UPI RxQ in each cycle.  Generally, when data is transmitted across UPI, it will bypass the RxQ and pass directly to the ring interface.  If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency.  This event can be used in conjunction with the Flit Buffer Not Empty event to calculate average occupancy, or with the Flit Buffer Allocations event to track average lifetimeunc_upi_rxl_slot_bypass.s0_rxq1uncore interconnectUNC_UPI_RxL_SLOT_BYPASS.S0_RXQ1event=0x33,umask=101unc_upi_rxl_slot_bypass.s0_rxq2uncore interconnectUNC_UPI_RxL_SLOT_BYPASS.S0_RXQ2event=0x33,umask=201unc_upi_rxl_slot_bypass.s1_rxq0uncore interconnectUNC_UPI_RxL_SLOT_BYPASS.S1_RXQ0event=0x33,umask=401unc_upi_rxl_slot_bypass.s1_rxq2uncore interconnectUNC_UPI_RxL_SLOT_BYPASS.S1_RXQ2event=0x33,umask=801unc_upi_rxl_slot_bypass.s2_rxq0uncore interconnectUNC_UPI_RxL_SLOT_BYPASS.S2_RXQ0event=0x33,umask=0x1001unc_upi_rxl_slot_bypass.s2_rxq1uncore interconnectUNC_UPI_RxL_SLOT_BYPASS.S2_RXQ1event=0x33,umask=0x2001unc_upi_txl0p_clk_active.cfg_ctluncore interconnectUNC_UPI_TxL0P_CLK_ACTIVE.CFG_CTLevent=0x2a,umask=101unc_upi_txl0p_clk_active.dfxuncore interconnectUNC_UPI_TxL0P_CLK_ACTIVE.DFXevent=0x2a,umask=0x4001unc_upi_txl0p_clk_active.retryuncore interconnectUNC_UPI_TxL0P_CLK_ACTIVE.RETRYevent=0x2a,umask=0x2001unc_upi_txl0p_clk_active.rxquncore interconnectUNC_UPI_TxL0P_CLK_ACTIVE.RXQevent=0x2a,umask=201unc_upi_txl0p_clk_active.rxq_bypassuncore interconnectUNC_UPI_TxL0P_CLK_ACTIVE.RXQ_BYPASSevent=0x2a,umask=401unc_upi_txl0p_clk_active.rxq_creduncore interconnectUNC_UPI_TxL0P_CLK_ACTIVE.RXQ_CREDevent=0x2a,umask=801unc_upi_txl0p_clk_active.spareuncore interconnectUNC_UPI_TxL0P_CLK_ACTIVE.SPAREevent=0x2a,umask=0x8001unc_upi_txl0p_clk_active.txquncore interconnectUNC_UPI_TxL0P_CLK_ACTIVE.TXQevent=0x2a,umask=0x1001unc_upi_txl0p_power_cyclesuncore interconnectCycles in which the Tx of the Intel(R) Ultra Path Interconnect (UPI) is in L0p power modeevent=0x2701Counts cycles when the transmit side (Tx) of the Intel(R) Ultra Path Interconnect(UPI) is 
in L0p power mode. L0p is a mode where we disable 60% of the UPI lanes, decreasing our bandwidth in order to save powerunc_upi_txl0p_power_cycles_ll_enteruncore interconnectUNC_UPI_TxL0P_POWER_CYCLES_LL_ENTERevent=0x2801unc_upi_txl0p_power_cycles_m3_exituncore interconnectUNC_UPI_TxL0P_POWER_CYCLES_M3_EXITevent=0x2901unc_upi_txl0_power_cyclesuncore interconnectCycles in L0. Transmit sideevent=0x2601Number of UPI qfclk cycles spent in L0 power mode in the Link Layer.  L0 is the default mode which provides the highest performance with the most power.  Use edge detect to count the number of instances that the link entered L0.  Link power states are per link and per direction, so for example the Tx direction could be in one state while Rx was in another.  The phy layer  sometimes leaves L0 for training, which will not be captured by this eventunc_upi_txl_basic_hdr_match.ncbuncore interconnectMatches on Transmit path of a UPI Port; Non-Coherent Bypassevent=4,umask=0xe01Match Message Class - NCBunc_upi_txl_basic_hdr_match.ncb_opcuncore interconnectMatches on Transmit path of a UPI Port; Non-Coherent Bypassevent=4,umask=0x10e01Match Message Class - NCBunc_upi_txl_basic_hdr_match.ncsuncore interconnectMatches on Transmit path of a UPI Port; Non-Coherent Standardevent=4,umask=0xf01Match Message Class - NCSunc_upi_txl_basic_hdr_match.ncs_opcuncore interconnectMatches on Transmit path of a UPI Port; Non-Coherent Standardevent=4,umask=0x10f01Match Message Class - NCSunc_upi_txl_basic_hdr_match.requncore interconnectMatches on Transmit path of a UPI Port; Requestevent=4,umask=801REQ Message Classunc_upi_txl_basic_hdr_match.req_opcuncore interconnectMatches on Transmit path of a UPI Port; Request Opcodeevent=4,umask=0x10801Match REQ Opcodes - Specified in Umask[7:4]unc_upi_txl_basic_hdr_match.rspcnfltuncore interconnectMatches on Transmit path of a UPI Port; Response - Conflictevent=4,umask=0x1aa01unc_upi_txl_basic_hdr_match.rspiuncore interconnectMatches on Transmit path of a UPI Port; Response - Invalidevent=4,umask=0x12a01unc_upi_txl_basic_hdr_match.rsp_datauncore interconnectMatches on Transmit path of a UPI Port; Response - Dataevent=4,umask=0xc01Match Message Class -WBunc_upi_txl_basic_hdr_match.rsp_data_opcuncore interconnectMatches on Transmit path of a UPI Port; Response - Dataevent=4,umask=0x10c01Match Message Class -WBunc_upi_txl_basic_hdr_match.rsp_nodatauncore interconnectMatches on Transmit path of a UPI Port; Response - No Dataevent=4,umask=0xa01Match Message Class - RSPunc_upi_txl_basic_hdr_match.rsp_nodata_opcuncore interconnectMatches on Transmit path of a UPI Port; Response - No Dataevent=4,umask=0x10a01Match Message Class - RSPunc_upi_txl_basic_hdr_match.snpuncore interconnectMatches on Transmit path of a UPI Port; Snoopevent=4,umask=901SNP Message Classunc_upi_txl_basic_hdr_match.snp_opcuncore interconnectMatches on Transmit path of a UPI Port; Snoop Opcodeevent=4,umask=0x10901Match SNP Opcodes - Specified in Umask[7:4]unc_upi_txl_basic_hdr_match.wbuncore interconnectMatches on Transmit path of a UPI Port; Writebackevent=4,umask=0xd01Match Message Class -WBunc_upi_txl_basic_hdr_match.wb_opcuncore interconnectMatches on Transmit path of a UPI Port; Writebackevent=4,umask=0x10d01Match Message Class -WBunc_upi_txl_bypasseduncore interconnectFLITs that bypassed the TxL Bufferevent=0x4101Counts incoming FLITs (FLow control unITs) which bypassed the TxL(transmit) FLIT buffer and pass directly out the UPI Link. 
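
Events like the header matches above are programmed through event=/umask= pairs, which on Linux can be requested from the uncore PMU via perf stat. A hedged sketch that only builds the command line (the PMU instance name uncore_upi_0 is an assumption about the target system; check /sys/bus/event_source/devices/ on the machine, and note that perf typically needs elevated privileges for uncore counting):

    import subprocess

    # Build a perf stat invocation for a raw uncore event, e.g. the
    # Tx-path NCB header match above (event=0x4, umask=0xe).
    def perf_stat_cmd(pmu: str, event: int, umask: int) -> list:
        return ["perf", "stat", "-a", "-e",
                f"{pmu}/event={event:#x},umask={umask:#x}/", "sleep", "1"]

    cmd = perf_stat_cmd("uncore_upi_0", event=0x4, umask=0xe)
    print(" ".join(cmd))
    # subprocess.run(cmd, check=True)  # uncomment on a machine with this PMU
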
Generally, when data is transmitted across the Intel(R) Ultra Path Interconnect (UPI), it will bypass the TxQ and pass directly to the link.  However, the TxQ will be used in L0p (Low Power) mode and (Link Layer Retry) LLR  mode, increasing latency to transfer out to the linkunc_upi_txl_flits.all_datauncore interconnectValid data FLITs transmitted via any slotevent=2,umask=0xf01Counts valid data FLITs (80 bit FLow control unITs: 64bits of data) transmitted (TxL) via any of the 3 Intel(R) Ultra Path Interconnect (UPI) slots on this UPI unitunc_upi_txl_flits.all_nulluncore interconnectNull FLITs transmitted from any slotevent=2,umask=0x2701Counts null FLITs (80 bit FLow control unITs) transmitted via any of the 3 Intel(R) Ultra Path Interconnect (UPI) slots on this UPI unitunc_upi_txl_flits.datauncore interconnectValid Flits Sent; Dataevent=2,umask=801Shows legal flit time (hides impact of L0p and L0c).; Count Data Flits (which consume all slots), but how much to count is based on Slot0-2 mask, so count can be 0-3 depending on which slots are enabled for counting.unc_upi_txl_flits.idleuncore interconnectIdle FLITs transmittedevent=2,umask=0x4701Counts when the Intel Ultra Path Interconnect(UPI) transmits an idle FLIT (80 bit FLow control unITs).  Every UPI cycle must be sending either data FLITs, protocol/credit FLITs or idle FLITsunc_upi_txl_flits.llcrduncore interconnectValid Flits Sent; LLCRD Not Emptyevent=2,umask=0x1001Shows legal flit time (hides impact of L0p and L0c).; Enables counting of LLCRD (with non-zero payload). This only applies to slot 2 since LLCRD is only allowed in slot 2unc_upi_txl_flits.llctrluncore interconnectValid Flits Sent; LLCTRLevent=2,umask=0x4001Shows legal flit time (hides impact of L0p and L0c).; Equivalent to an idle packet.  Enables counting of slot 0 LLCTRL messagesunc_upi_txl_flits.non_datauncore interconnectProtocol header and credit FLITs transmitted across any slotevent=2,umask=0x9701Counts protocol header and credit FLITs (80 bit FLow control unITs) transmitted across any of the 3 UPI (Ultra Path Interconnect) slots on this UPI unitunc_upi_txl_flits.nulluncore interconnectThis event is deprecated. Refer to new event UNC_UPI_TxL_FLITS.ALL_NULLevent=2,umask=0x2011unc_upi_txl_flits.prothdruncore interconnectValid Flits Sent; Protocol Headerevent=2,umask=0x8001Shows legal flit time (hides impact of L0p and L0c).; Enables count of protocol headers in slot 0,1,2 (depending on slot uMask bits)unc_upi_txl_flits.prot_hdruncore interconnectThis event is deprecated. Refer to new event UNC_UPI_TxL_FLITS.PROTHDRevent=2,umask=0x8011unc_upi_txl_flits.slot0uncore interconnectValid Flits Sent; Slot 0event=2,umask=101Shows legal flit time (hides impact of L0p and L0c).; Count Slot 0 - Other mask bits determine types of headers to countunc_upi_txl_flits.slot1uncore interconnectValid Flits Sent; Slot 1event=2,umask=201Shows legal flit time (hides impact of L0p and L0c).; Count Slot 1 - Other mask bits determine types of headers to countunc_upi_txl_flits.slot2uncore interconnectValid Flits Sent; Slot 2event=2,umask=401Shows legal flit time (hides impact of L0p and L0c).; Count Slot 2 - Other mask bits determine types of headers to countunc_upi_txl_hdr_match.data_hdruncore interconnectThis event is deprecatedevent=411unc_upi_txl_hdr_match.dual_slot_hdruncore interconnectThis event is deprecatedevent=411unc_upi_txl_hdr_match.locuncore interconnectThis event is deprecatedevent=411unc_upi_txl_hdr_match.ncbuncore interconnectThis event is deprecated. 
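
Since every UPI cycle carries data, protocol/credit, or idle FLITs (per the idle description above), one rough utilization figure is the non-idle share of transmitted FLITs. A speculative sketch built only from the three Tx counts above; this is a back-of-envelope reading of the descriptions, not an official metric:

    # Rough Tx link utilization: non-idle FLITs / all FLITs, using
    # unc_upi_txl_flits.all_data, .non_data and .idle.
    def tx_utilization(all_data: int, non_data: int, idle: int) -> float:
        total = all_data + non_data + idle
        return (all_data + non_data) / total if total else 0.0

    print(f"{tx_utilization(all_data=6_000, non_data=1_000, idle=3_000):.0%}")  # 70%
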
Refer to new event UNC_UPI_TxL_BASIC_HDR_MATCH.NCBevent=4,umask=0xe11unc_upi_txl_hdr_match.ncsuncore interconnectThis event is deprecated. Refer to new event UNC_UPI_TxL_BASIC_HDR_MATCH.NCSevent=4,umask=0xf11unc_upi_txl_hdr_match.non_data_hdruncore interconnectThis event is deprecatedevent=411unc_upi_txl_hdr_match.remuncore interconnectThis event is deprecatedevent=411unc_upi_txl_hdr_match.requncore interconnectThis event is deprecated. Refer to new event UNC_UPI_TxL_BASIC_HDR_MATCH.REQevent=4,umask=811unc_upi_txl_hdr_match.rsp_datauncore interconnectThis event is deprecated. Refer to new event UNC_UPI_TxL_BASIC_HDR_MATCH.RSP_DATAevent=4,umask=0xc11unc_upi_txl_hdr_match.rsp_nodatauncore interconnectThis event is deprecated. Refer to new event UNC_UPI_TxL_BASIC_HDR_MATCH.RSP_NODATAevent=4,umask=0xa11unc_upi_txl_hdr_match.sgl_slot_hdruncore interconnectThis event is deprecatedevent=411unc_upi_txl_hdr_match.snpuncore interconnectThis event is deprecated. Refer to new event UNC_UPI_TxL_BASIC_HDR_MATCH.SNPevent=4,umask=911unc_upi_txl_hdr_match.wbuncore interconnectThis event is deprecated. Refer to new event UNC_UPI_TxL_BASIC_HDR_MATCH.WBevent=4,umask=0xc11unc_upi_txl_insertsuncore interconnectTx Flit Buffer Allocationsevent=0x4001Number of allocations into the UPI Tx Flit Buffer.  Generally, when data is transmitted across UPI, it will bypass the TxQ and pass directly to the link.  However, the TxQ will be used with L0p and when LLR occurs, increasing latency to transfer out to the link.  This event can be used in conjunction with the Flit Buffer Occupancy event in order to calculate the average flit buffer lifetimeunc_upi_txl_occupancyuncore interconnectTx Flit Buffer Occupancyevent=0x4201Accumulates the number of flits in the TxQ.  Generally, when data is transmitted across UPI, it will bypass the TxQ and pass directly to the link.  However, the TxQ will be used with L0p and when LLR occurs, increasing latency to transfer out to the link. 
This can be used with the cycles not empty event to track average occupancy, or the allocations event to track average lifetime in the TxQunc_upi_vna_credit_return_blocked_vn01uncore interconnectUNC_UPI_VNA_CREDIT_RETURN_BLOCKED_VN01event=0x4501unc_upi_vna_credit_return_occupancyuncore interconnectVNA Credits Pending Return - Occupancyevent=0x4401Number of VNA credits in the Rx side that are waiting to be returned back across the linkunc_u_event_msg.doorbell_rcvduncore interconnectMessage Receivedevent=0x42,umask=801Virtual Logical Wire (legacy) messages were received from Uncoreunc_u_event_msg.int_priouncore interconnectMessage Receivedevent=0x42,umask=0x1001Virtual Logical Wire (legacy) messages were received from Uncoreunc_u_event_msg.ipi_rcvduncore interconnectMessage Received; IPIevent=0x42,umask=401Virtual Logical Wire (legacy) messages were received from Uncore.; Inter Processor Interruptsunc_u_event_msg.msi_rcvduncore interconnectMessage Received; MSIevent=0x42,umask=201Virtual Logical Wire (legacy) messages were received from Uncore.; Message Signaled Interrupts - interrupts sent by devices (including PCIe via IOxAPIC) (Socket Mode only)unc_u_event_msg.vlw_rcvduncore interconnectMessage Received; VLWevent=0x42,umask=101Virtual Logical Wire (legacy) messages were received from Uncoreunc_u_lock_cyclesuncore interconnectIDI Lock/SplitLock Cyclesevent=0x4401Number of times an IDI Lock/SplitLock sequence was startedunc_u_phold_cycles.assert_to_ackuncore interconnectCycles PHOLD Assert to Ack; Assert to ACKevent=0x45,umask=101PHOLD cyclesunc_u_racu_drng.pftch_buf_emptyuncore interconnectUNC_U_RACU_DRNG.PFTCH_BUF_EMPTYevent=0x4c,umask=401unc_u_racu_drng.rdranduncore interconnectUNC_U_RACU_DRNG.RDRANDevent=0x4c,umask=101unc_u_racu_drng.rdseeduncore interconnectUNC_U_RACU_DRNG.RDSEEDevent=0x4c,umask=201upi_data_bandwidth_txuncore interconnectUPI interconnect send bandwidth for payload. Derived from unc_upi_txl_flits.all_dataevent=2,umask=0xf017.11E-06BytesCounts valid data FLITs (80 bit FLow control unITs: 64bits of data) transmitted (TxL) via any of the 3 Intel(R) Ultra Path Interconnect (UPI) slots on this UPI unituncore_iiollc_misses.pcie_readuncore ioPCI Express bandwidth reading at IIO. Derived from unc_iio_data_req_of_cpu.mem_read.part0event=0x83,ch_mask=1,fc_mask=7,umask=4,ch_mask=0x1f014BytesCounts every read request for 4 bytes of data made by IIO Part0 to a unit on the main die (generally memory). In the general case, Part0 refers to a standard PCIe card of any size (x16,x8,x4) that is plugged directly into one of the PCIe slots. Part0 could also refer to any device plugged into the first slot of a PCIe riser card or to a device attached to the IIO unit which starts its use of the bus using lane 0 of the 16 lanes supported by the busllc_misses.pcie_writeuncore ioPCI Express bandwidth writing at IIO. Derived from unc_iio_data_req_of_cpu.mem_write.part0event=0x83,ch_mask=1,fc_mask=7,umask=1,ch_mask=0x1f014BytesCounts every write request of 4 bytes of data made by IIO Part0 to a unit on the main die (generally memory). In the general case, Part0 refers to a standard PCIe card of any size (x16,x8,x4) that is plugged directly into one of the PCIe slots. 
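
The two derived metrics above carry explicit scale factors: upi_data_bandwidth_tx scales the flit count by the listed 7.11E-06 (in its Bytes unit), and each llc_misses.pcie_read count represents 4 bytes. Dividing the scaled count by the sampling interval turns either into bandwidth; a sketch assuming a measured interval in seconds:

    # Apply the scale factors listed above and divide by the sampling
    # interval. 7.11E-06 is the listed scale for upi_data_bandwidth_tx;
    # each llc_misses.pcie_read count is 4 bytes.
    def upi_tx_bandwidth(flit_count: int, interval_s: float) -> float:
        return flit_count * 7.11e-06 / interval_s  # in the listing's scaled unit

    def pcie_read_bandwidth(read_count: int, interval_s: float) -> float:
        return read_count * 4 / interval_s  # bytes per second

    print(upi_tx_bandwidth(flit_count=9_000_000, interval_s=1.0))
    print(pcie_read_bandwidth(read_count=250_000, interval_s=1.0))
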
Part0 could also refer to any device plugged into the first slot of a PCIe riser card or to a device attached to the IIO unit which starts its use of the bus using lane 0 of the 16 lanes supported by the busunc_iio_clockticksuncore ioClockticks of the IIO Traffic Controllerevent=101Counts clockticks of the 1GHz traffic controller clock in the IIO unitunc_iio_comp_buf_inserts.cmpd.all_partsuncore ioPCIe Completion Buffer Inserts of completions with data: Part 0-3event=0xc2,ch_mask=0xf,fc_mask=4,umask=301unc_iio_comp_buf_inserts.cmpd.part0uncore ioPCIe Completion Buffer Inserts of completions with data: Part 0event=0xc2,ch_mask=1,fc_mask=4,umask=301unc_iio_comp_buf_inserts.cmpd.part1uncore ioPCIe Completion Buffer Inserts of completions with data: Part 1event=0xc2,ch_mask=2,fc_mask=4,umask=301unc_iio_comp_buf_inserts.cmpd.part2uncore ioPCIe Completion Buffer Inserts of completions with data: Part 2event=0xc2,ch_mask=4,fc_mask=4,umask=301unc_iio_comp_buf_inserts.cmpd.part3uncore ioPCIe Completion Buffer Inserts of completions with data: Part 3event=0xc2,ch_mask=8,fc_mask=4,umask=301unc_iio_comp_buf_inserts.port0uncore ioPCIe Completion Buffer Inserts; Port 0event=0xc2,ch_mask=1,fc_mask=7,umask=401unc_iio_comp_buf_inserts.port1uncore ioPCIe Completion Buffer Inserts; Port 1event=0xc2,ch_mask=2,fc_mask=7,umask=401unc_iio_comp_buf_inserts.port2uncore ioPCIe Completion Buffer Inserts; Port 2event=0xc2,ch_mask=4,fc_mask=7,umask=401unc_iio_comp_buf_inserts.port3uncore ioPCIe Completion Buffer Inserts; Port 3event=0xc2,ch_mask=8,fc_mask=7,umask=401unc_iio_comp_buf_occupancy.cmpd.all_partsuncore ioPCIe Completion Buffer occupancy of completions with data: Part 0-3event=0xd5,fc_mask=4,umask=0xf01unc_iio_comp_buf_occupancy.cmpd.part0uncore ioPCIe Completion Buffer occupancy of completions with data: Part 0event=0xd5,fc_mask=4,umask=101unc_iio_comp_buf_occupancy.cmpd.part1uncore ioPCIe Completion Buffer occupancy of completions with data: Part 1event=0xd5,fc_mask=4,umask=201unc_iio_comp_buf_occupancy.cmpd.part2uncore ioPCIe Completion Buffer occupancy of completions with data: Part 2event=0xd5,fc_mask=4,umask=401unc_iio_comp_buf_occupancy.cmpd.part3uncore ioPCIe Completion Buffer occupancy of completions with data: Part 3event=0xd5,fc_mask=4,umask=801unc_iio_data_req_by_cpu.cfg_read.part0uncore ioData requested by the CPU; Core reading from Card's PCICFG spaceevent=0xc0,ch_mask=1,fc_mask=7,umask=0x4001Number of double word (4 bytes) requests initiated by the main die to the attached device.; x16 card plugged in to stack, Or x8 card plugged in to Lane 0/1, Or x4 card is plugged in to slot 0unc_iio_data_req_by_cpu.cfg_read.part1uncore ioData requested by the CPU; Core reading from Card's PCICFG spaceevent=0xc0,ch_mask=2,fc_mask=7,umask=0x4001Number of double word (4 bytes) requests initiated by the main die to the attached device.; x4 card is plugged in to slot 1unc_iio_data_req_by_cpu.cfg_read.part2uncore ioData requested by the CPU; Core reading from Card's PCICFG spaceevent=0xc0,ch_mask=4,fc_mask=7,umask=0x4001Number of double word (4 bytes) requests initiated by the main die to the attached device.; x8 card plugged in to Lane 2/3, Or x4 card is plugged in to slot 2unc_iio_data_req_by_cpu.cfg_read.part3uncore ioData requested by the CPU; Core reading from Card's PCICFG spaceevent=0xc0,ch_mask=8,fc_mask=7,umask=0x4001Number of double word (4 bytes) requests initiated by the main die to the attached device.; x4 card is plugged in to slot 3unc_iio_data_req_by_cpu.cfg_read.vtd0uncore ioData requested by 
the CPU; Core reading from Card's PCICFG spaceevent=0xc0,ch_mask=0x10,fc_mask=7,umask=0x4001Number of double word (4 bytes) requests initiated by the main die to the attached device.; VTd - Type 0unc_iio_data_req_by_cpu.cfg_read.vtd1uncore ioData requested by the CPU; Core reading from Card's PCICFG spaceevent=0xc0,ch_mask=0x20,fc_mask=7,umask=0x4001Number of double word (4 bytes) requests initiated by the main die to the attached device.; VTd - Type 1unc_iio_data_req_by_cpu.cfg_write.part0uncore ioData requested by the CPU; Core writing to Card's PCICFG spaceevent=0xc0,ch_mask=1,fc_mask=7,umask=0x1001Number of double word (4 bytes) requests initiated by the main die to the attached device.; x16 card plugged in to stack, Or x8 card plugged in to Lane 0/1, Or x4 card is plugged in to slot 0unc_iio_data_req_by_cpu.cfg_write.part1uncore ioData requested by the CPU; Core writing to Card's PCICFG spaceevent=0xc0,ch_mask=2,fc_mask=7,umask=0x1001Number of double word (4 bytes) requests initiated by the main die to the attached device.; x4 card is plugged in to slot 1unc_iio_data_req_by_cpu.cfg_write.part2uncore ioData requested by the CPU; Core writing to Card's PCICFG spaceevent=0xc0,ch_mask=4,fc_mask=7,umask=0x1001Number of double word (4 bytes) requests initiated by the main die to the attached device.; x8 card plugged in to Lane 2/3, Or x4 card is plugged in to slot 2unc_iio_data_req_by_cpu.cfg_write.part3uncore ioData requested by the CPU; Core writing to Card's PCICFG spaceevent=0xc0,ch_mask=8,fc_mask=7,umask=0x1001Number of double word (4 bytes) requests initiated by the main die to the attached device.; x4 card is plugged in to slot 3unc_iio_data_req_by_cpu.cfg_write.vtd0uncore ioData requested by the CPU; Core writing to Card's PCICFG spaceevent=0xc0,ch_mask=0x10,fc_mask=7,umask=0x1001Number of double word (4 bytes) requests initiated by the main die to the attached device.; VTd - Type 0unc_iio_data_req_by_cpu.cfg_write.vtd1uncore ioData requested by the CPU; Core writing to Card's PCICFG spaceevent=0xc0,ch_mask=0x20,fc_mask=7,umask=0x1001Number of double word (4 bytes) requests initiated by the main die to the attached device.; VTd - Type 1unc_iio_data_req_by_cpu.io_read.part0uncore ioData requested by the CPU; Core reading from Card's IO spaceevent=0xc0,ch_mask=1,fc_mask=7,umask=0x8001Number of double word (4 bytes) requests initiated by the main die to the attached device.; x16 card plugged in to stack, Or x8 card plugged in to Lane 0/1, Or x4 card is plugged in to slot 0unc_iio_data_req_by_cpu.io_read.part1uncore ioData requested by the CPU; Core reading from Card's IO spaceevent=0xc0,ch_mask=2,fc_mask=7,umask=0x8001Number of double word (4 bytes) requests initiated by the main die to the attached device.; x4 card is plugged in to slot 1unc_iio_data_req_by_cpu.io_read.part2uncore ioData requested by the CPU; Core reading from Card's IO spaceevent=0xc0,ch_mask=4,fc_mask=7,umask=0x8001Number of double word (4 bytes) requests initiated by the main die to the attached device.; x8 card plugged in to Lane 2/3, Or x4 card is plugged in to slot 2unc_iio_data_req_by_cpu.io_read.part3uncore ioData requested by the CPU; Core reading from Card's IO spaceevent=0xc0,ch_mask=8,fc_mask=7,umask=0x8001Number of double word (4 bytes) requests initiated by the main die to the attached device.; x4 card is plugged in to slot 3unc_iio_data_req_by_cpu.io_read.vtd0uncore ioData requested by the CPU; Core reading from Card's IO spaceevent=0xc0,ch_mask=0x10,fc_mask=7,umask=0x8001Number of double word (4 bytes) 
requests initiated by the main die to the attached device.; VTd - Type 0unc_iio_data_req_by_cpu.io_read.vtd1uncore ioData requested by the CPU; Core reading from Card's IO spaceevent=0xc0,ch_mask=0x20,fc_mask=7,umask=0x8001Number of double word (4 bytes) requests initiated by the main die to the attached device.; VTd - Type 1unc_iio_data_req_by_cpu.io_write.part0uncore ioData requested by the CPU; Core writing to Card's IO spaceevent=0xc0,ch_mask=1,fc_mask=7,umask=0x2001Number of double word (4 bytes) requests initiated by the main die to the attached device.; x16 card plugged in to stack, Or x8 card plugged in to Lane 0/1, Or x4 card is plugged in to slot 0unc_iio_data_req_by_cpu.io_write.part1uncore ioData requested by the CPU; Core writing to Card's IO spaceevent=0xc0,ch_mask=2,fc_mask=7,umask=0x2001Number of double word (4 bytes) requests initiated by the main die to the attached device.; x4 card is plugged in to slot 1unc_iio_data_req_by_cpu.io_write.part2uncore ioData requested by the CPU; Core writing to Card's IO spaceevent=0xc0,ch_mask=4,fc_mask=7,umask=0x2001Number of double word (4 bytes) requests initiated by the main die to the attached device.; x8 card plugged in to Lane 2/3, Or x4 card is plugged in to slot 2unc_iio_data_req_by_cpu.io_write.part3uncore ioData requested by the CPU; Core writing to Card's IO spaceevent=0xc0,ch_mask=8,fc_mask=7,umask=0x2001Number of double word (4 bytes) requests initiated by the main die to the attached device.; x4 card is plugged in to slot 3unc_iio_data_req_by_cpu.io_write.vtd0uncore ioData requested by the CPU; Core writing to Card's IO spaceevent=0xc0,ch_mask=0x10,fc_mask=7,umask=0x2001Number of double word (4 bytes) requests initiated by the main die to the attached device.; VTd - Type 0unc_iio_data_req_by_cpu.io_write.vtd1uncore ioData requested by the CPU; Core writing to Card's IO spaceevent=0xc0,ch_mask=0x20,fc_mask=7,umask=0x2001Number of double word (4 bytes) requests initiated by the main die to the attached device.; VTd - Type 1unc_iio_data_req_by_cpu.mem_read.part0uncore ioRead request for 4 bytes made by the CPU to IIO Part0event=0xc0,ch_mask=1,fc_mask=7,umask=401Counts every read request for 4 bytes of data made by a unit on the main die (generally a core) or by another IIO unit to the MMIO space of a card on IIO Part0. In the general case, Part0 refers to a standard PCIe card of any size (x16,x8,x4) that is plugged directly into one of the PCIe slots. Part0 could also refer to any device plugged into the first slot of a PCIe riser card or to a device attached to the IIO unit which starts its use of the bus using lane 0 of the 16 lanes supported by the busunc_iio_data_req_by_cpu.mem_read.part1uncore ioRead request for 4 bytes made by the CPU to IIO Part1event=0xc0,ch_mask=2,fc_mask=7,umask=401Counts every read request for 4 bytes of data made by a unit on the main die (generally a core) or by another IIO unit to the MMIO space of a card on IIO Part1. In the general case, Part1 refers to a x4 PCIe card plugged into the second slot of a PCIe riser card, but it could refer to any x4 device attached to the IIO unit using lanes starting at lane 4 of the 16 lanes supported by the busunc_iio_data_req_by_cpu.mem_read.part2uncore ioRead request for 4 bytes made by the CPU to IIO Part2event=0xc0,ch_mask=4,fc_mask=7,umask=401Counts every read request for 4 bytes of data made by a unit on the main die (generally a core) or by another IIO unit to the MMIO space of a card on IIO Part2. 
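
Each mem_read part event above covers one slice of the 16-lane bus, so total inbound read traffic for the whole IIO stack is the sum over Part0-3, at the listed 4 bytes per count. An illustrative sketch:

    # Total PCIe read bandwidth across one IIO stack: sum the per-part
    # counts (4 bytes each, per the descriptions above) over the interval.
    def stack_read_bandwidth(part_counts: list, interval_s: float) -> float:
        return sum(part_counts) * 4 / interval_s  # bytes per second

    print(stack_read_bandwidth([120_000, 80_000, 90_000, 60_000], 1.0))
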
In the general case, Part2 refers to a x4 or x8 PCIe card plugged into the third slot of a PCIe riser card, but it could refer to any x4 or x8 device attached to the IIO unit and using lanes starting at lane 8 of the 16 lanes supported by the busunc_iio_data_req_by_cpu.mem_read.part3uncore ioRead request for 4 bytes made by the CPU to IIO Part3event=0xc0,ch_mask=8,fc_mask=7,umask=401Counts every read request for 4 bytes of data made by a unit on the main die (generally a core) or by another IIO unit to the MMIO space of a card on IIO Part3. In the general case, Part3 refers to a x4 PCIe card plugged into the fourth slot of a PCIe riser card, but it could refer to any device attached to the IIO unit using the lanes starting at lane 12 of the 16 lanes supported by the busunc_iio_data_req_by_cpu.mem_read.vtd0uncore ioData requested by the CPU; Core reading from Card's MMIO spaceevent=0xc0,ch_mask=0x10,fc_mask=7,umask=401Number of double word (4 bytes) requests initiated by the main die to the attached device.; VTd - Type 0unc_iio_data_req_by_cpu.mem_read.vtd1uncore ioData requested by the CPU; Core reading from Card's MMIO spaceevent=0xc0,ch_mask=0x20,fc_mask=7,umask=401Number of double word (4 bytes) requests initiated by the main die to the attached device.; VTd - Type 1unc_iio_data_req_by_cpu.mem_write.part0uncore ioWrite request of 4 bytes made to IIO Part0 by the CPUevent=0xc0,ch_mask=1,fc_mask=7,umask=101Counts every write request of 4 bytes of data made to the MMIO space of a card on IIO Part0 by a unit on the main die (generally a core) or by another IIO unit. In the general case, Part0 refers to a standard PCIe card of any size (x16,x8,x4) that is plugged directly into one of the PCIe slots. Part0 could also refer to any device plugged into the first slot of a PCIe riser card or to a device attached to the IIO unit which starts its use of the bus using lane 0 of the 16 lanes supported by the busunc_iio_data_req_by_cpu.mem_write.part1uncore ioWrite request of 4 bytes made to IIO Part1 by the CPUevent=0xc0,ch_mask=2,fc_mask=7,umask=101Counts every write request of 4 bytes of data made to the MMIO space of a card on IIO Part1 by a unit on the main die (generally a core) or by another IIO unit. In the general case, Part1 refers to a x4 PCIe card plugged into the second slot of a PCIe riser card, but it could refer to any x4 device attached to the IIO unit using lanes starting at lane 4 of the 16 lanes supported by the busunc_iio_data_req_by_cpu.mem_write.part2uncore ioWrite request of 4 bytes made to IIO Part2 by the CPUevent=0xc0,ch_mask=4,fc_mask=7,umask=101Counts every write request of 4 bytes of data made to the MMIO space of a card on IIO Part2 by a unit on the main die (generally a core) or by another IIO unit. In the general case, Part2 refers to a x4 or x8 PCIe card plugged into the third slot of a PCIe riser card, but it could refer to any x4 or x8 device attached to the IIO unit and using lanes starting at lane 8 of the 16 lanes supported by the busunc_iio_data_req_by_cpu.mem_write.part3uncore ioWrite request of 4 bytes made to IIO Part3 by the CPUevent=0xc0,ch_mask=8,fc_mask=7,umask=101Counts every write request of 4 bytes of data made to the MMIO space of a card on IIO Part3 by a unit on the main die (generally a core) or by another IIO unit. 
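
The part descriptions above repeat a fixed lane layout: Part0 starts at lane 0, Part1 at lane 4, Part2 at lane 8, and Part3 at lane 12 of the 16 lanes. A small lookup capturing just that stated mapping:

    # Starting lane of each IIO part within the 16-lane bus, as stated
    # repeatedly in the descriptions above.
    PART_START_LANE = {0: 0, 1: 4, 2: 8, 3: 12}

    def part_for_lane(lane: int) -> int:
        # Which part a given lane (0-15) belongs to, assuming 4 lanes per part.
        return lane // 4

    assert part_for_lane(10) == 2
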
In the general case, Part3 refers to a x4 PCIe card plugged into the fourth slot of a PCIe riser card, but it could refer to any device attached to the IIO unit using the lanes starting at lane 12 of the 16 lanes supported by the busunc_iio_data_req_by_cpu.mem_write.vtd0uncore ioData requested by the CPU; Core writing to Card's MMIO spaceevent=0xc0,ch_mask=0x10,fc_mask=7,umask=101Number of double word (4 bytes) requests initiated by the main die to the attached device.; VTd - Type 0unc_iio_data_req_by_cpu.mem_write.vtd1uncore ioData requested by the CPU; Core writing to Card's MMIO spaceevent=0xc0,ch_mask=0x20,fc_mask=7,umask=101Number of double word (4 bytes) requests initiated by the main die to the attached device.; VTd - Type 1unc_iio_data_req_by_cpu.peer_read.part0uncore ioPeer to peer read request for 4 bytes made by a different IIO unit to IIO Part0event=0xc0,ch_mask=1,fc_mask=7,umask=801Counts every peer to peer read request for 4 bytes of data made by a different IIO unit to the MMIO space of a card on IIO Part0. Does not include requests made by the same IIO unit. In the general case, Part0 refers to a standard PCIe card of any size (x16,x8,x4) that is plugged directly into one of the PCIe slots. Part0 could also refer to any device plugged into the first slot of a PCIe riser card or to a device attached to the IIO unit which starts its use of the bus using lane 0 of the 16 lanes supported by the busunc_iio_data_req_by_cpu.peer_read.part1uncore ioPeer to peer read request for 4 bytes made by a different IIO unit to IIO Part1event=0xc0,ch_mask=2,fc_mask=7,umask=801Counts every peer to peer read request for 4 bytes of data made by a different IIO unit to the MMIO space of a card on IIO Part1. Does not include requests made by the same IIO unit. In the general case, Part1 refers to a x4 PCIe card plugged into the second slot of a PCIe riser card, but it could refer to any x4 device attached to the IIO unit using lanes starting at lane 4 of the 16 lanes supported by the busunc_iio_data_req_by_cpu.peer_read.part2uncore ioPeer to peer read request for 4 bytes made by a different IIO unit to IIO Part2event=0xc0,ch_mask=4,fc_mask=7,umask=801Counts every peer to peer read request for 4 bytes of data made by a different IIO unit to the MMIO space of a card on IIO Part2. Does not include requests made by the same IIO unit. In the general case, Part2 refers to a x4 or x8 PCIe card plugged into the third slot of a PCIe riser card, but it could refer to any x4 or x8 device attached to the IIO unit and using lanes starting at lane 8 of the 16 lanes supported by the busunc_iio_data_req_by_cpu.peer_read.part3uncore ioPeer to peer read request for 4 bytes made by a different IIO unit to IIO Part3event=0xc0,ch_mask=8,fc_mask=7,umask=801Counts every peer to peer read request for 4 bytes of data made by a different IIO unit to the MMIO space of a card on IIO Part3. Does not include requests made by the same IIO unit. 
In the general case, Part3 refers to a x4 PCIe card plugged into the fourth slot of a PCIe riser card, but it could refer to any device attached to the IIO unit using the lanes starting at lane 12 of the 16 lanes supported by the busunc_iio_data_req_by_cpu.peer_read.vtd0uncore ioData requested by the CPU; Another card (different IIO stack) reading from this cardevent=0xc0,ch_mask=0x10,fc_mask=7,umask=801Number of double word (4 bytes) requests initiated by the main die to the attached device.; VTd - Type 0unc_iio_data_req_by_cpu.peer_read.vtd1uncore ioData requested by the CPU; Another card (different IIO stack) reading from this cardevent=0xc0,ch_mask=0x20,fc_mask=7,umask=801Number of double word (4 bytes) requests initiated by the main die to the attached device.; VTd - Type 1unc_iio_data_req_by_cpu.peer_write.part0uncore ioPeer to peer write request of 4 bytes made to IIO Part0 by a different IIO unitevent=0xc0,ch_mask=1,fc_mask=7,umask=201Counts every peer to peer write request of 4 bytes of data made to the MMIO space of a card on IIO Part0 by a different IIO unit. Does not include requests made by the same IIO unit.  In the general case, Part0 refers to a standard PCIe card of any size (x16,x8,x4) that is plugged directly into one of the PCIe slots. Part0 could also refer to any device plugged into the first slot of a PCIe riser card or to a device attached to the IIO unit which starts its use of the bus using lane 0 of the 16 lanes supported by the busunc_iio_data_req_by_cpu.peer_write.part1uncore ioPeer to peer write request of 4 bytes made to IIO Part1 by a different IIO unitevent=0xc0,ch_mask=2,fc_mask=7,umask=201Counts every peer to peer write request of 4 bytes of data made to the MMIO space of a card on IIO Part1 by a different IIO unit. Does not include requests made by the same IIO unit. In the general case, Part1 refers to a x4 PCIe card plugged into the second slot of a PCIe riser card, but it could refer to any x4 device attached to the IIO unit using lanes starting at lane 4 of the 16 lanes supported by the busunc_iio_data_req_by_cpu.peer_write.part2uncore ioPeer to peer write request of 4 bytes made to IIO Part2 by a different IIO unitevent=0xc0,ch_mask=4,fc_mask=7,umask=201Counts every peer to peer write request of 4 bytes of data made to the MMIO space of a card on IIO Part2 by a different IIO unit. Does not include requests made by the same IIO unit. In the general case, Part2 refers to a x4 or x8 PCIe card plugged into the third slot of a PCIe riser card, but it could refer to any x4 or x8 device attached to the IIO unit and using lanes starting at lane 8 of the 16 lanes supported by the busunc_iio_data_req_by_cpu.peer_write.part3uncore ioPeer to peer write request of 4 bytes made to IIO Part3 by a different IIO unitevent=0xc0,ch_mask=8,fc_mask=7,umask=201Counts every peer to peer write request of 4 bytes of data made to the MMIO space of a card on IIO Part3 by a different IIO unit. Does not include requests made by the same IIO unit. 
In the general case, Part3 refers to a x4 PCIe card plugged into the fourth slot of a PCIe riser card, but it could refer to any device attached to the IIO unit using the lanes starting at lane 12 of the 16 lanes supported by the busunc_iio_data_req_by_cpu.peer_write.vtd0uncore ioData requested by the CPU; Another card (different IIO stack) writing to this cardevent=0xc0,ch_mask=0x10,fc_mask=7,umask=201Number of double word (4 bytes) requests initiated by the main die to the attached device.; VTd - Type 0unc_iio_data_req_by_cpu.peer_write.vtd1uncore ioData requested by the CPU; Another card (different IIO stack) writing to this cardevent=0xc0,ch_mask=0x20,fc_mask=7,umask=201Number of double word (4 bytes) requests initiated by the main die to the attached device.; VTd - Type 1unc_iio_data_req_of_cpu.atomic.part0uncore ioData requested of the CPU; Atomic requests targeting DRAMevent=0x83,ch_mask=1,fc_mask=7,umask=0x1001Number of double word (4 bytes) requests the attached device made of the main die.; x16 card plugged in to stack, Or x8 card plugged in to Lane 0/1, Or x4 card is plugged in to slot 0unc_iio_data_req_of_cpu.atomic.part1uncore ioData requested of the CPU; Atomic requests targeting DRAMevent=0x83,ch_mask=2,fc_mask=7,umask=0x1001Number of double word (4 bytes) requests the attached device made of the main die.; x4 card is plugged in to slot 1unc_iio_data_req_of_cpu.atomic.part2uncore ioData requested of the CPU; Atomic requests targeting DRAMevent=0x83,ch_mask=4,fc_mask=7,umask=0x1001Number of double word (4 bytes) requests the attached device made of the main die.; x8 card plugged in to Lane 2/3, Or x4 card is plugged in to slot 2unc_iio_data_req_of_cpu.atomic.part3uncore ioData requested of the CPU; Atomic requests targeting DRAMevent=0x83,ch_mask=8,fc_mask=7,umask=0x1001Number of double word (4 bytes) requests the attached device made of the main die.; x4 card is plugged in to slot 3unc_iio_data_req_of_cpu.atomic.vtd0uncore ioData requested of the CPU; Atomic requests targeting DRAMevent=0x83,ch_mask=0x10,fc_mask=7,umask=0x1001Number of double word (4 bytes) requests the attached device made of the main die.; VTd - Type 0unc_iio_data_req_of_cpu.atomic.vtd1uncore ioData requested of the CPU; Atomic requests targeting DRAMevent=0x83,ch_mask=0x20,fc_mask=7,umask=0x1001Number of double word (4 bytes) requests the attached device made of the main die.; VTd - Type 1unc_iio_data_req_of_cpu.atomiccmp.part0uncore ioData requested of the CPU; Completion of atomic requests targeting DRAMevent=0x83,ch_mask=1,fc_mask=7,umask=0x2001Number of double word (4 bytes) requests the attached device made of the main die.; x16 card plugged in to stack, Or x8 card plugged in to Lane 0/1, Or x4 card is plugged in to slot 0unc_iio_data_req_of_cpu.atomiccmp.part1uncore ioData requested of the CPU; Completion of atomic requests targeting DRAMevent=0x83,ch_mask=2,fc_mask=7,umask=0x2001Number of double word (4 bytes) requests the attached device made of the main die.; x4 card is plugged in to slot 1unc_iio_data_req_of_cpu.atomiccmp.part2uncore ioData requested of the CPU; Completion of atomic requests targeting DRAMevent=0x83,ch_mask=4,fc_mask=7,umask=0x2001Number of double word (4 bytes) requests the attached device made of the main die.; x8 card plugged in to Lane 2/3, Or x4 card is plugged in to slot 2unc_iio_data_req_of_cpu.atomiccmp.part3uncore ioData requested of the CPU; Completion of atomic requests targeting DRAMevent=0x83,ch_mask=8,fc_mask=7,umask=0x2001Number of double word (4 bytes) requests 
the attached device made of the main die.; x4 card is plugged in to slot 3unc_iio_data_req_of_cpu.mem_read.part0uncore ioPCI Express bandwidth reading at IIO, part 0event=0x83,ch_mask=1,fc_mask=7,umask=401Counts every read request for 4 bytes of data made by IIO Part0 to a unit on the main die (generally memory). In the general case, Part0 refers to a standard PCIe card of any size (x16,x8,x4) that is plugged directly into one of the PCIe slots. Part0 could also refer to any device plugged into the first slot of a PCIe riser card or to a device attached to the IIO unit which starts its use of the bus using lane 0 of the 16 lanes supported by the busunc_iio_data_req_of_cpu.mem_read.part1uncore ioPCI Express bandwidth reading at IIO, part 1event=0x83,ch_mask=2,fc_mask=7,umask=401Counts every read request for 4 bytes of data made by IIO Part1 to a unit on the main die (generally memory). In the general case, Part1 refers to a x4 PCIe card plugged into the second slot of a PCIe riser card, but it could refer to any x4 device attached to the IIO unit using lanes starting at lane 4 of the 16 lanes supported by the busunc_iio_data_req_of_cpu.mem_read.part2uncore ioPCI Express bandwidth reading at IIO, part 2event=0x83,ch_mask=4,fc_mask=7,umask=401Counts every read request for 4 bytes of data made by IIO Part2 to a unit on the main die (generally memory). In the general case, Part2 refers to a x4 or x8 PCIe card plugged into the third slot of a PCIe riser card, but it could refer to any x4 or x8 device attached to the IIO unit and using lanes starting at lane 8 of the 16 lanes supported by the busunc_iio_data_req_of_cpu.mem_read.part3uncore ioPCI Express bandwidth reading at IIO, part 3event=0x83,ch_mask=8,fc_mask=7,umask=401Counts every read request for 4 bytes of data made by IIO Part3 to a unit on the main die (generally memory). In the general case, Part3 refers to a x4 PCIe card plugged into the fourth slot of a PCIe riser card, but it could refer to any device attached to the IIO unit using the lanes starting at lane 12 of the 16 lanes supported by the busunc_iio_data_req_of_cpu.mem_read.vtd0uncore ioData requested of the CPU; Card reading from DRAMevent=0x83,ch_mask=0x10,fc_mask=7,umask=401Number of double word (4 bytes) requests the attached device made of the main die.; VTd - Type 0unc_iio_data_req_of_cpu.mem_read.vtd1uncore ioData requested of the CPU; Card reading from DRAMevent=0x83,ch_mask=0x20,fc_mask=7,umask=401Number of double word (4 bytes) requests the attached device made of the main die.; VTd - Type 1unc_iio_data_req_of_cpu.mem_write.part0uncore ioPCI Express bandwidth writing at IIO, part 0event=0x83,ch_mask=1,fc_mask=7,umask=101Counts every write request of 4 bytes of data made by IIO Part0 to a unit on the main die (generally memory). In the general case, Part0 refers to a standard PCIe card of any size (x16,x8,x4) that is plugged directly into one of the PCIe slots. Part0 could also refer to any device plugged into the first slot of a PCIe riser card or to a device attached to the IIO unit which starts its use of the bus using lane 0 of the 16 lanes supported by the busunc_iio_data_req_of_cpu.mem_write.part1uncore ioPCI Express bandwidth writing at IIO, part 1event=0x83,ch_mask=2,fc_mask=7,umask=101Counts every write request of 4 bytes of data made by IIO Part1 to a unit on the main die (generally memory). 
In the general case, Part1 refers to a x4 PCIe card plugged into the second slot of a PCIe riser card, but it could refer to any x4 device attached to the IIO unit using lanes starting at lane 4 of the 16 lanes supported by the busunc_iio_data_req_of_cpu.mem_write.part2uncore ioPCI Express bandwidth writing at IIO, part 2event=0x83,ch_mask=4,fc_mask=7,umask=101Counts every write request of 4 bytes of data made by IIO Part2 to a unit on the main die (generally memory). In the general case, Part2 refers to a x4 or x8 PCIe card plugged into the third slot of a PCIe riser card, but it could refer to any x4 or x8 device attached to the IIO unit and using lanes starting at lane 8 of the 16 lanes supported by the busunc_iio_data_req_of_cpu.mem_write.part3uncore ioPCI Express bandwidth writing at IIO, part 3event=0x83,ch_mask=8,fc_mask=7,umask=101Counts every write request of 4 bytes of data made by IIO Part3 to a unit on the main die (generally memory). In the general case, Part3 refers to a x4 PCIe card plugged into the fourth slot of a PCIe riser card, but it could refer to any device attached to the IIO unit using the lanes starting at lane 12 of the 16 lanes supported by the busunc_iio_data_req_of_cpu.mem_write.vtd0uncore ioData requested of the CPU; Card writing to DRAMevent=0x83,ch_mask=0x10,fc_mask=7,umask=101Number of double word (4 bytes) requests the attached device made of the main die.; VTd - Type 0unc_iio_data_req_of_cpu.mem_write.vtd1uncore ioData requested of the CPU; Card writing to DRAMevent=0x83,ch_mask=0x20,fc_mask=7,umask=101Number of double word (4 bytes) requests the attached device made of the main die.; VTd - Type 1unc_iio_data_req_of_cpu.msg.part0uncore ioData requested of the CPU; Messagesevent=0x83,ch_mask=1,fc_mask=7,umask=0x4001Number of double word (4 bytes) requests the attached device made of the main die.; x16 card plugged in to stack, Or x8 card plugged in to Lane 0/1, Or x4 card is plugged in to slot 0unc_iio_data_req_of_cpu.msg.part1uncore ioData requested of the CPU; Messagesevent=0x83,ch_mask=2,fc_mask=7,umask=0x4001Number of double word (4 bytes) requests the attached device made of the main die.; x4 card is plugged in to slot 1unc_iio_data_req_of_cpu.msg.part2uncore ioData requested of the CPU; Messagesevent=0x83,ch_mask=4,fc_mask=7,umask=0x4001Number of double word (4 bytes) requests the attached device made of the main die.; x8 card plugged in to Lane 2/3, Or x4 card is plugged in to slot 2unc_iio_data_req_of_cpu.msg.part3uncore ioData requested of the CPU; Messagesevent=0x83,ch_mask=8,fc_mask=7,umask=0x4001Number of double word (4 bytes) requests the attached device made of the main die.; x4 card is plugged in to slot 3unc_iio_data_req_of_cpu.msg.vtd0uncore ioData requested of the CPU; Messagesevent=0x83,ch_mask=0x10,fc_mask=7,umask=0x4001Number of double word (4 bytes) requests the attached device made of the main die.; VTd - Type 0unc_iio_data_req_of_cpu.msg.vtd1uncore ioData requested of the CPU; Messagesevent=0x83,ch_mask=0x20,fc_mask=7,umask=0x4001Number of double word (4 bytes) requests the attached device made of the main die.; VTd - Type 1unc_iio_data_req_of_cpu.peer_read.part0uncore ioPeer to peer read request for 4 bytes made by IIO Part0 to an IIO targetevent=0x83,ch_mask=1,fc_mask=7,umask=801Counts every peer to peer read request for 4 bytes of data made by IIO Part0 to the MMIO space of an IIO target. In the general case, Part0 refers to a standard PCIe card of any size (x16,x8,x4) that is plugged directly into one of the PCIe slots. 
unc_iio_data_req_of_cpu.peer_read.part0uncore ioPeer to peer read request for 4 bytes made by IIO Part0 to an IIO targetevent=0x83,ch_mask=1,fc_mask=7,umask=801Counts every peer to peer read request for 4 bytes of data made by IIO Part0 to the MMIO space of an IIO target. In the general case, Part0 refers to a standard PCIe card of any size (x16,x8,x4) that is plugged directly into one of the PCIe slots. Part0 could also refer to any device plugged into the first slot of a PCIe riser card or to a device attached to the IIO unit which starts its use of the bus using lane 0 of the 16 lanes supported by the busunc_iio_data_req_of_cpu.peer_read.part1uncore ioPeer to peer read request for 4 bytes made by IIO Part1 to an IIO targetevent=0x83,ch_mask=2,fc_mask=7,umask=801Counts every peer to peer read request for 4 bytes of data made by IIO Part1 to the MMIO space of an IIO target. In the general case, Part1 refers to a x4 PCIe card plugged into the second slot of a PCIe riser card, but it could refer to any x4 device attached to the IIO unit using lanes starting at lane 4 of the 16 lanes supported by the busunc_iio_data_req_of_cpu.peer_read.part2uncore ioPeer to peer read request for 4 bytes made by IIO Part2 to an IIO targetevent=0x83,ch_mask=4,fc_mask=7,umask=801Counts every peer to peer read request for 4 bytes of data made by IIO Part2 to the MMIO space of an IIO target. In the general case, Part2 refers to a x4 or x8 PCIe card plugged into the third slot of a PCIe riser card, but it could refer to any x4 or x8 device attached to the IIO unit and using lanes starting at lane 8 of the 16 lanes supported by the busunc_iio_data_req_of_cpu.peer_read.part3uncore ioPeer to peer read request for 4 bytes made by IIO Part3 to an IIO targetevent=0x83,ch_mask=8,fc_mask=7,umask=801Counts every peer to peer read request for 4 bytes of data made by IIO Part3 to the MMIO space of an IIO target. In the general case, Part3 refers to a x4 PCIe card plugged into the fourth slot of a PCIe riser card, but it could refer to any device attached to the IIO unit using the lanes starting at lane 12 of the 16 lanes supported by the busunc_iio_data_req_of_cpu.peer_read.vtd0uncore ioData requested of the CPU; Card reading from another Card (same or different stack)event=0x83,ch_mask=0x10,fc_mask=7,umask=801Number of double word (4 bytes) requests the attached device made of the main die.; VTd - Type 0unc_iio_data_req_of_cpu.peer_read.vtd1uncore ioData requested of the CPU; Card reading from another Card (same or different stack)event=0x83,ch_mask=0x20,fc_mask=7,umask=801Number of double word (4 bytes) requests the attached device made of the main die.; VTd - Type 1unc_iio_data_req_of_cpu.peer_write.part0uncore ioPeer to peer write request of 4 bytes made by IIO Part0 to an IIO targetevent=0x83,ch_mask=1,fc_mask=7,umask=201Counts every peer to peer write request of 4 bytes of data made by IIO Part0 to the MMIO space of an IIO target. In the general case, Part0 refers to a standard PCIe card of any size (x16,x8,x4) that is plugged directly into one of the PCIe slots. Part0 could also refer to any device plugged into the first slot of a PCIe riser card or to a device attached to the IIO unit which starts its use of the bus using lane 0 of the 16 lanes supported by the busunc_iio_data_req_of_cpu.peer_write.part1uncore ioPeer to peer write request of 4 bytes made by IIO Part1 to an IIO targetevent=0x83,ch_mask=2,fc_mask=7,umask=201Counts every peer to peer write request of 4 bytes of data made by IIO Part1 to the MMIO space of an IIO target. 
In the general case, Part1 refers to a x4 PCIe card plugged into the second slot of a PCIe riser card, but it could refer to any x4 device attached to the IIO unit using lanes starting at lane 4 of the 16 lanes supported by the busunc_iio_data_req_of_cpu.peer_write.part2uncore ioPeer to peer write request of 4 bytes made by IIO Part2 to an IIO targetevent=0x83,ch_mask=4,fc_mask=7,umask=201Counts every peer to peer write request of 4 bytes of data made by IIO Part2 to the MMIO space of an IIO target. In the general case, Part2 refers to a x4 or x8 PCIe card plugged into the third slot of a PCIe riser card, but it could refer to any x4 or x8 device attached to the IIO unit and using lanes starting at lane 8 of the 16 lanes supported by the busunc_iio_data_req_of_cpu.peer_write.part3uncore ioPeer to peer write request of 4 bytes made by IIO Part3 to an IIO targetevent=0x83,ch_mask=8,fc_mask=7,umask=201Counts every peer to peer write request of 4 bytes of data made by IIO Part3 to the MMIO space of an IIO target. In the general case, Part3 refers to a x4 PCIe card plugged into the fourth slot of a PCIe riser card, but it could refer to any device attached to the IIO unit using the lanes starting at lane 12 of the 16 lanes supported by the busunc_iio_data_req_of_cpu.peer_write.vtd0uncore ioData requested of the CPU; Card writing to another Card (same or different stack)event=0x83,ch_mask=0x10,fc_mask=7,umask=201Number of double word (4 bytes) requests the attached device made of the main die.; VTd - Type 0unc_iio_data_req_of_cpu.peer_write.vtd1uncore ioData requested of the CPU; Card writing to another Card (same or different stack)event=0x83,ch_mask=0x20,fc_mask=7,umask=201Number of double word (4 bytes) requests the attached device made of the main die.; VTd - Type 1unc_iio_link_num_corr_erruncore ioNum Link Correctable Errorsevent=0xf01unc_iio_link_num_retriesuncore ioNum Link Retriesevent=0xe01unc_iio_mask_matchuncore ioNumber packets that passed the Mask/Match Filterevent=0x2101unc_iio_mask_match_and.bus0uncore ioAND Mask/match for debug bus; Non-PCIE busevent=2,umask=101Asserted if all bits specified by mask matchunc_iio_mask_match_and.bus0_bus1uncore ioAND Mask/match for debug bus; Non-PCIE bus and PCIE busevent=2,umask=801Asserted if all bits specified by mask matchunc_iio_mask_match_and.bus0_not_bus1uncore ioAND Mask/match for debug bus; Non-PCIE bus and !(PCIE bus)event=2,umask=401Asserted if all bits specified by mask matchunc_iio_mask_match_and.bus1uncore ioAND Mask/match for debug bus; PCIE busevent=2,umask=201Asserted if all bits specified by mask matchunc_iio_mask_match_and.not_bus0_bus1uncore ioAND Mask/match for debug bus; !(Non-PCIE bus) and PCIE busevent=2,umask=0x1001Asserted if all bits specified by mask matchunc_iio_mask_match_and.not_bus0_not_bus1uncore ioAND Mask/match for debug bus; !(Non-PCIE bus) and !(PCIE bus)event=2,umask=0x2001Asserted if all bits specified by mask matchunc_iio_mask_match_or.bus0uncore ioOR Mask/match for debug bus; Non-PCIE busevent=3,umask=101Asserted if any bits specified by mask matchunc_iio_mask_match_or.bus0_bus1uncore ioOR Mask/match for debug bus; Non-PCIE bus and PCIE busevent=3,umask=801Asserted if any bits specified by mask matchunc_iio_mask_match_or.bus0_not_bus1uncore ioOR Mask/match for debug bus; Non-PCIE bus and !(PCIE bus)event=3,umask=401Asserted if any bits specified by mask matchunc_iio_mask_match_or.bus1uncore ioOR Mask/match for debug bus; PCIE busevent=3,umask=201Asserted if any bits specified by mask matchunc_iio_mask_match_or.not_bus0_bus1uncore ioOR 
Mask/match for debug bus; !(Non-PCIE bus) and PCIE busevent=3,umask=0x1001Asserted if any bits specified by mask matchunc_iio_mask_match_or.not_bus0_not_bus1uncore ioOR Mask/match for debug bus; !(Non-PCIE bus) and !(PCIE bus)event=3,umask=0x2001Asserted if any bits specified by mask matchunc_iio_nothinguncore ioCounting disabledevent=001unc_iio_payload_bytes_in.atomic.part0uncore ioThis event is deprecated. Refer to new event UNC_IIO_DATA_REQ_OF_CPU.ATOMIC.PART0event=0x83,ch_mask=1,fc_mask=7,umask=0x1011unc_iio_payload_bytes_in.atomic.part1uncore ioThis event is deprecated. Refer to new event UNC_IIO_DATA_REQ_OF_CPU.ATOMIC.PART1event=0x83,ch_mask=2,fc_mask=7,umask=0x1011unc_iio_payload_bytes_in.atomic.part2uncore ioThis event is deprecated. Refer to new event UNC_IIO_DATA_REQ_OF_CPU.ATOMIC.PART2event=0x83,ch_mask=4,fc_mask=7,umask=0x1011unc_iio_payload_bytes_in.atomic.part3uncore ioThis event is deprecated. Refer to new event UNC_IIO_DATA_REQ_OF_CPU.ATOMIC.PART3event=0x83,ch_mask=8,fc_mask=7,umask=0x1011unc_iio_payload_bytes_in.atomic.vtd0uncore ioThis event is deprecated. Refer to new event UNC_IIO_DATA_REQ_OF_CPU.ATOMIC.VTD0event=0x83,ch_mask=0x10,fc_mask=7,umask=0x1011unc_iio_payload_bytes_in.atomic.vtd1uncore ioThis event is deprecated. Refer to new event UNC_IIO_DATA_REQ_OF_CPU.ATOMIC.VTD1event=0x83,ch_mask=0x20,fc_mask=7,umask=0x1011unc_iio_payload_bytes_in.atomiccmp.part0uncore ioThis event is deprecated. Refer to new event UNC_IIO_DATA_REQ_OF_CPU.ATOMICCMP.PART0event=0x83,ch_mask=1,fc_mask=7,umask=0x2011unc_iio_payload_bytes_in.atomiccmp.part1uncore ioThis event is deprecated. Refer to new event UNC_IIO_DATA_REQ_OF_CPU.ATOMICCMP.PART1event=0x83,ch_mask=2,fc_mask=7,umask=0x2011unc_iio_payload_bytes_in.atomiccmp.part2uncore ioThis event is deprecated. Refer to new event UNC_IIO_DATA_REQ_OF_CPU.ATOMICCMP.PART2event=0x83,ch_mask=4,fc_mask=7,umask=0x2011unc_iio_payload_bytes_in.atomiccmp.part3uncore ioThis event is deprecated. Refer to new event UNC_IIO_DATA_REQ_OF_CPU.ATOMICCMP.PART3event=0x83,ch_mask=8,fc_mask=7,umask=0x2011unc_iio_payload_bytes_in.mem_read.part0uncore ioThis event is deprecated. Refer to new event UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART0event=0x83,ch_mask=1,fc_mask=7,umask=411unc_iio_payload_bytes_in.mem_read.part1uncore ioThis event is deprecated. Refer to new event UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART1event=0x83,ch_mask=2,fc_mask=7,umask=411unc_iio_payload_bytes_in.mem_read.part2uncore ioThis event is deprecated. Refer to new event UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART2event=0x83,ch_mask=4,fc_mask=7,umask=411unc_iio_payload_bytes_in.mem_read.part3uncore ioThis event is deprecated. Refer to new event UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART3event=0x83,ch_mask=8,fc_mask=7,umask=411unc_iio_payload_bytes_in.mem_read.vtd0uncore ioThis event is deprecated. Refer to new event UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.VTD0event=0x83,ch_mask=0x10,fc_mask=7,umask=411unc_iio_payload_bytes_in.mem_read.vtd1uncore ioThis event is deprecated. Refer to new event UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.VTD1event=0x83,ch_mask=0x20,fc_mask=7,umask=411unc_iio_payload_bytes_in.mem_write.part0uncore ioThis event is deprecated. Refer to new event UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART0event=0x83,ch_mask=1,fc_mask=7,umask=111unc_iio_payload_bytes_in.mem_write.part1uncore ioThis event is deprecated. Refer to new event UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART1event=0x83,ch_mask=2,fc_mask=7,umask=111unc_iio_payload_bytes_in.mem_write.part2uncore ioThis event is deprecated. 
Refer to new event UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART2event=0x83,ch_mask=4,fc_mask=7,umask=111unc_iio_payload_bytes_in.mem_write.part3uncore ioThis event is deprecated. Refer to new event UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART3event=0x83,ch_mask=8,fc_mask=7,umask=111unc_iio_payload_bytes_in.mem_write.vtd0uncore ioThis event is deprecated. Refer to new event UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.VTD0event=0x83,ch_mask=0x10,fc_mask=7,umask=111unc_iio_payload_bytes_in.mem_write.vtd1uncore ioThis event is deprecated. Refer to new event UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.VTD1event=0x83,ch_mask=0x20,fc_mask=7,umask=111unc_iio_payload_bytes_in.msg.part0uncore ioThis event is deprecated. Refer to new event UNC_IIO_DATA_REQ_OF_CPU.MSG.PART0event=0x83,ch_mask=1,fc_mask=7,umask=0x4011unc_iio_payload_bytes_in.msg.part1uncore ioThis event is deprecated. Refer to new event UNC_IIO_DATA_REQ_OF_CPU.MSG.PART1event=0x83,ch_mask=2,fc_mask=7,umask=0x4011unc_iio_payload_bytes_in.msg.part2uncore ioThis event is deprecated. Refer to new event UNC_IIO_DATA_REQ_OF_CPU.MSG.PART2event=0x83,ch_mask=4,fc_mask=7,umask=0x4011unc_iio_payload_bytes_in.msg.part3uncore ioThis event is deprecated. Refer to new event UNC_IIO_DATA_REQ_OF_CPU.MSG.PART3event=0x83,ch_mask=8,fc_mask=7,umask=0x4011unc_iio_payload_bytes_in.msg.vtd0uncore ioThis event is deprecated. Refer to new event UNC_IIO_DATA_REQ_OF_CPU.MSG.VTD0event=0x83,ch_mask=0x10,fc_mask=7,umask=0x4011unc_iio_payload_bytes_in.msg.vtd1uncore ioThis event is deprecated. Refer to new event UNC_IIO_DATA_REQ_OF_CPU.MSG.VTD1event=0x83,ch_mask=0x20,fc_mask=7,umask=0x4011unc_iio_payload_bytes_in.peer_read.part0uncore ioThis event is deprecated. Refer to new event UNC_IIO_DATA_REQ_OF_CPU.PEER_READ.PART0event=0x83,ch_mask=1,fc_mask=7,umask=811unc_iio_payload_bytes_in.peer_read.part1uncore ioThis event is deprecated. Refer to new event UNC_IIO_DATA_REQ_OF_CPU.PEER_READ.PART1event=0x83,ch_mask=2,fc_mask=7,umask=811unc_iio_payload_bytes_in.peer_read.part2uncore ioThis event is deprecated. Refer to new event UNC_IIO_DATA_REQ_OF_CPU.PEER_READ.PART2event=0x83,ch_mask=4,fc_mask=7,umask=811unc_iio_payload_bytes_in.peer_read.part3uncore ioThis event is deprecated. Refer to new event UNC_IIO_DATA_REQ_OF_CPU.PEER_READ.PART3event=0x83,ch_mask=8,fc_mask=7,umask=811unc_iio_payload_bytes_in.peer_read.vtd0uncore ioThis event is deprecated. Refer to new event UNC_IIO_DATA_REQ_OF_CPU.PEER_READ.VTD0event=0x83,ch_mask=0x10,fc_mask=7,umask=811unc_iio_payload_bytes_in.peer_read.vtd1uncore ioThis event is deprecated. Refer to new event UNC_IIO_DATA_REQ_OF_CPU.PEER_READ.VTD1event=0x83,ch_mask=0x20,fc_mask=7,umask=811unc_iio_payload_bytes_in.peer_write.part0uncore ioThis event is deprecated. Refer to new event UNC_IIO_DATA_REQ_OF_CPU.PEER_WRITE.PART0event=0x83,ch_mask=1,fc_mask=7,umask=211unc_iio_payload_bytes_in.peer_write.part1uncore ioThis event is deprecated. Refer to new event UNC_IIO_DATA_REQ_OF_CPU.PEER_WRITE.PART1event=0x83,ch_mask=2,fc_mask=7,umask=211unc_iio_payload_bytes_in.peer_write.part2uncore ioThis event is deprecated. Refer to new event UNC_IIO_DATA_REQ_OF_CPU.PEER_WRITE.PART2event=0x83,ch_mask=4,fc_mask=7,umask=211unc_iio_payload_bytes_in.peer_write.part3uncore ioThis event is deprecated. Refer to new event UNC_IIO_DATA_REQ_OF_CPU.PEER_WRITE.PART3event=0x83,ch_mask=8,fc_mask=7,umask=211unc_iio_payload_bytes_in.peer_write.vtd0uncore ioThis event is deprecated. 
Refer to new event UNC_IIO_DATA_REQ_OF_CPU.PEER_WRITE.VTD0event=0x83,ch_mask=0x10,fc_mask=7,umask=211unc_iio_payload_bytes_in.peer_write.vtd1uncore ioThis event is deprecated. Refer to new event UNC_IIO_DATA_REQ_OF_CPU.PEER_WRITE.VTD1event=0x83,ch_mask=0x20,fc_mask=7,umask=211unc_iio_payload_bytes_out.cfg_read.part0uncore ioThis event is deprecated. Refer to new event UNC_IIO_DATA_REQ_BY_CPU.CFG_READ.PART0event=0xc0,ch_mask=1,fc_mask=7,umask=0x4011unc_iio_payload_bytes_out.cfg_read.part1uncore ioThis event is deprecated. Refer to new event UNC_IIO_DATA_REQ_BY_CPU.CFG_READ.PART1event=0xc0,ch_mask=2,fc_mask=7,umask=0x4011unc_iio_payload_bytes_out.cfg_read.part2uncore ioThis event is deprecated. Refer to new event UNC_IIO_DATA_REQ_BY_CPU.CFG_READ.PART2event=0xc0,ch_mask=4,fc_mask=7,umask=0x4011unc_iio_payload_bytes_out.cfg_read.part3uncore ioThis event is deprecated. Refer to new event UNC_IIO_DATA_REQ_BY_CPU.CFG_READ.PART3event=0xc0,ch_mask=8,fc_mask=7,umask=0x4011unc_iio_payload_bytes_out.cfg_read.vtd0uncore ioThis event is deprecated. Refer to new event UNC_IIO_DATA_REQ_BY_CPU.CFG_READ.VTD0event=0xc0,ch_mask=0x10,fc_mask=7,umask=0x4011unc_iio_payload_bytes_out.cfg_read.vtd1uncore ioThis event is deprecated. Refer to new event UNC_IIO_DATA_REQ_BY_CPU.CFG_READ.VTD1event=0xc0,ch_mask=0x20,fc_mask=7,umask=0x4011unc_iio_payload_bytes_out.cfg_write.part0uncore ioThis event is deprecated. Refer to new event UNC_IIO_DATA_REQ_BY_CPU.CFG_WRITE.PART0event=0xc0,ch_mask=1,fc_mask=7,umask=0x1011unc_iio_payload_bytes_out.cfg_write.part1uncore ioThis event is deprecated. Refer to new event UNC_IIO_DATA_REQ_BY_CPU.CFG_WRITE.PART1event=0xc0,ch_mask=2,fc_mask=7,umask=0x1011unc_iio_payload_bytes_out.cfg_write.part2uncore ioThis event is deprecated. Refer to new event UNC_IIO_DATA_REQ_BY_CPU.CFG_WRITE.PART2event=0xc0,ch_mask=4,fc_mask=7,umask=0x1011unc_iio_payload_bytes_out.cfg_write.part3uncore ioThis event is deprecated. Refer to new event UNC_IIO_DATA_REQ_BY_CPU.CFG_WRITE.PART3event=0xc0,ch_mask=8,fc_mask=7,umask=0x1011unc_iio_payload_bytes_out.cfg_write.vtd0uncore ioThis event is deprecated. Refer to new event UNC_IIO_DATA_REQ_BY_CPU.CFG_WRITE.VTD0event=0xc0,ch_mask=0x10,fc_mask=7,umask=0x1011unc_iio_payload_bytes_out.cfg_write.vtd1uncore ioThis event is deprecated. Refer to new event UNC_IIO_DATA_REQ_BY_CPU.CFG_WRITE.VTD1event=0xc0,ch_mask=0x20,fc_mask=7,umask=0x1011unc_iio_payload_bytes_out.io_read.part0uncore ioThis event is deprecated. Refer to new event UNC_IIO_DATA_REQ_BY_CPU.IO_READ.PART0event=0xc0,ch_mask=1,fc_mask=7,umask=0x8011unc_iio_payload_bytes_out.io_read.part1uncore ioThis event is deprecated. Refer to new event UNC_IIO_DATA_REQ_BY_CPU.IO_READ.PART1event=0xc0,ch_mask=2,fc_mask=7,umask=0x8011unc_iio_payload_bytes_out.io_read.part2uncore ioThis event is deprecated. Refer to new event UNC_IIO_DATA_REQ_BY_CPU.IO_READ.PART2event=0xc0,ch_mask=4,fc_mask=7,umask=0x8011unc_iio_payload_bytes_out.io_read.part3uncore ioThis event is deprecated. Refer to new event UNC_IIO_DATA_REQ_BY_CPU.IO_READ.PART3event=0xc0,ch_mask=8,fc_mask=7,umask=0x8011unc_iio_payload_bytes_out.io_read.vtd0uncore ioThis event is deprecated. Refer to new event UNC_IIO_DATA_REQ_BY_CPU.IO_READ.VTD0event=0xc0,ch_mask=0x10,fc_mask=7,umask=0x8011unc_iio_payload_bytes_out.io_read.vtd1uncore ioThis event is deprecated. Refer to new event UNC_IIO_DATA_REQ_BY_CPU.IO_READ.VTD1event=0xc0,ch_mask=0x20,fc_mask=7,umask=0x8011unc_iio_payload_bytes_out.io_write.part0uncore ioThis event is deprecated. 
Refer to new event UNC_IIO_DATA_REQ_BY_CPU.IO_WRITE.PART0event=0xc0,ch_mask=1,fc_mask=7,umask=0x2011unc_iio_payload_bytes_out.io_write.part1uncore ioThis event is deprecated. Refer to new event UNC_IIO_DATA_REQ_BY_CPU.IO_WRITE.PART1event=0xc0,ch_mask=2,fc_mask=7,umask=0x2011unc_iio_payload_bytes_out.io_write.part2uncore ioThis event is deprecated. Refer to new event UNC_IIO_DATA_REQ_BY_CPU.IO_WRITE.PART2event=0xc0,ch_mask=4,fc_mask=7,umask=0x2011unc_iio_payload_bytes_out.io_write.part3uncore ioThis event is deprecated. Refer to new event UNC_IIO_DATA_REQ_BY_CPU.IO_WRITE.PART3event=0xc0,ch_mask=8,fc_mask=7,umask=0x2011unc_iio_payload_bytes_out.io_write.vtd0uncore ioThis event is deprecated. Refer to new event UNC_IIO_DATA_REQ_BY_CPU.IO_WRITE.VTD0event=0xc0,ch_mask=0x10,fc_mask=7,umask=0x2011unc_iio_payload_bytes_out.io_write.vtd1uncore ioThis event is deprecated. Refer to new event UNC_IIO_DATA_REQ_BY_CPU.IO_WRITE.VTD1event=0xc0,ch_mask=0x20,fc_mask=7,umask=0x2011unc_iio_payload_bytes_out.mem_read.part0uncore ioThis event is deprecated. Refer to new event UNC_IIO_DATA_REQ_BY_CPU.MEM_READ.PART0event=0xc0,ch_mask=1,fc_mask=7,umask=411unc_iio_payload_bytes_out.mem_read.part1uncore ioThis event is deprecated. Refer to new event UNC_IIO_DATA_REQ_BY_CPU.MEM_READ.PART1event=0xc0,ch_mask=2,fc_mask=7,umask=411unc_iio_payload_bytes_out.mem_read.part2uncore ioThis event is deprecated. Refer to new event UNC_IIO_DATA_REQ_BY_CPU.MEM_READ.PART2event=0xc0,ch_mask=4,fc_mask=7,umask=411unc_iio_payload_bytes_out.mem_read.part3uncore ioThis event is deprecated. Refer to new event UNC_IIO_DATA_REQ_BY_CPU.MEM_READ.PART3event=0xc0,ch_mask=8,fc_mask=7,umask=411unc_iio_payload_bytes_out.mem_read.vtd0uncore ioThis event is deprecated. Refer to new event UNC_IIO_DATA_REQ_BY_CPU.MEM_READ.VTD0event=0xc0,ch_mask=0x10,fc_mask=7,umask=411unc_iio_payload_bytes_out.mem_read.vtd1uncore ioThis event is deprecated. Refer to new event UNC_IIO_DATA_REQ_BY_CPU.MEM_READ.VTD1event=0xc0,ch_mask=0x20,fc_mask=7,umask=411unc_iio_payload_bytes_out.mem_write.part0uncore ioThis event is deprecated. Refer to new event UNC_IIO_DATA_REQ_BY_CPU.MEM_WRITE.PART0event=0xc0,ch_mask=1,fc_mask=7,umask=111unc_iio_payload_bytes_out.mem_write.part1uncore ioThis event is deprecated. Refer to new event UNC_IIO_DATA_REQ_BY_CPU.MEM_WRITE.PART1event=0xc0,ch_mask=2,fc_mask=7,umask=111unc_iio_payload_bytes_out.mem_write.part2uncore ioThis event is deprecated. Refer to new event UNC_IIO_DATA_REQ_BY_CPU.MEM_WRITE.PART2event=0xc0,ch_mask=4,fc_mask=7,umask=111unc_iio_payload_bytes_out.mem_write.part3uncore ioThis event is deprecated. Refer to new event UNC_IIO_DATA_REQ_BY_CPU.MEM_WRITE.PART3event=0xc0,ch_mask=8,fc_mask=7,umask=111unc_iio_payload_bytes_out.mem_write.vtd0uncore ioThis event is deprecated. Refer to new event UNC_IIO_DATA_REQ_BY_CPU.MEM_WRITE.VTD0event=0xc0,ch_mask=0x10,fc_mask=7,umask=111unc_iio_payload_bytes_out.mem_write.vtd1uncore ioThis event is deprecated. Refer to new event UNC_IIO_DATA_REQ_BY_CPU.MEM_WRITE.VTD1event=0xc0,ch_mask=0x20,fc_mask=7,umask=111unc_iio_payload_bytes_out.peer_read.part0uncore ioThis event is deprecated. Refer to new event UNC_IIO_DATA_REQ_BY_CPU.PEER_READ.PART0event=0xc0,ch_mask=1,fc_mask=7,umask=811unc_iio_payload_bytes_out.peer_read.part1uncore ioThis event is deprecated. Refer to new event UNC_IIO_DATA_REQ_BY_CPU.PEER_READ.PART1event=0xc0,ch_mask=2,fc_mask=7,umask=811unc_iio_payload_bytes_out.peer_read.part2uncore ioThis event is deprecated. 
Refer to new event UNC_IIO_DATA_REQ_BY_CPU.PEER_READ.PART2event=0xc0,ch_mask=4,fc_mask=7,umask=811unc_iio_payload_bytes_out.peer_read.part3uncore ioThis event is deprecated. Refer to new event UNC_IIO_DATA_REQ_BY_CPU.PEER_READ.PART3event=0xc0,ch_mask=8,fc_mask=7,umask=811unc_iio_payload_bytes_out.peer_read.vtd0uncore ioThis event is deprecated. Refer to new event UNC_IIO_DATA_REQ_BY_CPU.PEER_READ.VTD0event=0xc0,ch_mask=0x10,fc_mask=7,umask=811unc_iio_payload_bytes_out.peer_read.vtd1uncore ioThis event is deprecated. Refer to new event UNC_IIO_DATA_REQ_BY_CPU.PEER_READ.VTD1event=0xc0,ch_mask=0x20,fc_mask=7,umask=811unc_iio_payload_bytes_out.peer_write.part0uncore ioThis event is deprecated. Refer to new event UNC_IIO_DATA_REQ_BY_CPU.PEER_WRITE.PART0event=0xc0,ch_mask=1,fc_mask=7,umask=211unc_iio_payload_bytes_out.peer_write.part1uncore ioThis event is deprecated. Refer to new event UNC_IIO_DATA_REQ_BY_CPU.PEER_WRITE.PART1event=0xc0,ch_mask=2,fc_mask=7,umask=211unc_iio_payload_bytes_out.peer_write.part2uncore ioThis event is deprecated. Refer to new event UNC_IIO_DATA_REQ_BY_CPU.PEER_WRITE.PART2event=0xc0,ch_mask=4,fc_mask=7,umask=211unc_iio_payload_bytes_out.peer_write.part3uncore ioThis event is deprecated. Refer to new event UNC_IIO_DATA_REQ_BY_CPU.PEER_WRITE.PART3event=0xc0,ch_mask=8,fc_mask=7,umask=211unc_iio_payload_bytes_out.peer_write.vtd0uncore ioThis event is deprecated. Refer to new event UNC_IIO_DATA_REQ_BY_CPU.PEER_WRITE.VTD0event=0xc0,ch_mask=0x10,fc_mask=7,umask=211unc_iio_payload_bytes_out.peer_write.vtd1uncore ioThis event is deprecated. Refer to new event UNC_IIO_DATA_REQ_BY_CPU.PEER_WRITE.VTD1event=0xc0,ch_mask=0x20,fc_mask=7,umask=211unc_iio_symbol_timesuncore ioSymbol Times on Linkevent=0x8201Gen1 - increment once every 4 ns, Gen2 - increment once every 2 ns, Gen3 - increment once every 1 ns
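Note: unc_iio_symbol_times above ticks at the link's symbol rate (one tick per 4 ns at Gen1, 2 ns at Gen2, 1 ns at Gen3), which makes it usable as a time base for the link. A hedged sketch of the conversion, assuming the counter advances at the stated rate for the negotiated link generation; names and sample values are illustrative:

    # Translate an unc_iio_symbol_times delta into seconds of symbol time,
    # using the per-generation tick periods given in the event description.
    SYMBOL_PERIOD_NS = {1: 4.0, 2: 2.0, 3: 1.0}

    def symbol_time_seconds(symbol_count: int, pcie_gen: int) -> float:
        return symbol_count * SYMBOL_PERIOD_NS[pcie_gen] * 1e-9

    # Example: 5e8 ticks on a Gen3 link corresponds to 0.5 s.
    print(symbol_time_seconds(500_000_000, 3))
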
unc_iio_txn_in.atomic.part0uncore ioThis event is deprecatedevent=0x84,ch_mask=1,fc_mask=7,umask=0x1011unc_iio_txn_in.atomic.part1uncore ioThis event is deprecatedevent=0x84,ch_mask=2,fc_mask=7,umask=0x1011unc_iio_txn_in.atomic.part2uncore ioThis event is deprecatedevent=0x84,ch_mask=4,fc_mask=7,umask=0x1011unc_iio_txn_in.atomic.part3uncore ioThis event is deprecatedevent=0x84,ch_mask=8,fc_mask=7,umask=0x1011unc_iio_txn_in.atomic.vtd0uncore ioThis event is deprecatedevent=0x84,ch_mask=0x10,fc_mask=7,umask=0x1011unc_iio_txn_in.atomic.vtd1uncore ioThis event is deprecatedevent=0x84,ch_mask=0x20,fc_mask=7,umask=0x1011unc_iio_txn_in.atomiccmp.part0uncore ioThis event is deprecatedevent=0x84,ch_mask=1,fc_mask=7,umask=0x2011unc_iio_txn_in.atomiccmp.part1uncore ioThis event is deprecatedevent=0x84,ch_mask=2,fc_mask=7,umask=0x2011unc_iio_txn_in.atomiccmp.part2uncore ioThis event is deprecatedevent=0x84,ch_mask=4,fc_mask=7,umask=0x2011unc_iio_txn_in.atomiccmp.part3uncore ioThis event is deprecatedevent=0x84,ch_mask=8,fc_mask=7,umask=0x2011unc_iio_txn_in.mem_read.part0uncore ioThis event is deprecatedevent=0x84,ch_mask=1,fc_mask=7,umask=411unc_iio_txn_in.mem_read.part1uncore ioThis event is deprecatedevent=0x84,ch_mask=2,fc_mask=7,umask=411unc_iio_txn_in.mem_read.part2uncore ioThis event is deprecatedevent=0x84,ch_mask=4,fc_mask=7,umask=411unc_iio_txn_in.mem_read.part3uncore ioThis event is deprecatedevent=0x84,ch_mask=8,fc_mask=7,umask=411unc_iio_txn_in.mem_read.vtd0uncore ioThis event is deprecatedevent=0x84,ch_mask=0x10,fc_mask=7,umask=411unc_iio_txn_in.mem_read.vtd1uncore ioThis event is deprecatedevent=0x84,ch_mask=0x20,fc_mask=7,umask=411unc_iio_txn_in.mem_write.part0uncore ioThis event is deprecated. Refer to new event UNC_IIO_TXN_REQ_OF_CPU.MEM_WRITE.PART0event=0x84,ch_mask=1,fc_mask=7,umask=111unc_iio_txn_in.mem_write.part1uncore ioThis event is deprecated. Refer to new event UNC_IIO_TXN_REQ_OF_CPU.MEM_WRITE.PART1event=0x84,ch_mask=2,fc_mask=7,umask=111unc_iio_txn_in.mem_write.part2uncore ioThis event is deprecated. Refer to new event UNC_IIO_TXN_REQ_OF_CPU.MEM_WRITE.PART2event=0x84,ch_mask=4,fc_mask=7,umask=111unc_iio_txn_in.mem_write.part3uncore ioThis event is deprecated. Refer to new event UNC_IIO_TXN_REQ_OF_CPU.MEM_WRITE.PART3event=0x84,ch_mask=8,fc_mask=7,umask=111unc_iio_txn_in.mem_write.vtd0uncore ioThis event is deprecated. Refer to new event UNC_IIO_TXN_REQ_OF_CPU.MEM_WRITE.VTD0event=0x84,ch_mask=0x10,fc_mask=7,umask=111unc_iio_txn_in.mem_write.vtd1uncore ioThis event is deprecated. Refer to new event UNC_IIO_TXN_REQ_OF_CPU.MEM_WRITE.VTD1event=0x84,ch_mask=0x20,fc_mask=7,umask=111unc_iio_txn_in.msg.part0uncore ioThis event is deprecatedevent=0x84,ch_mask=1,fc_mask=7,umask=0x4011unc_iio_txn_in.msg.part1uncore ioThis event is deprecatedevent=0x84,ch_mask=2,fc_mask=7,umask=0x4011unc_iio_txn_in.msg.part2uncore ioThis event is deprecatedevent=0x84,ch_mask=4,fc_mask=7,umask=0x4011unc_iio_txn_in.msg.part3uncore ioThis event is deprecatedevent=0x84,ch_mask=8,fc_mask=7,umask=0x4011unc_iio_txn_in.msg.vtd0uncore ioThis event is deprecatedevent=0x84,ch_mask=0x10,fc_mask=7,umask=0x4011unc_iio_txn_in.msg.vtd1uncore ioThis event is deprecatedevent=0x84,ch_mask=0x20,fc_mask=7,umask=0x4011unc_iio_txn_in.peer_read.part0uncore ioThis event is deprecatedevent=0x84,ch_mask=1,fc_mask=7,umask=811unc_iio_txn_in.peer_read.part1uncore ioThis event is deprecatedevent=0x84,ch_mask=2,fc_mask=7,umask=811unc_iio_txn_in.peer_read.part2uncore ioThis event is deprecatedevent=0x84,ch_mask=4,fc_mask=7,umask=811unc_iio_txn_in.peer_read.part3uncore ioThis event is deprecatedevent=0x84,ch_mask=8,fc_mask=7,umask=811unc_iio_txn_in.peer_read.vtd0uncore ioThis event is deprecatedevent=0x84,ch_mask=0x10,fc_mask=7,umask=811unc_iio_txn_in.peer_read.vtd1uncore ioThis event is deprecatedevent=0x84,ch_mask=0x20,fc_mask=7,umask=811unc_iio_txn_in.peer_write.part0uncore ioThis event is deprecatedevent=0x84,ch_mask=1,fc_mask=7,umask=211unc_iio_txn_in.peer_write.part1uncore ioThis event is deprecatedevent=0x84,ch_mask=2,fc_mask=7,umask=211unc_iio_txn_in.peer_write.part2uncore ioThis event is deprecatedevent=0x84,ch_mask=4,fc_mask=7,umask=211unc_iio_txn_in.peer_write.part3uncore ioThis event is deprecatedevent=0x84,ch_mask=8,fc_mask=7,umask=211unc_iio_txn_in.peer_write.vtd0uncore ioThis event is deprecatedevent=0x84,ch_mask=0x10,fc_mask=7,umask=211unc_iio_txn_in.peer_write.vtd1uncore ioThis event is deprecatedevent=0x84,ch_mask=0x20,fc_mask=7,umask=211unc_iio_txn_out.cfg_read.part0uncore ioThis event is deprecated. Refer to new event UNC_IIO_TXN_REQ_BY_CPU.CFG_READ.PART0event=0xc1,ch_mask=1,fc_mask=7,umask=0x4011unc_iio_txn_out.cfg_read.part1uncore ioThis event is deprecated. Refer to new event UNC_IIO_TXN_REQ_BY_CPU.CFG_READ.PART1event=0xc1,ch_mask=2,fc_mask=7,umask=0x4011unc_iio_txn_out.cfg_read.part2uncore ioThis event is deprecated. Refer to new event UNC_IIO_TXN_REQ_BY_CPU.CFG_READ.PART2event=0xc1,ch_mask=4,fc_mask=7,umask=0x4011unc_iio_txn_out.cfg_read.part3uncore ioThis event is deprecated. Refer to new event UNC_IIO_TXN_REQ_BY_CPU.CFG_READ.PART3event=0xc1,ch_mask=8,fc_mask=7,umask=0x4011unc_iio_txn_out.cfg_read.vtd0uncore ioThis event is deprecated. 
Refer to new event UNC_IIO_TXN_REQ_BY_CPU.CFG_READ.VTD0event=0xc1,ch_mask=0x10,fc_mask=7,umask=0x4011unc_iio_txn_out.cfg_read.vtd1uncore ioThis event is deprecated. Refer to new event UNC_IIO_TXN_REQ_BY_CPU.CFG_READ.VTD1event=0xc1,ch_mask=0x20,fc_mask=7,umask=0x4011unc_iio_txn_out.cfg_write.part0uncore ioThis event is deprecated. Refer to new event UNC_IIO_TXN_REQ_BY_CPU.CFG_WRITE.PART0event=0xc1,ch_mask=1,fc_mask=7,umask=0x1011unc_iio_txn_out.cfg_write.part1uncore ioThis event is deprecated. Refer to new event UNC_IIO_TXN_REQ_BY_CPU.CFG_WRITE.PART1event=0xc1,ch_mask=2,fc_mask=7,umask=0x1011unc_iio_txn_out.cfg_write.part2uncore ioThis event is deprecated. Refer to new event UNC_IIO_TXN_REQ_BY_CPU.CFG_WRITE.PART2event=0xc1,ch_mask=4,fc_mask=7,umask=0x1011unc_iio_txn_out.cfg_write.part3uncore ioThis event is deprecated. Refer to new event UNC_IIO_TXN_REQ_BY_CPU.CFG_WRITE.PART3event=0xc1,ch_mask=8,fc_mask=7,umask=0x1011unc_iio_txn_out.cfg_write.vtd0uncore ioThis event is deprecated. Refer to new event UNC_IIO_TXN_REQ_BY_CPU.CFG_WRITE.VTD0event=0xc1,ch_mask=0x10,fc_mask=7,umask=0x1011unc_iio_txn_out.io_read.part0uncore ioThis event is deprecated. Refer to new event UNC_IIO_TXN_REQ_BY_CPU.IO_READ.PART0event=0xc1,ch_mask=1,fc_mask=7,umask=0x8011unc_iio_txn_out.io_read.part1uncore ioThis event is deprecated. Refer to new event UNC_IIO_TXN_REQ_BY_CPU.IO_READ.PART1event=0xc1,ch_mask=2,fc_mask=7,umask=0x8011unc_iio_txn_out.io_read.part2uncore ioThis event is deprecated. Refer to new event UNC_IIO_TXN_REQ_BY_CPU.IO_READ.PART2event=0xc1,ch_mask=4,fc_mask=7,umask=0x8011unc_iio_txn_out.io_read.part3uncore ioThis event is deprecated. Refer to new event UNC_IIO_TXN_REQ_BY_CPU.IO_READ.PART3event=0xc1,ch_mask=8,fc_mask=7,umask=0x8011unc_iio_txn_out.io_read.vtd0uncore ioThis event is deprecated. Refer to new event UNC_IIO_TXN_REQ_BY_CPU.IO_READ.VTD0event=0xc1,ch_mask=0x10,fc_mask=7,umask=0x8011unc_iio_txn_out.io_read.vtd1uncore ioThis event is deprecated. Refer to new event UNC_IIO_TXN_REQ_BY_CPU.IO_READ.VTD1event=0xc1,ch_mask=0x20,fc_mask=7,umask=0x8011unc_iio_txn_out.io_write.part0uncore ioThis event is deprecated. Refer to new event UNC_IIO_TXN_REQ_BY_CPU.IO_WRITE.PART0event=0xc1,ch_mask=1,fc_mask=7,umask=0x2011unc_iio_txn_out.io_write.part1uncore ioThis event is deprecated. Refer to new event UNC_IIO_TXN_REQ_BY_CPU.IO_WRITE.PART1event=0xc1,ch_mask=2,fc_mask=7,umask=0x2011unc_iio_txn_out.io_write.part2uncore ioThis event is deprecated. Refer to new event UNC_IIO_TXN_REQ_BY_CPU.IO_WRITE.PART2event=0xc1,ch_mask=4,fc_mask=7,umask=0x2011unc_iio_txn_out.io_write.part3uncore ioThis event is deprecated. Refer to new event UNC_IIO_TXN_REQ_BY_CPU.IO_WRITE.PART3event=0xc1,ch_mask=8,fc_mask=7,umask=0x2011unc_iio_txn_out.io_write.vtd0uncore ioThis event is deprecated. Refer to new event UNC_IIO_TXN_REQ_BY_CPU.IO_WRITE.VTD0event=0xc1,ch_mask=0x10,fc_mask=7,umask=0x2011unc_iio_txn_out.io_write.vtd1uncore ioThis event is deprecated. Refer to new event UNC_IIO_TXN_REQ_BY_CPU.IO_WRITE.VTD1event=0xc1,ch_mask=0x20,fc_mask=7,umask=0x2011unc_iio_txn_out.mem_read.part0uncore ioThis event is deprecated. Refer to new event UNC_IIO_TXN_REQ_BY_CPU.MEM_READ.PART0event=0xc1,ch_mask=1,fc_mask=7,umask=411unc_iio_txn_out.mem_read.part1uncore ioThis event is deprecated. Refer to new event UNC_IIO_TXN_REQ_BY_CPU.MEM_READ.PART1event=0xc1,ch_mask=2,fc_mask=7,umask=411unc_iio_txn_out.mem_read.part2uncore ioThis event is deprecated. 
Refer to new event UNC_IIO_TXN_REQ_BY_CPU.MEM_READ.PART2event=0xc1,ch_mask=4,fc_mask=7,umask=411unc_iio_txn_out.mem_read.part3uncore ioThis event is deprecated. Refer to new event UNC_IIO_TXN_REQ_BY_CPU.MEM_READ.PART3event=0xc1,ch_mask=8,fc_mask=7,umask=411unc_iio_txn_out.mem_read.vtd0uncore ioThis event is deprecated. Refer to new event UNC_IIO_TXN_REQ_BY_CPU.MEM_READ.VTD0event=0xc1,ch_mask=0x10,fc_mask=7,umask=411unc_iio_txn_out.mem_read.vtd1uncore ioThis event is deprecated. Refer to new event UNC_IIO_TXN_REQ_BY_CPU.MEM_READ.VTD1event=0xc1,ch_mask=0x20,fc_mask=7,umask=411unc_iio_txn_out.mem_write.part0uncore ioThis event is deprecated. Refer to new event UNC_IIO_TXN_REQ_BY_CPU.MEM_WRITE.PART0event=0xc1,ch_mask=1,fc_mask=7,umask=111unc_iio_txn_out.mem_write.part1uncore ioThis event is deprecated. Refer to new event UNC_IIO_TXN_REQ_BY_CPU.MEM_WRITE.PART1event=0xc1,ch_mask=2,fc_mask=7,umask=111unc_iio_txn_out.mem_write.part2uncore ioThis event is deprecated. Refer to new event UNC_IIO_TXN_REQ_BY_CPU.MEM_WRITE.PART2event=0xc1,ch_mask=4,fc_mask=7,umask=111unc_iio_txn_out.mem_write.part3uncore ioThis event is deprecated. Refer to new event UNC_IIO_TXN_REQ_BY_CPU.MEM_WRITE.PART3event=0xc1,ch_mask=8,fc_mask=7,umask=111unc_iio_txn_out.mem_write.vtd0uncore ioThis event is deprecated. Refer to new event UNC_IIO_TXN_REQ_BY_CPU.MEM_WRITE.VTD0event=0xc1,ch_mask=0x10,fc_mask=7,umask=111unc_iio_txn_out.mem_write.vtd1uncore ioThis event is deprecated. Refer to new event UNC_IIO_TXN_REQ_BY_CPU.MEM_WRITE.VTD1event=0xc1,ch_mask=0x20,fc_mask=7,umask=111unc_iio_txn_out.peer_read.part0uncore ioThis event is deprecated. Refer to new event UNC_IIO_TXN_REQ_BY_CPU.PEER_READ.PART0event=0xc1,ch_mask=1,fc_mask=7,umask=811unc_iio_txn_out.peer_read.part1uncore ioThis event is deprecated. Refer to new event UNC_IIO_TXN_REQ_BY_CPU.PEER_READ.PART1event=0xc1,ch_mask=2,fc_mask=7,umask=811unc_iio_txn_out.peer_read.part2uncore ioThis event is deprecated. Refer to new event UNC_IIO_TXN_REQ_BY_CPU.PEER_READ.PART2event=0xc1,ch_mask=4,fc_mask=7,umask=811unc_iio_txn_out.peer_read.part3uncore ioThis event is deprecated. Refer to new event UNC_IIO_TXN_REQ_BY_CPU.PEER_READ.PART3event=0xc1,ch_mask=8,fc_mask=7,umask=811unc_iio_txn_out.peer_read.vtd0uncore ioThis event is deprecated. Refer to new event UNC_IIO_TXN_REQ_BY_CPU.PEER_READ.VTD0event=0xc1,ch_mask=0x10,fc_mask=7,umask=811unc_iio_txn_out.peer_read.vtd1uncore ioThis event is deprecated. Refer to new event UNC_IIO_TXN_REQ_BY_CPU.PEER_READ.VTD1event=0xc1,ch_mask=0x20,fc_mask=7,umask=811unc_iio_txn_out.peer_write.part0uncore ioThis event is deprecated. Refer to new event UNC_IIO_TXN_REQ_BY_CPU.PEER_WRITE.PART0event=0xc1,ch_mask=1,fc_mask=7,umask=211unc_iio_txn_out.peer_write.part1uncore ioThis event is deprecated. Refer to new event UNC_IIO_TXN_REQ_BY_CPU.PEER_WRITE.PART1event=0xc1,ch_mask=2,fc_mask=7,umask=211unc_iio_txn_out.peer_write.part2uncore ioThis event is deprecated. Refer to new event UNC_IIO_TXN_REQ_BY_CPU.PEER_WRITE.PART2event=0xc1,ch_mask=4,fc_mask=7,umask=211unc_iio_txn_out.peer_write.part3uncore ioThis event is deprecated. Refer to new event UNC_IIO_TXN_REQ_BY_CPU.PEER_WRITE.PART3event=0xc1,ch_mask=8,fc_mask=7,umask=211unc_iio_txn_out.peer_write.vtd0uncore ioThis event is deprecated. Refer to new event UNC_IIO_TXN_REQ_BY_CPU.PEER_WRITE.VTD0event=0xc1,ch_mask=0x10,fc_mask=7,umask=211unc_iio_txn_out.peer_write.vtd1uncore ioThis event is deprecated. 
Refer to new event UNC_IIO_TXN_REQ_BY_CPU.PEER_WRITE.VTD1event=0xc1,ch_mask=0x20,fc_mask=7,umask=211unc_iio_txn_req_by_cpu.cfg_read.part0uncore ioNumber Transactions requested by the CPU; Core reading from Card's PCICFG spaceevent=0xc1,ch_mask=1,fc_mask=7,umask=0x4001Also known as Outbound.  Number of requests, to the attached device, initiated by the main die.; x16 card plugged in to stack, Or x8 card plugged in to Lane 0/1, Or x4 card is plugged in to slot 0unc_iio_txn_req_by_cpu.cfg_read.part1uncore ioNumber Transactions requested by the CPU; Core reading from Card's PCICFG spaceevent=0xc1,ch_mask=2,fc_mask=7,umask=0x4001Also known as Outbound.  Number of requests, to the attached device, initiated by the main die.; x4 card is plugged in to slot 1unc_iio_txn_req_by_cpu.cfg_read.part2uncore ioNumber Transactions requested by the CPU; Core reading from Card's PCICFG spaceevent=0xc1,ch_mask=4,fc_mask=7,umask=0x4001Also known as Outbound.  Number of requests, to the attached device, initiated by the main die.; x8 card plugged in to Lane 2/3, Or x4 card is plugged in to slot 2unc_iio_txn_req_by_cpu.cfg_read.part3uncore ioNumber Transactions requested by the CPU; Core reading from Card's PCICFG spaceevent=0xc1,ch_mask=8,fc_mask=7,umask=0x4001Also known as Outbound.  Number of requests, to the attached device, initiated by the main die.; x4 card is plugged in to slot 3unc_iio_txn_req_by_cpu.cfg_read.vtd0uncore ioNumber Transactions requested by the CPU; Core reading from Card's PCICFG spaceevent=0xc1,ch_mask=0x10,fc_mask=7,umask=0x4001Also known as Outbound.  Number of requests, to the attached device, initiated by the main die.; VTd - Type 0unc_iio_txn_req_by_cpu.cfg_read.vtd1uncore ioNumber Transactions requested by the CPU; Core reading from Card's PCICFG spaceevent=0xc1,ch_mask=0x20,fc_mask=7,umask=0x4001Also known as Outbound.  Number of requests, to the attached device, initiated by the main die.; VTd - Type 1unc_iio_txn_req_by_cpu.cfg_write.part0uncore ioNumber Transactions requested by the CPU; Core writing to Card's PCICFG spaceevent=0xc1,ch_mask=1,fc_mask=7,umask=0x1001Also known as Outbound.  Number of requests, to the attached device, initiated by the main die.; x16 card plugged in to stack, Or x8 card plugged in to Lane 0/1, Or x4 card is plugged in to slot 0unc_iio_txn_req_by_cpu.cfg_write.part1uncore ioNumber Transactions requested by the CPU; Core writing to Card's PCICFG spaceevent=0xc1,ch_mask=2,fc_mask=7,umask=0x1001Also known as Outbound.  Number of requests, to the attached device, initiated by the main die.; x4 card is plugged in to slot 1unc_iio_txn_req_by_cpu.cfg_write.part2uncore ioNumber Transactions requested by the CPU; Core writing to Card's PCICFG spaceevent=0xc1,ch_mask=4,fc_mask=7,umask=0x1001Also known as Outbound.  Number of requests, to the attached device, initiated by the main die.; x8 card plugged in to Lane 2/3, Or x4 card is plugged in to slot 2unc_iio_txn_req_by_cpu.cfg_write.part3uncore ioNumber Transactions requested by the CPU; Core writing to Card's PCICFG spaceevent=0xc1,ch_mask=8,fc_mask=7,umask=0x1001Also known as Outbound.  Number of requests, to the attached device, initiated by the main die.; x4 card is plugged in to slot 3unc_iio_txn_req_by_cpu.cfg_write.vtd0uncore ioNumber Transactions requested by the CPU; Core writing to Card's PCICFG spaceevent=0xc1,ch_mask=0x10,fc_mask=7,umask=0x1001Also known as Outbound.  
Number of requests, to the attached device, initiated by the main die.; VTd - Type 0unc_iio_txn_req_by_cpu.cfg_write.vtd1uncore ioNumber Transactions requested by the CPU; Core writing to Card's PCICFG spaceevent=0xc1,ch_mask=0x20,fc_mask=7,umask=0x1001Also known as Outbound.  Number of requests, to the attached device, initiated by the main die.; VTd - Type 1unc_iio_txn_req_by_cpu.io_read.part0uncore ioNumber Transactions requested by the CPU; Core reading from Card's IO spaceevent=0xc1,ch_mask=1,fc_mask=7,umask=0x8001Also known as Outbound.  Number of requests, to the attached device, initiated by the main die.; x16 card plugged in to stack, Or x8 card plugged in to Lane 0/1, Or x4 card is plugged in to slot 0unc_iio_txn_req_by_cpu.io_read.part1uncore ioNumber Transactions requested by the CPU; Core reading from Card's IO spaceevent=0xc1,ch_mask=2,fc_mask=7,umask=0x8001Also known as Outbound.  Number of requests, to the attached device, initiated by the main die.; x4 card is plugged in to slot 1unc_iio_txn_req_by_cpu.io_read.part2uncore ioNumber Transactions requested by the CPU; Core reading from Card's IO spaceevent=0xc1,ch_mask=4,fc_mask=7,umask=0x8001Also known as Outbound.  Number of requests, to the attached device, initiated by the main die.; x8 card plugged in to Lane 2/3, Or x4 card is plugged in to slot 2unc_iio_txn_req_by_cpu.io_read.part3uncore ioNumber Transactions requested by the CPU; Core reading from Card's IO spaceevent=0xc1,ch_mask=8,fc_mask=7,umask=0x8001Also known as Outbound.  Number of requests, to the attached device, initiated by the main die.; x4 card is plugged in to slot 3unc_iio_txn_req_by_cpu.io_read.vtd0uncore ioNumber Transactions requested by the CPU; Core reading from Card's IO spaceevent=0xc1,ch_mask=0x10,fc_mask=7,umask=0x8001Also known as Outbound.  Number of requests, to the attached device, initiated by the main die.; VTd - Type 0unc_iio_txn_req_by_cpu.io_read.vtd1uncore ioNumber Transactions requested by the CPU; Core reading from Card's IO spaceevent=0xc1,ch_mask=0x20,fc_mask=7,umask=0x8001Also known as Outbound.  Number of requests, to the attached device, initiated by the main die.; VTd - Type 1unc_iio_txn_req_by_cpu.io_write.part0uncore ioNumber Transactions requested by the CPU; Core writing to Card's IO spaceevent=0xc1,ch_mask=1,fc_mask=7,umask=0x2001Also known as Outbound.  Number of requests, to the attached device, initiated by the main die.; x16 card plugged in to stack, Or x8 card plugged in to Lane 0/1, Or x4 card is plugged in to slot 0unc_iio_txn_req_by_cpu.io_write.part1uncore ioNumber Transactions requested by the CPU; Core writing to Card's IO spaceevent=0xc1,ch_mask=2,fc_mask=7,umask=0x2001Also known as Outbound.  Number of requests, to the attached device, initiated by the main die.; x4 card is plugged in to slot 1unc_iio_txn_req_by_cpu.io_write.part2uncore ioNumber Transactions requested by the CPU; Core writing to Card's IO spaceevent=0xc1,ch_mask=4,fc_mask=7,umask=0x2001Also known as Outbound.  Number of requests, to the attached device, initiated by the main die.; x8 card plugged in to Lane 2/3, Or x4 card is plugged in to slot 2unc_iio_txn_req_by_cpu.io_write.part3uncore ioNumber Transactions requested by the CPU; Core writing to Card's IO spaceevent=0xc1,ch_mask=8,fc_mask=7,umask=0x2001Also known as Outbound.  
Number of requests, to the attached device, initiated by the main die.; x4 card is plugged in to slot 3unc_iio_txn_req_by_cpu.io_write.vtd0uncore ioNumber Transactions requested by the CPU; Core writing to Card's IO spaceevent=0xc1,ch_mask=0x10,fc_mask=7,umask=0x2001Also known as Outbound.  Number of requests, to the attached device, initiated by the main die.; VTd - Type 0unc_iio_txn_req_by_cpu.io_write.vtd1uncore ioNumber Transactions requested by the CPU; Core writing to Card's IO spaceevent=0xc1,ch_mask=0x20,fc_mask=7,umask=0x2001Also known as Outbound.  Number of requests, to the attached device, initiated by the main die.; VTd - Type 1unc_iio_txn_req_by_cpu.mem_read.part0uncore ioRead request for up to a 64 byte transaction is made by the CPU to IIO Part0event=0xc1,ch_mask=1,fc_mask=7,umask=401Counts every read request for up to a 64 byte transaction of data made by a unit on the main die (generally a core) or by another IIO unit to the MMIO space of a card on IIO Part0. In the general case, part0 refers to a standard PCIe card of any size (x16,x8,x4) that is plugged directly into one of the PCIe slots. Part0 could also refer to any device plugged into the first slot of a PCIe riser card or to a device attached to the IIO unit which starts its use of the bus using lane 0 of the 16 lanes supported by the busunc_iio_txn_req_by_cpu.mem_read.part1uncore ioRead request for up to a 64 byte transaction is made by the CPU to IIO Part1event=0xc1,ch_mask=2,fc_mask=7,umask=401Counts every read request for up to a 64 byte transaction of data made by a unit on the main die (generally a core) or by another IIO unit to the MMIO space of a card on IIO Part1. In the general case, Part1 refers to a x4 PCIe card plugged into the second slot of a PCIe riser card, but it could refer to any x4 device attached to the IIO unit using lanes starting at lane 4 of the 16 lanes supported by the busunc_iio_txn_req_by_cpu.mem_read.part2uncore ioRead request for up to a 64 byte transaction is made by the CPU to IIO Part2event=0xc1,ch_mask=4,fc_mask=7,umask=401Counts every read request for up to a 64 byte transaction of data made by a unit on the main die (generally a core) or by another IIO unit to the MMIO space of a card on IIO Part2. In the general case, Part2 refers to a x4 or x8 PCIe card plugged into the third slot of a PCIe riser card, but it could refer to any x4 or x8 device attached to the IIO unit and using lanes starting at lane 8 of the 16 lanes supported by the busunc_iio_txn_req_by_cpu.mem_read.part3uncore ioRead request for up to a 64 byte transaction is made by the CPU to IIO Part3event=0xc1,ch_mask=8,fc_mask=7,umask=401Counts every read request for up to a 64 byte transaction of data made by a unit on the main die (generally a core) or by another IIO unit to the MMIO space of a card on IIO Part3. In the general case, Part3 refers to a x4 PCIe card plugged into the fourth slot of a PCIe riser card, but it could refer to any device attached to the IIO unit using the lanes starting at lane 12 of the 16 lanes supported by the busunc_iio_txn_req_by_cpu.mem_read.vtd0uncore ioNumber Transactions requested by the CPU; Core reading from Card's MMIO spaceevent=0xc1,ch_mask=0x10,fc_mask=7,umask=401Also known as Outbound.  Number of requests, to the attached device, initiated by the main die.; VTd - Type 0unc_iio_txn_req_by_cpu.mem_read.vtd1uncore ioNumber Transactions requested by the CPU; Core reading from Card's MMIO spaceevent=0xc1,ch_mask=0x20,fc_mask=7,umask=401Also known as Outbound.  
Number of requests, to the attached device, initiated by the main die.; VTd - Type 1unc_iio_txn_req_by_cpu.mem_write.part0uncore ioWrite request of up to a 64 byte transaction is made to IIO Part0 by the CPUevent=0xc1,ch_mask=1,fc_mask=7,umask=101Counts every write request of up to a 64 byte transaction of data made to the MMIO space of a card on IIO Part0 by a unit on the main die (generally a core) or by another IIO unit. In the general case, Part0 refers to a standard PCIe card of any size (x16,x8,x4) that is plugged directly into one of the PCIe slots. Part0 could also refer to any device plugged into the first slot of a PCIe riser card or to a device attached to the IIO unit which starts its use of the bus using lane 0 of the 16 lanes supported by the busunc_iio_txn_req_by_cpu.mem_write.part1uncore ioWrite request of up to a 64 byte transaction is made to IIO Part1 by the CPUevent=0xc1,ch_mask=2,fc_mask=7,umask=101Counts every write request of up to a 64 byte transaction of data made to the MMIO space of a card on IIO Part1 by a unit on the main die (generally a core) or by another IIO unit. In the general case, Part1 refers to a x4 PCIe card plugged into the second slot of a PCIe riser card, but it could refer to any x4 device attached to the IIO unit using lanes starting at lane 4 of the 16 lanes supported by the busunc_iio_txn_req_by_cpu.mem_write.part2uncore ioWrite request of up to a 64 byte transaction is made to IIO Part2 by the CPUevent=0xc1,ch_mask=4,fc_mask=7,umask=101Counts every write request of up to a 64 byte transaction of data made to the MMIO space of a card on IIO Part2 by a unit on the main die (generally a core) or by another IIO unit. In the general case, Part2 refers to a x4 or x8 PCIe card plugged into the third slot of a PCIe riser card, but it could refer to any x4 or x8 device attached to the IIO unit and using lanes starting at lane 8 of the 16 lanes supported by the busunc_iio_txn_req_by_cpu.mem_write.part3uncore ioWrite request of up to a 64 byte transaction is made to IIO Part3 by the CPUevent=0xc1,ch_mask=8,fc_mask=7,umask=101Counts every write request of up to a 64 byte transaction of data made to the MMIO space of a card on IIO Part3 by a unit on the main die (generally a core) or by another IIO unit. In the general case, Part3 refers to a x4 PCIe card plugged into the fourth slot of a PCIe riser card, but it could refer to any device attached to the IIO unit using the lanes starting at lane 12 of the 16 lanes supported by the busunc_iio_txn_req_by_cpu.mem_write.vtd0uncore ioNumber Transactions requested by the CPU; Core writing to Card's MMIO spaceevent=0xc1,ch_mask=0x10,fc_mask=7,umask=101Also known as Outbound.  Number of requests, to the attached device, initiated by the main die.; VTd - Type 0unc_iio_txn_req_by_cpu.mem_write.vtd1uncore ioNumber Transactions requested by the CPU; Core writing to Card's MMIO spaceevent=0xc1,ch_mask=0x20,fc_mask=7,umask=101Also known as Outbound.  Number of requests, to the attached device, initiated by the main die.; VTd - Type 1
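Note: the unc_iio_txn_req_by_cpu.mem_write.* events above count transactions of up to 64 bytes, while the corresponding unc_iio_data_req_by_cpu.mem_write.* events referenced earlier count 4-byte double words, so the ratio of the two gives the average payload per CPU-to-card MMIO write. A hedged sketch of that calculation (the helper name and sample counts are illustrative):

    # Average MMIO write payload: DWORD count (x4 bytes) over transaction count.
    # 16 DWORDs per transaction would indicate full 64-byte line writes.
    def avg_mmio_write_payload_bytes(dword_count: int, txn_count: int) -> float:
        return 0.0 if txn_count == 0 else dword_count * 4 / txn_count

    # Example: 1.6e6 DWORDs across 1e5 transactions -> 64.0 bytes per write.
    print(avg_mmio_write_payload_bytes(1_600_000, 100_000))
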
unc_iio_txn_req_by_cpu.peer_read.part0uncore ioPeer to peer read request for up to a 64 byte transaction is made by a different IIO unit to IIO Part0event=0xc1,ch_mask=1,fc_mask=7,umask=801Counts every peer to peer read request for up to a 64 byte transaction of data made by a different IIO unit to the MMIO space of a card on IIO Part0. Does not include requests made by the same IIO unit. In the general case, part0 refers to a standard PCIe card of any size (x16,x8,x4) that is plugged directly into one of the PCIe slots. Part0 could also refer to any device plugged into the first slot of a PCIe riser card or to a device attached to the IIO unit which starts its use of the bus using lane 0 of the 16 lanes supported by the busunc_iio_txn_req_by_cpu.peer_read.part1uncore ioPeer to peer read request for up to a 64 byte transaction is made by a different IIO unit to IIO Part1event=0xc1,ch_mask=2,fc_mask=7,umask=801Counts every peer to peer read request for up to a 64 byte transaction of data made by a different IIO unit to the MMIO space of a card on IIO Part1. Does not include requests made by the same IIO unit. In the general case, Part1 refers to a x4 PCIe card plugged into the second slot of a PCIe riser card, but it could refer to any x4 device attached to the IIO unit using lanes starting at lane 4 of the 16 lanes supported by the busunc_iio_txn_req_by_cpu.peer_read.part2uncore ioPeer to peer read request for up to a 64 byte transaction is made by a different IIO unit to IIO Part2event=0xc1,ch_mask=4,fc_mask=7,umask=801Counts every peer to peer read request for up to a 64 byte transaction of data made by a different IIO unit to the MMIO space of a card on IIO Part2. Does not include requests made by the same IIO unit. In the general case, Part2 refers to a x4 or x8 PCIe card plugged into the third slot of a PCIe riser card, but it could refer to any x4 or x8 device attached to the IIO unit and using lanes starting at lane 8 of the 16 lanes supported by the busunc_iio_txn_req_by_cpu.peer_read.part3uncore ioPeer to peer read request for up to a 64 byte transaction is made by a different IIO unit to IIO Part3event=0xc1,ch_mask=8,fc_mask=7,umask=801Counts every peer to peer read request for up to a 64 byte transaction of data made by a different IIO unit to the MMIO space of a card on IIO Part3. Does not include requests made by the same IIO unit. In the general case, Part3 refers to a x4 PCIe card plugged into the fourth slot of a PCIe riser card, but it could refer to any device attached to the IIO unit using the lanes starting at lane 12 of the 16 lanes supported by the busunc_iio_txn_req_by_cpu.peer_read.vtd0uncore ioNumber Transactions requested by the CPU; Another card (different IIO stack) reading from this cardevent=0xc1,ch_mask=0x10,fc_mask=7,umask=801Also known as Outbound.  Number of requests, to the attached device, initiated by the main die.; VTd - Type 0unc_iio_txn_req_by_cpu.peer_read.vtd1uncore ioNumber Transactions requested by the CPU; Another card (different IIO stack) reading from this cardevent=0xc1,ch_mask=0x20,fc_mask=7,umask=801Also known as Outbound.  Number of requests, to the attached device, initiated by the main die.; VTd - Type 1unc_iio_txn_req_by_cpu.peer_write.part0uncore ioPeer to peer write request of up to a 64 byte transaction is made to IIO Part0 by a different IIO unitevent=0xc1,ch_mask=1,fc_mask=7,umask=201Counts every peer to peer write request of up to a 64 byte transaction of data made to the MMIO space of a card on IIO Part0 by a different IIO unit. Does not include requests made by the same IIO unit. In the general case, Part0 refers to a standard PCIe card of any size (x16,x8,x4) that is plugged directly into one of the PCIe slots. 
Part0 could also refer to any device plugged into the first slot of a PCIe riser card or to a device attached to the IIO unit which starts its use of the bus using lane 0 of the 16 lanes supported by the busunc_iio_txn_req_by_cpu.peer_write.part1uncore ioPeer to peer write request of up to a 64 byte transaction is made to IIO Part1 by a different IIO unitevent=0xc1,ch_mask=2,fc_mask=7,umask=201Counts every peer to peer write request of up to a 64 byte transaction of data made to the MMIO space of a card on IIO Part1 by a different IIO unit. Does not include requests made by the same IIO unit. In the general case, Part1 refers to a x4 PCIe card plugged into the second slot of a PCIe riser card, but it could refer to any x4 device attached to the IIO unit using lanes starting at lane 4 of the 16 lanes supported by the busunc_iio_txn_req_by_cpu.peer_write.part2uncore ioPeer to peer write request of up to a 64 byte transaction is made to IIO Part2 by a different IIO unitevent=0xc1,ch_mask=4,fc_mask=7,umask=201Counts every peer to peer write request of up to a 64 byte transaction of data made to the MMIO space of a card on IIO Part2 by a different IIO unit. Does not include requests made by the same IIO unit. In the general case, Part2 refers to a x4 or x8 PCIe card plugged into the third slot of a PCIe riser card, but it could refer to any x4 or x8 device attached to the IIO unit and using lanes starting at lane 8 of the 16 lanes supported by the busunc_iio_txn_req_by_cpu.peer_write.part3uncore ioPeer to peer write request of up to a 64 byte transaction is made to IIO Part3 by a different IIO unitevent=0xc1,ch_mask=8,fc_mask=7,umask=201Counts every peer to peer write request of up to a 64 byte transaction of data made to the MMIO space of a card on IIO Part3 by a different IIO unit. Does not include requests made by the same IIO unit. In the general case, Part3 refers to a x4 PCIe card plugged into the fourth slot of a PCIe riser card, but it could refer to any device attached to the IIO unit using the lanes starting at lane 12 of the 16 lanes supported by the busunc_iio_txn_req_by_cpu.peer_write.vtd0uncore ioNumber Transactions requested by the CPU; Another card (different IIO stack) writing to this cardevent=0xc1,ch_mask=0x10,fc_mask=7,umask=201Also known as Outbound.  Number of requests, to the attached device, initiated by the main die.; VTd - Type 0unc_iio_txn_req_by_cpu.peer_write.vtd1uncore ioNumber Transactions requested by the CPU; Another card (different IIO stack) writing to this cardevent=0xc1,ch_mask=0x20,fc_mask=7,umask=201Also known as Outbound.  Number of requests, to the attached device, initiated by the main die.; VTd - Type 1unc_iio_txn_req_of_cpu.atomic.part0uncore ioNumber Transactions requested of the CPU; Atomic requests targeting DRAMevent=0x84,ch_mask=1,fc_mask=7,umask=0x1001Also known as Inbound.  Number of 64 byte cache line requests initiated by the attached device.; x16 card plugged in to stack, Or x8 card plugged in to Lane 0/1, Or x4 card is plugged in to slot 0unc_iio_txn_req_of_cpu.atomic.part1uncore ioNumber Transactions requested of the CPU; Atomic requests targeting DRAMevent=0x84,ch_mask=2,fc_mask=7,umask=0x1001Also known as Inbound.  Number of 64 byte cache line requests initiated by the attached device.; x4 card is plugged in to slot 1unc_iio_txn_req_of_cpu.atomic.part2uncore ioNumber Transactions requested of the CPU; Atomic requests targeting DRAMevent=0x84,ch_mask=4,fc_mask=7,umask=0x1001Also known as Inbound.  
Number of 64 byte cache line requests initiated by the attached device.; x8 card plugged in to Lane 2/3, Or x4 card is plugged in to slot 2unc_iio_txn_req_of_cpu.atomic.part3uncore ioNumber Transactions requested of the CPU; Atomic requests targeting DRAMevent=0x84,ch_mask=8,fc_mask=7,umask=0x1001Also known as Inbound.  Number of 64 byte cache line requests initiated by the attached device.; x4 card is plugged in to slot 3unc_iio_txn_req_of_cpu.atomic.vtd0uncore ioNumber Transactions requested of the CPU; Atomic requests targeting DRAMevent=0x84,ch_mask=0x10,fc_mask=7,umask=0x1001Also known as Inbound.  Number of 64 byte cache line requests initiated by the attached device.; VTd - Type 0unc_iio_txn_req_of_cpu.atomic.vtd1uncore ioNumber Transactions requested of the CPU; Atomic requests targeting DRAMevent=0x84,ch_mask=0x20,fc_mask=7,umask=0x1001Also known as Inbound.  Number of 64 byte cache line requests initiated by the attached device.; VTd - Type 1unc_iio_txn_req_of_cpu.atomiccmp.part0uncore ioNumber Transactions requested of the CPU; Completion of atomic requests targeting DRAMevent=0x84,ch_mask=1,fc_mask=7,umask=0x2001Also known as Inbound.  Number of 64 byte cache line requests initiated by the attached device.; x16 card plugged in to stack, Or x8 card plugged in to Lane 0/1, Or x4 card is plugged in to slot 0unc_iio_txn_req_of_cpu.atomiccmp.part1uncore ioNumber Transactions requested of the CPU; Completion of atomic requests targeting DRAMevent=0x84,ch_mask=2,fc_mask=7,umask=0x2001Also known as Inbound.  Number of 64 byte cache line requests initiated by the attached device.; x4 card is plugged in to slot 1unc_iio_txn_req_of_cpu.atomiccmp.part2uncore ioNumber Transactions requested of the CPU; Completion of atomic requests targeting DRAMevent=0x84,ch_mask=4,fc_mask=7,umask=0x2001Also known as Inbound.  Number of 64 byte cache line requests initiated by the attached device.; x8 card plugged in to Lane 2/3, Or x4 card is plugged in to slot 2unc_iio_txn_req_of_cpu.atomiccmp.part3uncore ioNumber Transactions requested of the CPU; Completion of atomic requests targeting DRAMevent=0x84,ch_mask=8,fc_mask=7,umask=0x2001Also known as Inbound.  Number of 64 byte cache line requests initiated by the attached device.; x4 card is plugged in to slot 3unc_iio_txn_req_of_cpu.mem_read.part0uncore ioRead request for up to a 64 byte transaction is made by IIO Part0 to Memoryevent=0x84,ch_mask=1,fc_mask=7,umask=401Counts every read request for up to a 64 byte transaction of data made by IIO Part0 to a unit on the main die (generally memory). In the general case, Part0 refers to a standard PCIe card of any size (x16,x8,x4) that is plugged directly into one of the PCIe slots. Part0 could also refer to any device plugged into the first slot of a PCIe riser card or to a device attached to the IIO unit which starts its use of the bus using lane 0 of the 16 lanes supported by the busunc_iio_txn_req_of_cpu.mem_read.part1uncore ioRead request for up to a 64 byte transaction is made by IIO Part1 to Memoryevent=0x84,ch_mask=2,fc_mask=7,umask=401Counts every read request for up to a 64 byte transaction of data made by IIO Part1 to a unit on the main die (generally memory). In the general case, Part1 refers to a x4 PCIe card plugged into the second slot of a PCIe riser card, but it could refer to any x4 device attached to the IIO unit using lanes starting at lane 4 of the 16 lanes supported by the bus
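Note: these encodings (event, ch_mask, fc_mask, umask) are programmed per IIO stack. On Linux, each stack typically appears as its own uncore PMU under /sys/bus/event_source/devices (commonly uncore_iio_0, uncore_iio_1, and so on for this CPU family, though the naming is not guaranteed), and perf accepts either the symbolic event names in this table or a raw encoding such as uncore_iio_0/event=0x84,ch_mask=0x01,fc_mask=0x07,umask=0x04/. A hedged sketch that just enumerates whatever IIO PMUs the running kernel exposes:

    # List the IIO uncore PMUs the kernel exposes; the uncore_iio_N naming
    # is typical for this CPU family but is an assumption, not guaranteed.
    import glob, os

    for pmu in sorted(glob.glob('/sys/bus/event_source/devices/uncore_iio_[0-9]*')):
        with open(os.path.join(pmu, 'type')) as f:  # perf event type id for this PMU
            print(os.path.basename(pmu), '-> type', f.read().strip())
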
In the general case, Part1 refers to a x4 PCIe card plugged into the second slot of a PCIe riser card, but it could refer to any x4 device attached to the IIO unit using lanes starting at lane 4 of the 16 lanes supported by the busunc_iio_txn_req_of_cpu.mem_read.part2uncore ioRead request for up to a 64 byte transaction is made by IIO Part2 to Memoryevent=0x84,ch_mask=4,fc_mask=7,umask=401Counts every read request for up to a 64 byte transaction of data made by IIO Part2 to a unit on the main die (generally memory). In the general case, Part2 refers to a x4 or x8 PCIe card plugged into the third slot of a PCIe riser card, but it could refer to any x4 or x8 device attached to the IIO unit and using lanes starting at lane 8 of the 16 lanes supported by the busunc_iio_txn_req_of_cpu.mem_read.part3uncore ioRead request for up to a 64 byte transaction is made by IIO Part3 to Memoryevent=0x84,ch_mask=8,fc_mask=7,umask=401Counts every read request for up to a 64 byte transaction of data made by IIO Part3 to a unit on the main die (generally memory). In the general case, Part3 refers to a x4 PCIe card plugged into the fourth slot of a PCIe riser card, but it could refer to any device attached to the IIO unit using the lanes starting at lane 12 of the 16 lanes supported by the busunc_iio_txn_req_of_cpu.mem_read.vtd0uncore ioNumber of Transactions requested of the CPU; Card reading from DRAMevent=0x84,ch_mask=0x10,fc_mask=7,umask=401Also known as Inbound.  Number of 64 byte cache line requests initiated by the attached device.; VTd - Type 0unc_iio_txn_req_of_cpu.mem_read.vtd1uncore ioNumber of Transactions requested of the CPU; Card reading from DRAMevent=0x84,ch_mask=0x20,fc_mask=7,umask=401Also known as Inbound.  Number of 64 byte cache line requests initiated by the attached device.; VTd - Type 1unc_iio_txn_req_of_cpu.mem_write.part0uncore ioWrite request of up to a 64 byte transaction is made by IIO Part0 to Memoryevent=0x84,ch_mask=1,fc_mask=7,umask=101Counts every write request of up to a 64 byte transaction of data made by IIO Part0 to a unit on the main die (generally memory). In the general case, Part0 refers to a standard PCIe card of any size (x16,x8,x4) that is plugged directly into one of the PCIe slots. Part0 could also refer to any device plugged into the first slot of a PCIe riser card or to a device attached to the IIO unit which starts its use of the bus using lane 0 of the 16 lanes supported by the busunc_iio_txn_req_of_cpu.mem_write.part1uncore ioWrite request of up to a 64 byte transaction is made by IIO Part1 to Memoryevent=0x84,ch_mask=2,fc_mask=7,umask=101Counts every write request of up to a 64 byte transaction of data made by IIO Part1 to a unit on the main die (generally memory). In the general case, Part1 refers to a x4 PCIe card plugged into the second slot of a PCIe riser card, but it could refer to any x4 device attached to the IIO unit using lanes starting at lane 4 of the 16 lanes supported by the busunc_iio_txn_req_of_cpu.mem_write.part2uncore ioWrite request of up to a 64 byte transaction is made by IIO Part2 to Memoryevent=0x84,ch_mask=4,fc_mask=7,umask=101Counts every write request of up to a 64 byte transaction of data made by IIO Part2 to a unit on the main die (generally memory). In the general case, Part2 refers to a x4 or x8 PCIe card plugged into the third slot of a PCIe riser card, but it could refer to any x4 or x8 device attached to the IIO unit and using lanes starting at lane 8 of the 16 lanes supported by the bus
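Because every mem_read/mem_write increment is a transaction of up to 64 bytes, these counts give a ceiling on inbound DMA traffic rather than an exact byte total. A small sketch of that arithmetic, with placeholder counter values standing in for readings of the .part0-.part3 sub-events:

```python
# Rough upper bound on inbound (device-to-memory) traffic for one IIO stack,
# from the unc_iio_txn_req_of_cpu.mem_read/.mem_write counts described above.
# Each increment is a transaction of *up to* 64 bytes, so this is a ceiling,
# not an exact byte count. The sample numbers are placeholders.
counts = {
    "mem_read":  [120_000, 0, 45_000, 0],   # parts 0..3
    "mem_write": [310_000, 0, 12_000, 0],
}

for name, per_part in counts.items():
    total_txns = sum(per_part)
    max_bytes = total_txns * 64   # 64-byte ceiling per transaction
    print(f"{name}: {total_txns} transactions, <= {max_bytes / 1e6:.1f} MB")
```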
unc_iio_txn_req_of_cpu.mem_write.part3uncore ioWrite request of up to a 64 byte transaction is made by IIO Part3 to Memoryevent=0x84,ch_mask=8,fc_mask=7,umask=101Counts every write request of up to a 64 byte transaction of data made by IIO Part3 to a unit on the main die (generally memory). In the general case, Part3 refers to a x4 PCIe card plugged into the fourth slot of a PCIe riser card, but it could refer to any device attached to the IIO unit using the lanes starting at lane 12 of the 16 lanes supported by the busunc_iio_txn_req_of_cpu.mem_write.vtd0uncore ioNumber of Transactions requested of the CPU; Card writing to DRAMevent=0x84,ch_mask=0x10,fc_mask=7,umask=101Also known as Inbound.  Number of 64 byte cache line requests initiated by the attached device.; VTd - Type 0unc_iio_txn_req_of_cpu.mem_write.vtd1uncore ioNumber of Transactions requested of the CPU; Card writing to DRAMevent=0x84,ch_mask=0x20,fc_mask=7,umask=101Also known as Inbound.  Number of 64 byte cache line requests initiated by the attached device.; VTd - Type 1unc_iio_txn_req_of_cpu.msg.part0uncore ioNumber of Transactions requested of the CPU; Messagesevent=0x84,ch_mask=1,fc_mask=7,umask=0x4001Also known as Inbound.  Number of 64 byte cache line requests initiated by the attached device.; x16 card plugged in to stack, Or x8 card plugged in to Lane 0/1, Or x4 card is plugged in to slot 0unc_iio_txn_req_of_cpu.msg.part1uncore ioNumber of Transactions requested of the CPU; Messagesevent=0x84,ch_mask=2,fc_mask=7,umask=0x4001Also known as Inbound.  Number of 64 byte cache line requests initiated by the attached device.; x4 card is plugged in to slot 1unc_iio_txn_req_of_cpu.msg.part2uncore ioNumber of Transactions requested of the CPU; Messagesevent=0x84,ch_mask=4,fc_mask=7,umask=0x4001Also known as Inbound.  Number of 64 byte cache line requests initiated by the attached device.; x8 card plugged in to Lane 2/3, Or x4 card is plugged in to slot 2unc_iio_txn_req_of_cpu.msg.part3uncore ioNumber of Transactions requested of the CPU; Messagesevent=0x84,ch_mask=8,fc_mask=7,umask=0x4001Also known as Inbound.  Number of 64 byte cache line requests initiated by the attached device.; x4 card is plugged in to slot 3unc_iio_txn_req_of_cpu.msg.vtd0uncore ioNumber of Transactions requested of the CPU; Messagesevent=0x84,ch_mask=0x10,fc_mask=7,umask=0x4001Also known as Inbound.  Number of 64 byte cache line requests initiated by the attached device.; VTd - Type 0unc_iio_txn_req_of_cpu.msg.vtd1uncore ioNumber of Transactions requested of the CPU; Messagesevent=0x84,ch_mask=0x20,fc_mask=7,umask=0x4001Also known as Inbound.  Number of 64 byte cache line requests initiated by the attached device.; VTd - Type 1unc_iio_txn_req_of_cpu.peer_read.part0uncore ioPeer to peer read request of up to a 64 byte transaction is made by IIO Part0 to an IIO targetevent=0x84,ch_mask=1,fc_mask=7,umask=801Counts every peer to peer read request of up to a 64 byte transaction made by IIO Part0 to the MMIO space of an IIO target. In the general case, Part0 refers to a standard PCIe card of any size (x16,x8,x4) that is plugged directly into one of the PCIe slots. 
Part0 could also refer to any device plugged into the first slot of a PCIe riser card or to a device attached to the IIO unit which starts its use of the bus using lane 0 of the 16 lanes supported by the busunc_iio_txn_req_of_cpu.peer_read.part1uncore ioPeer to peer read request of up to a 64 byte transaction is made by IIO Part1 to an IIO targetevent=0x84,ch_mask=2,fc_mask=7,umask=801Counts every peer to peer read request of up to a 64 byte transaction made by IIO Part1 to the MMIO space of an IIO target. In the general case, Part1 refers to a x4 PCIe card plugged into the second slot of a PCIe riser card, but it could refer to any x4 device attached to the IIO unit using lanes starting at lane 4 of the 16 lanes supported by the busunc_iio_txn_req_of_cpu.peer_read.part2uncore ioPeer to peer read request of up to a 64 byte transaction is made by IIO Part2 to an IIO targetevent=0x84,ch_mask=4,fc_mask=7,umask=801Counts every peer to peer read request of up to a 64 byte transaction made by IIO Part2 to the MMIO space of an IIO target. In the general case, Part2 refers to a x4 or x8 PCIe card plugged into the third slot of a PCIe riser card, but it could refer to any x4 or x8 device attached to the IIO unit and using lanes starting at lane 8 of the 16 lanes supported by the busunc_iio_txn_req_of_cpu.peer_read.part3uncore ioPeer to peer read request of up to a 64 byte transaction is made by IIO Part3 to an IIO targetevent=0x84,ch_mask=8,fc_mask=7,umask=801Counts every peer to peer read request of up to a 64 byte transaction made by IIO Part3 to the MMIO space of an IIO target. In the general case, Part3 refers to a x4 PCIe card plugged into the fourth slot of a PCIe riser card, but it could refer to any device attached to the IIO unit using the lanes starting at lane 12 of the 16 lanes supported by the busunc_iio_txn_req_of_cpu.peer_read.vtd0uncore ioNumber of Transactions requested of the CPU; Card reading from another Card (same or different stack)event=0x84,ch_mask=0x10,fc_mask=7,umask=801Also known as Inbound.  Number of 64 byte cache line requests initiated by the attached device.; VTd - Type 0unc_iio_txn_req_of_cpu.peer_read.vtd1uncore ioNumber of Transactions requested of the CPU; Card reading from another Card (same or different stack)event=0x84,ch_mask=0x20,fc_mask=7,umask=801Also known as Inbound.  Number of 64 byte cache line requests initiated by the attached device.; VTd - Type 1unc_iio_txn_req_of_cpu.peer_write.part0uncore ioPeer to peer write request of up to a 64 byte transaction is made by IIO Part0 to an IIO targetevent=0x84,ch_mask=1,fc_mask=7,umask=201Counts every peer to peer write request of up to a 64 byte transaction of data made by IIO Part0 to the MMIO space of an IIO target. In the general case, Part0 refers to a standard PCIe card of any size (x16,x8,x4) that is plugged directly into one of the PCIe slots. 
Part0 could also refer to any device plugged into the first slot of a PCIe riser card or to a device attached to the IIO unit which starts its use of the bus using lane 0 of the 16 lanes supported by the busunc_iio_txn_req_of_cpu.peer_write.part1uncore ioPeer to peer write request of up to a 64 byte transaction is made by IIO Part1 to an IIO targetevent=0x84,ch_mask=2,fc_mask=7,umask=201Counts every peer to peer write request of up to a 64 byte transaction of data made by IIO Part1 to the MMIO space of an IIO target. In the general case, Part1 refers to a x4 PCIe card plugged into the second slot of a PCIe riser card, but it could refer to any x4 device attached to the IIO unit using lanes starting at lane 4 of the 16 lanes supported by the busunc_iio_txn_req_of_cpu.peer_write.part2uncore ioPeer to peer write request of up to a 64 byte transaction is made by IIO Part2 to an IIO targetevent=0x84,ch_mask=4,fc_mask=7,umask=201Counts every peer to peer write request of up to a 64 byte transaction of data made by IIO Part2 to the MMIO space of an IIO target. In the general case, Part2 refers to a x4 or x8 PCIe card plugged into the third slot of a PCIe riser card, but it could refer to any x4 or x8 device attached to the IIO unit and using lanes starting at lane 8 of the 16 lanes supported by the busunc_iio_txn_req_of_cpu.peer_write.part3uncore ioPeer to peer write request of up to a 64 byte transaction is made by IIO Part3 to an IIO targetevent=0x84,ch_mask=8,fc_mask=7,umask=201Counts every peer to peer write request of up to a 64 byte transaction of data made by IIO Part3 to the MMIO space of an IIO target. In the general case, Part3 refers to a x4 PCIe card plugged into the fourth slot of a PCIe riser card, but it could refer to any device attached to the IIO unit using the lanes starting at lane 12 of the 16 lanes supported by the busunc_iio_txn_req_of_cpu.peer_write.vtd0uncore ioNumber of Transactions requested of the CPU; Card writing to another Card (same or different stack)event=0x84,ch_mask=0x10,fc_mask=7,umask=201Also known as Inbound.  Number of 64 byte cache line requests initiated by the attached device.; VTd - Type 0unc_iio_txn_req_of_cpu.peer_write.vtd1uncore ioNumber of Transactions requested of the CPU; Card writing to another Card (same or different stack)event=0x84,ch_mask=0x20,fc_mask=7,umask=201Also known as Inbound.  Number of 64 byte cache line requests initiated by the attached device.; VTd - Type 1unc_iio_vtd_access.ctxt_missuncore ioVTd Access; context cache missevent=0x41,umask=201unc_iio_vtd_access.l1_missuncore ioVTd Access; L1 missevent=0x41,umask=401unc_iio_vtd_access.l2_missuncore ioVTd Access; L2 missevent=0x41,umask=801unc_iio_vtd_access.l3_missuncore ioVTd Access; L3 missevent=0x41,umask=0x1001unc_iio_vtd_access.l4_page_hituncore ioVTd Access; Vtd hitevent=0x41,umask=101unc_iio_vtd_access.tlb1_missuncore ioVTd Access; TLB missevent=0x41,umask=0x8001unc_iio_vtd_access.tlb_fulluncore ioVTd Access; TLB is fullevent=0x41,umask=0x4001unc_iio_vtd_access.tlb_missuncore ioVTd Access; TLB missevent=0x41,umask=0x2001unc_iio_vtd_occupancyuncore ioVTd Occupancyevent=0x4001llc_misses.mem_readuncore memoryread requests to memory controller. Derived from unc_m_cas_count.rdevent=4,umask=30164BytesCounts all CAS (Column Access Select) read commands issued to DRAM on a per channel basis.  CAS commands are issued to specify the address to read or write on DRAM, and this event increments for every read.  This event includes underfill reads due to partial write requests.  This event counts regardless of whether AutoPrecharge (which closes the DRAM Page automatically after a read/write) is enabled or not
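The 64Bytes scale attached to llc_misses.mem_read above (and to the write-side metric defined next) simply converts CAS commands to bytes via the cache-line size. A sketch of the conversion, with placeholder counts collected over a one-second interval:

```python
# Convert DRAM CAS counts to bandwidth using the 64Bytes scale noted above.
# Sample values are placeholders for counts collected over `seconds`.
LINE_BYTES = 64

rd_cas = 50_000_000    # llc_misses.mem_read  (unc_m_cas_count.rd)
wr_cas = 20_000_000    # llc_misses.mem_write (unc_m_cas_count.wr)
seconds = 1.0

read_mbps  = rd_cas * LINE_BYTES / 1e6 / seconds
write_mbps = wr_cas * LINE_BYTES / 1e6 / seconds
print(f"DRAM read:  {read_mbps:.0f} MB/s")
print(f"DRAM write: {write_mbps:.0f} MB/s")
```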
llc_misses.mem_writeuncore memorywrite requests to memory controller. Derived from unc_m_cas_count.wrevent=4,umask=0xc0164BytesCounts all CAS (Column Address Select) commands issued to DRAM per memory channel.  CAS commands are issued to specify the address to read or write on DRAM, and this event increments for every write. This event counts regardless of whether AutoPrecharge (which closes the DRAM Page automatically after a read/write) is enabled or notunc_m_act_count.bypuncore memoryDRAM Activate Count; Activate due to Bypassevent=1,umask=801Counts the number of DRAM Activate commands sent on this channel.  Activate commands are issued to open up a page on the DRAM devices so that it can be read or written to with a CAS.  One can calculate the number of Page Misses by subtracting the number of Page Miss precharges from the number of Activatesunc_m_act_count.wruncore memoryDRAM Page Activate commands sent due to a write requestevent=1,umask=201Counts DRAM Page Activate commands sent on this channel due to a write request to the iMC (Memory Controller).  Activate commands are issued to open up a page on the DRAM devices so that it can be read or written to with a CAS (Column Access Select) commandunc_m_cas_count.alluncore memoryAll DRAM CAS Commands issuedevent=4,umask=0xf01Counts all CAS (Column Address Select) commands issued to DRAM per memory channel.  CAS commands are issued to specify the address to read or write on DRAM, so this event increments for every read and write. This event counts regardless of whether AutoPrecharge (which closes the DRAM Page automatically after a read/write) is enabled or notunc_m_cas_count.rduncore memoryAll DRAM Read CAS Commands issued (including underfills)event=4,umask=301Counts all CAS (Column Access Select) read commands issued to DRAM on a per channel basis.  CAS commands are issued to specify the address to read or write on DRAM, and this event increments for every read.  This event includes underfill reads due to partial write requests.  This event counts regardless of whether AutoPrecharge (which closes the DRAM Page automatically after a read/write) is enabled or notunc_m_cas_count.rd_isochuncore memoryDRAM CAS (Column Address Strobe) Commands.; Read CAS issued in Read ISOCH Modeevent=4,umask=0x4001unc_m_cas_count.rd_reguncore memoryAll DRAM Read CAS Commands issued (does not include underfills)event=4,umask=101Counts CAS (Column Access Select) regular read commands issued to DRAM on a per channel basis.  CAS commands are issued to specify the address to read or write on DRAM, and this event increments for every regular read.  This event only counts regular reads and does not include underfill reads due to partial write requests.  This event counts regardless of whether AutoPrecharge (which closes the DRAM Page automatically after a read/write) is enabled or notunc_m_cas_count.rd_rmmuncore memoryDRAM CAS (Column Address Strobe) Commands.; Read CAS issued in RMMevent=4,umask=0x2001unc_m_cas_count.rd_underfilluncore memoryDRAM Underfill Read CAS Commands issuedevent=4,umask=201Counts CAS (Column Access Select) underfill read commands issued to DRAM due to a partial write, on a per channel basis.  CAS commands are issued to specify the address to read or write on DRAM, and this command counts underfill reads.  Partial writes must be completed by first reading in the underfill from DRAM and then merging in the partial write data before writing the full line back to DRAM. This event will generally count about the same as the number of partial writes, but may be slightly less because of partials hitting in the WPQ (due to a previous write request)
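The precharge and CAS counts above can be combined into a rough page-locality figure: dividing the page-miss precharge count (unc_m_pre_count.page_miss, described further down this table) by all CAS commands approximates the fraction of accesses that missed the open DRAM page. This is only an approximation, since that precharge count excludes implicit auto-precharges and page-close-timer precharges. A sketch with placeholder values:

```python
# DRAM page-locality sketch from events in this table:
#   unc_m_pre_count.page_miss (event=0x2, umask=0x1)
#   unc_m_cas_count.all       (event=0x4, umask=0xf)
# Sample values are placeholders for counts read over some interval.
pre_page_miss = 1_250_000   # unc_m_pre_count.page_miss
cas_all       = 9_800_000   # unc_m_cas_count.all

page_miss_rate = pre_page_miss / cas_all
print(f"page-miss rate: {page_miss_rate:.1%}")
print(f"page-hit rate (approx.): {1 - page_miss_rate:.1%}")
```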
unc_m_cas_count.rd_wmmuncore memoryDRAM CAS (Column Address Strobe) Commands.; Read CAS issued in WMMevent=4,umask=0x1001unc_m_cas_count.wruncore memoryAll DRAM Write CAS commands issuedevent=4,umask=0xc01Counts all CAS (Column Address Select) commands issued to DRAM per memory channel.  CAS commands are issued to specify the address to read or write on DRAM, and this event increments for every write. This event counts regardless of whether AutoPrecharge (which closes the DRAM Page automatically after a read/write) is enabled or notunc_m_cas_count.wr_isochuncore memoryDRAM CAS (Column Address Strobe) Commands.; Write CAS issued in Write ISOCH Modeevent=4,umask=0x8001unc_m_cas_count.wr_rmmuncore memoryDRAM CAS (Column Address Strobe) Commands.; DRAM WR_CAS (w/ and w/out auto-pre) in Read Major Modeevent=4,umask=801Counts the total number of Opportunistic DRAM Write CAS commands issued on this channel while in Read-Major-Modeunc_m_cas_count.wr_wmmuncore memoryDRAM CAS (Column Address Strobe) Commands.; DRAM WR_CAS (w/ and w/out auto-pre) in Write Major Modeevent=4,umask=401Counts the total number of DRAM Write CAS commands issued on this channel while in Write-Major-Modeunc_m_clockticksuncore memoryMemory controller clock ticksevent=001Counts clockticks of the fixed frequency clock of the memory controller using one of the programmable countersunc_m_clockticks_funcore memoryClockticks in the Memory Controller using a dedicated 48-bit Fixed Counterevent=0xff01unc_m_majmode2.dram_cycuncore memoryUNC_M_MAJMODE2.DRAM_CYCevent=0xed,umask=201unc_m_majmode2.dram_enteruncore memoryUNC_M_MAJMODE2.DRAM_ENTERevent=0xed,umask=801unc_m_majmode2.pmm_cycuncore memoryMajor Mode 2 : Cycles in PMM major modeevent=0xed,umask=101unc_m_majmode2.pmm_enteruncore memoryMajor Mode 2 : Entered PMM major modeevent=0xed,umask=401unc_m_pmm_bandwidth.readuncore memoryIntel Optane DC persistent memory bandwidth read (MB/sec). Derived from unc_m_pmm_rpq_insertsevent=0xe3016.103515625E-5MB/secunc_m_pmm_bandwidth.totaluncore memoryIntel Optane DC persistent memory bandwidth total (MB/sec). Derived from unc_m_pmm_rpq_insertsevent=0xe3016.103515625E-5MB/secunc_m_pmm_bandwidth.writeuncore memoryIntel Optane DC persistent memory bandwidth write (MB/sec). Derived from unc_m_pmm_wpq_insertsevent=0xe7016.103515625E-5MB/sec
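The 6.103515625E-5 MB/sec scale on the PMM bandwidth metrics above follows from each RPQ/WPQ insert moving one 64-byte cache line: 64 / 2^20 MiB is exactly that constant. A minimal sketch of the derivation, with a placeholder insert rate:

```python
# Why the "6.103515625E-5 MB/sec" scale shown above works: each RPQ/WPQ
# insert moves one 64-byte line, and 64 / 2**20 MiB = 6.103515625e-5.
SCALE = 64 / 2**20            # MiB per insert
assert SCALE == 6.103515625e-5

wpq_inserts_per_sec = 2_000_000   # unc_m_pmm_wpq_inserts rate (placeholder)
print(f"PMM write bandwidth: {wpq_inserts_per_sec * SCALE:.1f} MB/sec")
```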
unc_m_pmm_cmd1.alluncore memoryAll commands for Intel(R) Optane(TM) DC persistent memoryevent=0xea,umask=101unc_m_pmm_cmd1.miscuncore memoryMisc Commands (error, flow ACKs)event=0xea,umask=0x8001unc_m_pmm_cmd1.misc_gntuncore memoryMisc GNTsevent=0xea,umask=0x4001unc_m_pmm_cmd1.rduncore memoryRegular read (RPQ) commands for Intel(R) Optane(TM) DC persistent memoryevent=0xea,umask=201All Reads - RPQ or Ufillunc_m_pmm_cmd1.rpq_gntsuncore memoryRPQ GNTsevent=0xea,umask=0x1001unc_m_pmm_cmd1.ufill_rduncore memoryUnderfill read commands for Intel(R) Optane(TM) DC persistent memoryevent=0xea,umask=801Underfill readsunc_m_pmm_cmd1.wpq_gntsuncore memoryWPQ GNTsevent=0xea,umask=0x2001unc_m_pmm_cmd1.wruncore memoryWrite commands for Intel(R) Optane(TM) DC persistent memoryevent=0xea,umask=401Writesunc_m_pmm_cmd2.nodata_expuncore memoryExpected No data packet (ERID matched NDP encoding)event=0xeb,umask=201unc_m_pmm_cmd2.nodata_unexpuncore memoryUnexpected No data packet (ERID matched a Read, but data was a NDP)event=0xeb,umask=401unc_m_pmm_cmd2.opp_rduncore memoryOpportunistic Readsevent=0xeb,umask=101unc_m_pmm_cmd2.pmm_ecc_erroruncore memoryPMM ECC Errorsevent=0xeb,umask=0x2001unc_m_pmm_cmd2.pmm_erid_erroruncore memoryPMM ERID detectable parity errorevent=0xeb,umask=0x4001unc_m_pmm_cmd2.reqs_slot0uncore memoryRead Requests - Slot 0event=0xeb,umask=801unc_m_pmm_cmd2.reqs_slot1uncore memoryRead Requests - Slot 1event=0xeb,umask=0x1001unc_m_pmm_majmode1.partial_wr_cycuncore memoryPMM Major Mode; Cycles PMM is in Partial Write Major Modeevent=0xec,umask=401unc_m_pmm_majmode1.partial_wr_enteruncore memoryPMM Major Modeevent=0xec,umask=0x2001unc_m_pmm_majmode1.partial_wr_exituncore memoryPMM Major Modeevent=0xec,umask=0x4001unc_m_pmm_majmode1.rd_cycuncore memoryPMM Major Mode; Cycles PMM is in Read Major Modeevent=0xec,umask=101unc_m_pmm_majmode1.wr_cycuncore memoryPMM Major Mode; Cycles PMM is in Write Major Modeevent=0xec,umask=201unc_m_pmm_read_latencyuncore memoryIntel Optane DC persistent memory read latency (ns). Derived from unc_m_pmm_rpq_occupancy.allevent=0xe0,umask=1016000000000ns
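The read-latency metric above is a Little's-law derivation from the PMM read queue events that follow: accumulated occupancy divided by inserts gives the average time a request spends queued, in iMC clocks, which a clock count converts to wall time. A hedged sketch of that arithmetic with placeholder values (the exact scale perf applies may differ from this simplified form):

```python
# Little's-law sketch for PMM read latency, using the queue events below:
# average occupancy / insert count = average time in queue (in iMC clocks);
# unc_m_clockticks over the same interval converts clocks to seconds.
# All numbers are placeholders.
rpq_occupancy = 180_000_000    # unc_m_pmm_rpq_occupancy.all (summed per cycle)
rpq_inserts   = 600_000        # unc_m_pmm_rpq_inserts
imc_clocks    = 2_660_000_000  # unc_m_clockticks over the same interval
seconds       = 1.0            # wall-clock length of the sample

avg_occupancy  = rpq_occupancy / imc_clocks     # entries per cycle
latency_cycles = rpq_occupancy / rpq_inserts    # cycles per request
latency_ns     = latency_cycles * (seconds / imc_clocks) * 1e9
print(f"avg PMM RPQ occupancy: {avg_occupancy:.2f} entries")
print(f"avg PMM read latency:  {latency_ns:.0f} ns")
```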
unc_m_pmm_rpq_cycles_fulluncore memoryPMM Read Queue Cycles Fullevent=0xe201unc_m_pmm_rpq_cycles_neuncore memoryPMM Read Queue Cycles Not Emptyevent=0xe101unc_m_pmm_rpq_insertsuncore memoryRead requests allocated in the PMM Read Pending Queue for Intel Optane DC persistent memoryevent=0xe301unc_m_pmm_rpq_occupancy.alluncore memoryRead Pending Queue Occupancy of all read requests for Intel Optane DC persistent memoryevent=0xe0,umask=101unc_m_pmm_rpq_occupancy.gnt_waituncore memoryPMM Occupancyevent=0xe0,umask=401unc_m_pmm_wpq_cycles_fulluncore memoryPMM Write Queue Cycles Fullevent=0xe601unc_m_pmm_wpq_cycles_neuncore memoryPMM Write Queue Cycles Not Emptyevent=0xe501unc_m_pmm_wpq_insertsuncore memoryWrite requests allocated in the PMM Write Pending Queue for Intel Optane DC persistent memoryevent=0xe701unc_m_pmm_wpq_occupancy.alluncore memoryWrite Pending Queue Occupancy of all write requests for Intel(R) Optane(TM) DC persistent memoryevent=0xe4,umask=101unc_m_pmm_wpq_occupancy.casuncore memoryPMM Occupancyevent=0xe4,umask=201unc_m_pmm_wpq_occupancy.pwruncore memoryPMM Occupancyevent=0xe4,umask=401unc_m_pmm_wpq_pcommituncore memoryUNC_M_PMM_WPQ_PCOMMITevent=0xe801unc_m_pmm_wpq_pcommit_cycuncore memoryUNC_M_PMM_WPQ_PCOMMIT_CYCevent=0xe901unc_m_power_channel_ppduncore memoryCycles where DRAM ranks are in power down (CKE) modeevent=0x8501Counts cycles when all the ranks in the channel are in PPD (PreCharge Power Down) mode. If IBT (Input Buffer Terminators)=off is enabled, then this event counts the cycles in PPD mode. If IBT=off is not enabled, then this event counts the number of cycles when being in PPD mode could have been taken advantage ofunc_m_power_self_refreshuncore memoryCycles Memory is in self refresh power modeevent=0x4301Counts the number of cycles when the iMC (memory controller) is in self-refresh and has a clock. This happens in some ACPI CPU package C-states for the sleep levels. For example, the PCU (Power Control Unit) may ask the iMC to enter self-refresh even though some of the cores are still processing. One use of this is for Intel(R) Dynamic Power Technology.  Self-refresh is required during package C3 and C6, but there is no clock in the iMC at this time, so it is not possible to count these casesunc_m_pre_count.page_missuncore memoryPre-charges due to page missesevent=2,umask=101Counts the number of explicit DRAM Precharge commands sent on this channel as a result of a DRAM page miss. This does not include the implicit precharge commands sent with CAS commands in Auto-Precharge mode. 
This does not include Precharge commands sent as a result of a page close counter expirationunc_m_pre_count.rduncore memoryPre-charge for readsevent=2,umask=401Counts the number of explicit DRAM Precharge commands issued on a per channel basis due to a read, so as to close the previous DRAM page, before opening the requested pageunc_m_pre_count.wruncore memoryPre-charge for writesevent=2,umask=801Counts the number of DRAM Precharge commands sent on this channelunc_m_rd_cas_rank0.allbanksuncore memoryRD_CAS Access to Rank 0; All Banksevent=0xb0,umask=0x1001unc_m_rd_cas_rank0.bank0uncore memoryRD_CAS Access to Rank 0; Bank 0event=0xb001unc_m_rd_cas_rank0.bank1uncore memoryRD_CAS Access to Rank 0; Bank 1event=0xb0,umask=101unc_m_rd_cas_rank0.bank10uncore memoryRD_CAS Access to Rank 0; Bank 10event=0xb0,umask=0xa01unc_m_rd_cas_rank0.bank11uncore memoryRD_CAS Access to Rank 0; Bank 11event=0xb0,umask=0xb01unc_m_rd_cas_rank0.bank12uncore memoryRD_CAS Access to Rank 0; Bank 12event=0xb0,umask=0xc01unc_m_rd_cas_rank0.bank13uncore memoryRD_CAS Access to Rank 0; Bank 13event=0xb0,umask=0xd01unc_m_rd_cas_rank0.bank14uncore memoryRD_CAS Access to Rank 0; Bank 14event=0xb0,umask=0xe01unc_m_rd_cas_rank0.bank15uncore memoryRD_CAS Access to Rank 0; Bank 15event=0xb0,umask=0xf01unc_m_rd_cas_rank0.bank2uncore memoryRD_CAS Access to Rank 0; Bank 2event=0xb0,umask=201unc_m_rd_cas_rank0.bank3uncore memoryRD_CAS Access to Rank 0; Bank 3event=0xb0,umask=301unc_m_rd_cas_rank0.bank4uncore memoryRD_CAS Access to Rank 0; Bank 4event=0xb0,umask=401unc_m_rd_cas_rank0.bank5uncore memoryRD_CAS Access to Rank 0; Bank 5event=0xb0,umask=501unc_m_rd_cas_rank0.bank6uncore memoryRD_CAS Access to Rank 0; Bank 6event=0xb0,umask=601unc_m_rd_cas_rank0.bank7uncore memoryRD_CAS Access to Rank 0; Bank 7event=0xb0,umask=701unc_m_rd_cas_rank0.bank8uncore memoryRD_CAS Access to Rank 0; Bank 8event=0xb0,umask=801unc_m_rd_cas_rank0.bank9uncore memoryRD_CAS Access to Rank 0; Bank 9event=0xb0,umask=901unc_m_rd_cas_rank0.bankg0uncore memoryRD_CAS Access to Rank 0; Bank Group 0 (Banks 0-3)event=0xb0,umask=0x1101unc_m_rd_cas_rank0.bankg1uncore memoryRD_CAS Access to Rank 0; Bank Group 1 (Banks 4-7)event=0xb0,umask=0x1201unc_m_rd_cas_rank0.bankg2uncore memoryRD_CAS Access to Rank 0; Bank Group 2 (Banks 8-11)event=0xb0,umask=0x1301unc_m_rd_cas_rank0.bankg3uncore memoryRD_CAS Access to Rank 0; Bank Group 3 (Banks 12-15)event=0xb0,umask=0x1401unc_m_rd_cas_rank1.allbanksuncore memoryRD_CAS Access to Rank 1; All Banksevent=0xb1,umask=0x1001unc_m_rd_cas_rank1.bank0uncore memoryRD_CAS Access to Rank 1; Bank 0event=0xb101unc_m_rd_cas_rank1.bank1uncore memoryRD_CAS Access to Rank 1; Bank 1event=0xb1,umask=101unc_m_rd_cas_rank1.bank10uncore memoryRD_CAS Access to Rank 1; Bank 10event=0xb1,umask=0xa01unc_m_rd_cas_rank1.bank11uncore memoryRD_CAS Access to Rank 1; Bank 11event=0xb1,umask=0xb01unc_m_rd_cas_rank1.bank12uncore memoryRD_CAS Access to Rank 1; Bank 12event=0xb1,umask=0xc01unc_m_rd_cas_rank1.bank13uncore memoryRD_CAS Access to Rank 1; Bank 13event=0xb1,umask=0xd01unc_m_rd_cas_rank1.bank14uncore memoryRD_CAS Access to Rank 1; Bank 14event=0xb1,umask=0xe01unc_m_rd_cas_rank1.bank15uncore memoryRD_CAS Access to Rank 1; Bank 15event=0xb1,umask=0xf01unc_m_rd_cas_rank1.bank2uncore memoryRD_CAS Access to Rank 1; Bank 2event=0xb1,umask=201unc_m_rd_cas_rank1.bank3uncore memoryRD_CAS Access to Rank 1; Bank 3event=0xb1,umask=301unc_m_rd_cas_rank1.bank4uncore memoryRD_CAS Access to Rank 1; Bank 4event=0xb1,umask=401unc_m_rd_cas_rank1.bank5uncore memoryRD_CAS 
Access to Rank 1; Bank 5event=0xb1,umask=501unc_m_rd_cas_rank1.bank6uncore memoryRD_CAS Access to Rank 1; Bank 6event=0xb1,umask=601unc_m_rd_cas_rank1.bank7uncore memoryRD_CAS Access to Rank 1; Bank 7event=0xb1,umask=701unc_m_rd_cas_rank1.bank8uncore memoryRD_CAS Access to Rank 1; Bank 8event=0xb1,umask=801unc_m_rd_cas_rank1.bank9uncore memoryRD_CAS Access to Rank 1; Bank 9event=0xb1,umask=901unc_m_rd_cas_rank1.bankg0uncore memoryRD_CAS Access to Rank 1; Bank Group 0 (Banks 0-3)event=0xb1,umask=0x1101unc_m_rd_cas_rank1.bankg1uncore memoryRD_CAS Access to Rank 1; Bank Group 1 (Banks 4-7)event=0xb1,umask=0x1201unc_m_rd_cas_rank1.bankg2uncore memoryRD_CAS Access to Rank 1; Bank Group 2 (Banks 8-11)event=0xb1,umask=0x1301unc_m_rd_cas_rank1.bankg3uncore memoryRD_CAS Access to Rank 1; Bank Group 3 (Banks 12-15)event=0xb1,umask=0x1401unc_m_rd_cas_rank2.allbanksuncore memoryRD_CAS Access to Rank 2; All Banksevent=0xb2,umask=0x1001unc_m_rd_cas_rank2.bank0uncore memoryRD_CAS Access to Rank 2; Bank 0event=0xb201unc_m_rd_cas_rank2.bank1uncore memoryRD_CAS Access to Rank 2; Bank 1event=0xb2,umask=101unc_m_rd_cas_rank2.bank10uncore memoryRD_CAS Access to Rank 2; Bank 10event=0xb2,umask=0xa01unc_m_rd_cas_rank2.bank11uncore memoryRD_CAS Access to Rank 2; Bank 11event=0xb2,umask=0xb01unc_m_rd_cas_rank2.bank12uncore memoryRD_CAS Access to Rank 2; Bank 12event=0xb2,umask=0xc01unc_m_rd_cas_rank2.bank13uncore memoryRD_CAS Access to Rank 2; Bank 13event=0xb2,umask=0xd01unc_m_rd_cas_rank2.bank14uncore memoryRD_CAS Access to Rank 2; Bank 14event=0xb2,umask=0xe01unc_m_rd_cas_rank2.bank15uncore memoryRD_CAS Access to Rank 2; Bank 15event=0xb2,umask=0xf01unc_m_rd_cas_rank2.bank2uncore memoryRD_CAS Access to Rank 2; Bank 2event=0xb2,umask=201unc_m_rd_cas_rank2.bank3uncore memoryRD_CAS Access to Rank 2; Bank 3event=0xb2,umask=301unc_m_rd_cas_rank2.bank4uncore memoryRD_CAS Access to Rank 2; Bank 4event=0xb2,umask=401unc_m_rd_cas_rank2.bank5uncore memoryRD_CAS Access to Rank 2; Bank 5event=0xb2,umask=501unc_m_rd_cas_rank2.bank6uncore memoryRD_CAS Access to Rank 2; Bank 6event=0xb2,umask=601unc_m_rd_cas_rank2.bank7uncore memoryRD_CAS Access to Rank 2; Bank 7event=0xb2,umask=701unc_m_rd_cas_rank2.bank8uncore memoryRD_CAS Access to Rank 2; Bank 8event=0xb2,umask=801unc_m_rd_cas_rank2.bank9uncore memoryRD_CAS Access to Rank 2; Bank 9event=0xb2,umask=901unc_m_rd_cas_rank2.bankg0uncore memoryRD_CAS Access to Rank 2; Bank Group 0 (Banks 0-3)event=0xb2,umask=0x1101unc_m_rd_cas_rank2.bankg1uncore memoryRD_CAS Access to Rank 2; Bank Group 1 (Banks 4-7)event=0xb2,umask=0x1201unc_m_rd_cas_rank2.bankg2uncore memoryRD_CAS Access to Rank 2; Bank Group 2 (Banks 8-11)event=0xb2,umask=0x1301unc_m_rd_cas_rank2.bankg3uncore memoryRD_CAS Access to Rank 2; Bank Group 3 (Banks 12-15)event=0xb2,umask=0x1401unc_m_rd_cas_rank3.allbanksuncore memoryRD_CAS Access to Rank 3; All Banksevent=0xb3,umask=0x1001unc_m_rd_cas_rank3.bank0uncore memoryRD_CAS Access to Rank 3; Bank 0event=0xb301unc_m_rd_cas_rank3.bank1uncore memoryRD_CAS Access to Rank 3; Bank 1event=0xb3,umask=101unc_m_rd_cas_rank3.bank10uncore memoryRD_CAS Access to Rank 3; Bank 10event=0xb3,umask=0xa01unc_m_rd_cas_rank3.bank11uncore memoryRD_CAS Access to Rank 3; Bank 11event=0xb3,umask=0xb01unc_m_rd_cas_rank3.bank12uncore memoryRD_CAS Access to Rank 3; Bank 12event=0xb3,umask=0xc01unc_m_rd_cas_rank3.bank13uncore memoryRD_CAS Access to Rank 3; Bank 13event=0xb3,umask=0xd01unc_m_rd_cas_rank3.bank14uncore memoryRD_CAS Access to Rank 3; Bank 
14event=0xb3,umask=0xe01unc_m_rd_cas_rank3.bank15uncore memoryRD_CAS Access to Rank 3; Bank 15event=0xb3,umask=0xf01unc_m_rd_cas_rank3.bank2uncore memoryRD_CAS Access to Rank 3; Bank 2event=0xb3,umask=201unc_m_rd_cas_rank3.bank3uncore memoryRD_CAS Access to Rank 3; Bank 3event=0xb3,umask=301unc_m_rd_cas_rank3.bank4uncore memoryRD_CAS Access to Rank 3; Bank 4event=0xb3,umask=401unc_m_rd_cas_rank3.bank5uncore memoryRD_CAS Access to Rank 3; Bank 5event=0xb3,umask=501unc_m_rd_cas_rank3.bank6uncore memoryRD_CAS Access to Rank 3; Bank 6event=0xb3,umask=601unc_m_rd_cas_rank3.bank7uncore memoryRD_CAS Access to Rank 3; Bank 7event=0xb3,umask=701unc_m_rd_cas_rank3.bank8uncore memoryRD_CAS Access to Rank 3; Bank 8event=0xb3,umask=801unc_m_rd_cas_rank3.bank9uncore memoryRD_CAS Access to Rank 3; Bank 9event=0xb3,umask=901unc_m_rd_cas_rank3.bankg0uncore memoryRD_CAS Access to Rank 3; Bank Group 0 (Banks 0-3)event=0xb3,umask=0x1101unc_m_rd_cas_rank3.bankg1uncore memoryRD_CAS Access to Rank 3; Bank Group 1 (Banks 4-7)event=0xb3,umask=0x1201unc_m_rd_cas_rank3.bankg2uncore memoryRD_CAS Access to Rank 3; Bank Group 2 (Banks 8-11)event=0xb3,umask=0x1301unc_m_rd_cas_rank3.bankg3uncore memoryRD_CAS Access to Rank 3; Bank Group 3 (Banks 12-15)event=0xb3,umask=0x1401unc_m_rd_cas_rank4.allbanksuncore memoryRD_CAS Access to Rank 4; All Banksevent=0xb4,umask=0x1001unc_m_rd_cas_rank4.bank0uncore memoryRD_CAS Access to Rank 4; Bank 0event=0xb401unc_m_rd_cas_rank4.bank1uncore memoryRD_CAS Access to Rank 4; Bank 1event=0xb4,umask=101unc_m_rd_cas_rank4.bank10uncore memoryRD_CAS Access to Rank 4; Bank 10event=0xb4,umask=0xa01unc_m_rd_cas_rank4.bank11uncore memoryRD_CAS Access to Rank 4; Bank 11event=0xb4,umask=0xb01unc_m_rd_cas_rank4.bank12uncore memoryRD_CAS Access to Rank 4; Bank 12event=0xb4,umask=0xc01unc_m_rd_cas_rank4.bank13uncore memoryRD_CAS Access to Rank 4; Bank 13event=0xb4,umask=0xd01unc_m_rd_cas_rank4.bank14uncore memoryRD_CAS Access to Rank 4; Bank 14event=0xb4,umask=0xe01unc_m_rd_cas_rank4.bank15uncore memoryRD_CAS Access to Rank 4; Bank 15event=0xb4,umask=0xf01unc_m_rd_cas_rank4.bank2uncore memoryRD_CAS Access to Rank 4; Bank 2event=0xb4,umask=201unc_m_rd_cas_rank4.bank3uncore memoryRD_CAS Access to Rank 4; Bank 3event=0xb4,umask=301unc_m_rd_cas_rank4.bank4uncore memoryRD_CAS Access to Rank 4; Bank 4event=0xb4,umask=401unc_m_rd_cas_rank4.bank5uncore memoryRD_CAS Access to Rank 4; Bank 5event=0xb4,umask=501unc_m_rd_cas_rank4.bank6uncore memoryRD_CAS Access to Rank 4; Bank 6event=0xb4,umask=601unc_m_rd_cas_rank4.bank7uncore memoryRD_CAS Access to Rank 4; Bank 7event=0xb4,umask=701unc_m_rd_cas_rank4.bank8uncore memoryRD_CAS Access to Rank 4; Bank 8event=0xb4,umask=801unc_m_rd_cas_rank4.bank9uncore memoryRD_CAS Access to Rank 4; Bank 9event=0xb4,umask=901unc_m_rd_cas_rank4.bankg0uncore memoryRD_CAS Access to Rank 4; Bank Group 0 (Banks 0-3)event=0xb4,umask=0x1101unc_m_rd_cas_rank4.bankg1uncore memoryRD_CAS Access to Rank 4; Bank Group 1 (Banks 4-7)event=0xb4,umask=0x1201unc_m_rd_cas_rank4.bankg2uncore memoryRD_CAS Access to Rank 4; Bank Group 2 (Banks 8-11)event=0xb4,umask=0x1301unc_m_rd_cas_rank4.bankg3uncore memoryRD_CAS Access to Rank 4; Bank Group 3 (Banks 12-15)event=0xb4,umask=0x1401unc_m_rd_cas_rank5.allbanksuncore memoryRD_CAS Access to Rank 5; All Banksevent=0xb5,umask=0x1001unc_m_rd_cas_rank5.bank0uncore memoryRD_CAS Access to Rank 5; Bank 0event=0xb501unc_m_rd_cas_rank5.bank1uncore memoryRD_CAS Access to Rank 5; Bank 1event=0xb5,umask=101unc_m_rd_cas_rank5.bank10uncore memoryRD_CAS Access to Rank 5; 
Bank 10event=0xb5,umask=0xa01unc_m_rd_cas_rank5.bank11uncore memoryRD_CAS Access to Rank 5; Bank 11event=0xb5,umask=0xb01unc_m_rd_cas_rank5.bank12uncore memoryRD_CAS Access to Rank 5; Bank 12event=0xb5,umask=0xc01unc_m_rd_cas_rank5.bank13uncore memoryRD_CAS Access to Rank 5; Bank 13event=0xb5,umask=0xd01unc_m_rd_cas_rank5.bank14uncore memoryRD_CAS Access to Rank 5; Bank 14event=0xb5,umask=0xe01unc_m_rd_cas_rank5.bank15uncore memoryRD_CAS Access to Rank 5; Bank 15event=0xb5,umask=0xf01unc_m_rd_cas_rank5.bank2uncore memoryRD_CAS Access to Rank 5; Bank 2event=0xb5,umask=201unc_m_rd_cas_rank5.bank3uncore memoryRD_CAS Access to Rank 5; Bank 3event=0xb5,umask=301unc_m_rd_cas_rank5.bank4uncore memoryRD_CAS Access to Rank 5; Bank 4event=0xb5,umask=401unc_m_rd_cas_rank5.bank5uncore memoryRD_CAS Access to Rank 5; Bank 5event=0xb5,umask=501unc_m_rd_cas_rank5.bank6uncore memoryRD_CAS Access to Rank 5; Bank 6event=0xb5,umask=601unc_m_rd_cas_rank5.bank7uncore memoryRD_CAS Access to Rank 5; Bank 7event=0xb5,umask=701unc_m_rd_cas_rank5.bank8uncore memoryRD_CAS Access to Rank 5; Bank 8event=0xb5,umask=801unc_m_rd_cas_rank5.bank9uncore memoryRD_CAS Access to Rank 5; Bank 9event=0xb5,umask=901unc_m_rd_cas_rank5.bankg0uncore memoryRD_CAS Access to Rank 5; Bank Group 0 (Banks 0-3)event=0xb5,umask=0x1101unc_m_rd_cas_rank5.bankg1uncore memoryRD_CAS Access to Rank 5; Bank Group 1 (Banks 4-7)event=0xb5,umask=0x1201unc_m_rd_cas_rank5.bankg2uncore memoryRD_CAS Access to Rank 5; Bank Group 2 (Banks 8-11)event=0xb5,umask=0x1301unc_m_rd_cas_rank5.bankg3uncore memoryRD_CAS Access to Rank 5; Bank Group 3 (Banks 12-15)event=0xb5,umask=0x1401unc_m_rd_cas_rank6.allbanksuncore memoryRD_CAS Access to Rank 6; All Banksevent=0xb6,umask=0x1001unc_m_rd_cas_rank6.bank0uncore memoryRD_CAS Access to Rank 6; Bank 0event=0xb601unc_m_rd_cas_rank6.bank1uncore memoryRD_CAS Access to Rank 6; Bank 1event=0xb6,umask=101unc_m_rd_cas_rank6.bank10uncore memoryRD_CAS Access to Rank 6; Bank 10event=0xb6,umask=0xa01unc_m_rd_cas_rank6.bank11uncore memoryRD_CAS Access to Rank 6; Bank 11event=0xb6,umask=0xb01unc_m_rd_cas_rank6.bank12uncore memoryRD_CAS Access to Rank 6; Bank 12event=0xb6,umask=0xc01unc_m_rd_cas_rank6.bank13uncore memoryRD_CAS Access to Rank 6; Bank 13event=0xb6,umask=0xd01unc_m_rd_cas_rank6.bank14uncore memoryRD_CAS Access to Rank 6; Bank 14event=0xb6,umask=0xe01unc_m_rd_cas_rank6.bank15uncore memoryRD_CAS Access to Rank 6; Bank 15event=0xb6,umask=0xf01unc_m_rd_cas_rank6.bank2uncore memoryRD_CAS Access to Rank 6; Bank 2event=0xb6,umask=201unc_m_rd_cas_rank6.bank3uncore memoryRD_CAS Access to Rank 6; Bank 3event=0xb6,umask=301unc_m_rd_cas_rank6.bank4uncore memoryRD_CAS Access to Rank 6; Bank 4event=0xb6,umask=401unc_m_rd_cas_rank6.bank5uncore memoryRD_CAS Access to Rank 6; Bank 5event=0xb6,umask=501unc_m_rd_cas_rank6.bank6uncore memoryRD_CAS Access to Rank 6; Bank 6event=0xb6,umask=601unc_m_rd_cas_rank6.bank7uncore memoryRD_CAS Access to Rank 6; Bank 7event=0xb6,umask=701unc_m_rd_cas_rank6.bank8uncore memoryRD_CAS Access to Rank 6; Bank 8event=0xb6,umask=801unc_m_rd_cas_rank6.bank9uncore memoryRD_CAS Access to Rank 6; Bank 9event=0xb6,umask=901unc_m_rd_cas_rank6.bankg0uncore memoryRD_CAS Access to Rank 6; Bank Group 0 (Banks 0-3)event=0xb6,umask=0x1101unc_m_rd_cas_rank6.bankg1uncore memoryRD_CAS Access to Rank 6; Bank Group 1 (Banks 4-7)event=0xb6,umask=0x1201unc_m_rd_cas_rank6.bankg2uncore memoryRD_CAS Access to Rank 6; Bank Group 2 (Banks 8-11)event=0xb6,umask=0x1301unc_m_rd_cas_rank6.bankg3uncore memoryRD_CAS Access to Rank 6; 
Bank Group 3 (Banks 12-15)event=0xb6,umask=0x1401unc_m_rd_cas_rank7.allbanksuncore memoryRD_CAS Access to Rank 7; All Banksevent=0xb7,umask=0x1001unc_m_rd_cas_rank7.bank0uncore memoryRD_CAS Access to Rank 7; Bank 0event=0xb701unc_m_rd_cas_rank7.bank1uncore memoryRD_CAS Access to Rank 7; Bank 1event=0xb7,umask=101unc_m_rd_cas_rank7.bank10uncore memoryRD_CAS Access to Rank 7; Bank 10event=0xb7,umask=0xa01unc_m_rd_cas_rank7.bank11uncore memoryRD_CAS Access to Rank 7; Bank 11event=0xb7,umask=0xb01unc_m_rd_cas_rank7.bank12uncore memoryRD_CAS Access to Rank 7; Bank 12event=0xb7,umask=0xc01unc_m_rd_cas_rank7.bank13uncore memoryRD_CAS Access to Rank 7; Bank 13event=0xb7,umask=0xd01unc_m_rd_cas_rank7.bank14uncore memoryRD_CAS Access to Rank 7; Bank 14event=0xb7,umask=0xe01unc_m_rd_cas_rank7.bank15uncore memoryRD_CAS Access to Rank 7; Bank 15event=0xb7,umask=0xf01unc_m_rd_cas_rank7.bank2uncore memoryRD_CAS Access to Rank 7; Bank 2event=0xb7,umask=201unc_m_rd_cas_rank7.bank3uncore memoryRD_CAS Access to Rank 7; Bank 3event=0xb7,umask=301unc_m_rd_cas_rank7.bank4uncore memoryRD_CAS Access to Rank 7; Bank 4event=0xb7,umask=401unc_m_rd_cas_rank7.bank5uncore memoryRD_CAS Access to Rank 7; Bank 5event=0xb7,umask=501unc_m_rd_cas_rank7.bank6uncore memoryRD_CAS Access to Rank 7; Bank 6event=0xb7,umask=601unc_m_rd_cas_rank7.bank7uncore memoryRD_CAS Access to Rank 7; Bank 7event=0xb7,umask=701unc_m_rd_cas_rank7.bank8uncore memoryRD_CAS Access to Rank 7; Bank 8event=0xb7,umask=801unc_m_rd_cas_rank7.bank9uncore memoryRD_CAS Access to Rank 7; Bank 9event=0xb7,umask=901unc_m_rd_cas_rank7.bankg0uncore memoryRD_CAS Access to Rank 7; Bank Group 0 (Banks 0-3)event=0xb7,umask=0x1101unc_m_rd_cas_rank7.bankg1uncore memoryRD_CAS Access to Rank 7; Bank Group 1 (Banks 4-7)event=0xb7,umask=0x1201unc_m_rd_cas_rank7.bankg2uncore memoryRD_CAS Access to Rank 7; Bank Group 2 (Banks 8-11)event=0xb7,umask=0x1301unc_m_rd_cas_rank7.bankg3uncore memoryRD_CAS Access to Rank 7; Bank Group 3 (Banks 12-15)event=0xb7,umask=0x1401unc_m_rpq_cycles_fulluncore memoryRead Pending Queue Full Cyclesevent=0x1201Counts the number of cycles when the Read Pending Queue is full.  When the RPQ is full, the HA will not be able to issue any additional read requests into the iMC.  This count should be similar to the count in the HA which tracks the number of cycles that the HA has no RPQ credits, just somewhat smaller to account for the credit return overhead.  We generally do not expect to see RPQ become full except for potentially during Write Major Mode or while running with slow DRAM.  This event only tracks non-ISOC queue entriesunc_m_rpq_insertsuncore memoryRead Pending Queue Allocationsevent=0x1001Counts the number of read requests allocated into the Read Pending Queue (RPQ).  This queue is used to schedule reads out to the memory controller and to track the requests.  Requests allocate into the RPQ soon after they enter the memory controller, and need credits for an entry in this buffer before being sent from the CHA to the iMC.  The requests deallocate after the read CAS command has been issued to DRAM.  This event counts both Isochronous and non-Isochronous requests which were issued to the RPQunc_m_rpq_occupancyuncore memoryRead Pending Queue Occupancyevent=0x8001Counts the number of entries in the Read Pending Queue (RPQ) at each cycle.  This can then be used to calculate both the average occupancy of the queue (in conjunction with the number of cycles not empty) and the average latency in the queue (in conjunction with the number of allocations).  The RPQ is used to schedule reads out to the memory controller and to track the requests.  Requests allocate into the RPQ soon after they enter the memory controller, and need credits for an entry in this buffer before being sent from the CHA to the iMC. They deallocate from the RPQ after the CAS command has been issued to memory
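The unc_m_rpq_occupancy description above names two derived statistics: average occupancy while active (occupancy divided by cycles-not-empty) and average queue latency (occupancy divided by inserts). A sketch of both; note this excerpt shows a cycles-not-empty counter only for the WPQ (unc_m_wpq_cycles_ne), so the analogous RPQ reading is assumed here, and all values are placeholders:

```python
# Average RPQ depth and latency from the occupancy/insert events above.
rpq_occupancy = 96_000_000    # unc_m_rpq_occupancy (summed once per cycle)
rpq_inserts   = 12_000_000    # unc_m_rpq_inserts
cycles_ne     = 40_000_000    # cycles the RPQ was not empty (assumed reading)

avg_depth_active = rpq_occupancy / cycles_ne     # entries, while non-empty
avg_latency_clks = rpq_occupancy / rpq_inserts   # iMC clocks per request
print(f"avg RPQ depth while active: {avg_depth_active:.1f}")
print(f"avg RPQ latency: {avg_latency_clks:.1f} iMC clocks")
```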
unc_m_sb_accesses.fm_rd_cmpsuncore memoryScoreboard Accesses; Far Mem read completionsevent=0xd2,umask=0x4001unc_m_sb_accesses.fm_wr_cmpsuncore memoryScoreboard Accesses; Far Mem write completionsevent=0xd2,umask=0x8001unc_m_sb_accesses.nm_rd_cmpsuncore memoryScoreboard Accesses; Near Mem read completionsevent=0xd2,umask=0x1001unc_m_sb_accesses.nm_wr_cmpsuncore memoryScoreboard Accesses; Near Mem write completionsevent=0xd2,umask=0x2001unc_m_sb_accesses.rd_acceptsuncore memoryScoreboard Accesses; Read Acceptsevent=0xd2,umask=101unc_m_sb_accesses.rd_rejectsuncore memoryScoreboard Accesses; Read Rejectsevent=0xd2,umask=201unc_m_sb_accesses.wr_acceptsuncore memoryScoreboard Accesses; Write Acceptsevent=0xd2,umask=401unc_m_sb_accesses.wr_rejectsuncore memoryScoreboard Accesses; Write Rejectsevent=0xd2,umask=801unc_m_sb_canary.allocuncore memoryAllocevent=0xd9,umask=101unc_m_sb_canary.deallocuncore memoryDeallocevent=0xd9,umask=201unc_m_sb_canary.fmrd_starveduncore memoryFar Mem Read Starvedevent=0xd9,umask=0x4001unc_m_sb_canary.fmwr_starveduncore memoryFar Mem Write Starvedevent=0xd9,umask=0x8001unc_m_sb_canary.nmrd_starveduncore memoryNear Mem Read Starvedevent=0xd9,umask=0x1001unc_m_sb_canary.nmwr_starveduncore memoryNear Mem Write Starvedevent=0xd9,umask=0x2001unc_m_sb_canary.rejuncore memoryRejectevent=0xd9,umask=401unc_m_sb_canary.vlduncore memoryValidevent=0xd9,umask=801unc_m_sb_cycles_fulluncore memoryScoreboard Cycles Fullevent=0xd101unc_m_sb_cycles_neuncore memoryScoreboard Cycles Not-Emptyevent=0xd001unc_m_sb_inserts.block_rdsuncore memoryScoreboard Inserts; Block region readsevent=0xd6,umask=0x1001unc_m_sb_inserts.block_wrsuncore memoryScoreboard Inserts; Block region writesevent=0xd6,umask=0x2001unc_m_sb_inserts.deallocuncore memoryScoreboard Inserts; Dealloc all commands (for error flows)event=0xd6,umask=0x4001unc_m_sb_inserts.patroluncore memoryScoreboard Inserts; Patrol insertsevent=0xd6,umask=0x8001unc_m_sb_inserts.pmm_rdsuncore memoryScoreboard Inserts; Persistent Mem readsevent=0xd6,umask=401unc_m_sb_inserts.pmm_wrsuncore memoryScoreboard Inserts; Persistent Mem writesevent=0xd6,umask=801unc_m_sb_inserts.rdsuncore memoryScoreboard Inserts; Readsevent=0xd6,umask=101unc_m_sb_inserts.wrsuncore memoryScoreboard Inserts; Writesevent=0xd6,umask=201unc_m_sb_occupancy.block_rdsuncore memoryScoreboard Occupancy; Block region readsevent=0xd5,umask=0x2001unc_m_sb_occupancy.block_wrsuncore memoryScoreboard Occupancy; Block region writesevent=0xd5,umask=0x4001unc_m_sb_occupancy.patroluncore memoryScoreboard Occupancy; Patrolevent=0xd5,umask=0x8001unc_m_sb_occupancy.pmm_rdsuncore memoryScoreboard Occupancy; Persistent Mem readsevent=0xd5,umask=401unc_m_sb_occupancy.pmm_wrsuncore memoryScoreboard Occupancy; Persistent Mem writesevent=0xd5,umask=801unc_m_sb_occupancy.rdsuncore memoryScoreboard Occupancy; Readsevent=0xd5,umask=101unc_m_sb_occupancy.wrsuncore memoryScoreboard Occupancy; Writesevent=0xd5,umask=201unc_m_sb_reject.fm_addr_cnfltuncore memoryNumber of Scoreboard Requests Rejected; FM requests rejected due to full address conflictevent=0xd4,umask=201unc_m_sb_reject.nm_set_cnfltuncore memoryNumber of Scoreboard Requests Rejected; NM requests rejected due to set conflictevent=0xd4,umask=101
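A minimal way to sample a few of the iMC queue events in this table with the perf CLI, assuming a perf build whose event tables include these names (which is exactly what this file provides); `-x,` selects CSV output and `-a` is needed for system-wide uncore counting. The exact CSV field layout can vary between perf versions:

```python
# Collect a few iMC queue counters via `perf stat` and print them.
import subprocess

EVENTS = "unc_m_rpq_inserts,unc_m_rpq_occupancy,unc_m_clockticks"
cmd = ["perf", "stat", "-a", "-x", ",", "-e", EVENTS, "sleep", "1"]

# perf stat writes its counter lines to stderr
result = subprocess.run(cmd, capture_output=True, text=True)
for line in result.stderr.splitlines():
    if line.startswith("#"):
        continue                      # skip comment lines
    fields = line.split(",")
    if len(fields) > 3 and fields[0].strip():
        print(fields[2], fields[0])   # event name, counter value
```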
unc_m_sb_reject.patrol_set_cnfltuncore memoryNumber of Scoreboard Requests Rejected; Patrol requests rejected due to set conflictevent=0xd4,umask=401unc_m_sb_strv_alloc.fmrd_clruncore memoryFar Mem Read - Clearevent=0xd7,umask=0x2001unc_m_sb_strv_alloc.fmrd_setuncore memoryFar Mem Read - Setevent=0xd7,umask=201unc_m_sb_strv_alloc.fmwr_clruncore memoryFar Mem Write - Clearevent=0xd7,umask=0x8001unc_m_sb_strv_alloc.fmwr_setuncore memoryFar Mem Write - Setevent=0xd7,umask=801unc_m_sb_strv_alloc.nmrd_clruncore memoryNear Mem Read - Clearevent=0xd7,umask=0x1001unc_m_sb_strv_alloc.nmrd_setuncore memoryNear Mem Read - Setevent=0xd7,umask=101unc_m_sb_strv_alloc.nmwr_clruncore memoryNear Mem Write - Clearevent=0xd7,umask=0x4001unc_m_sb_strv_alloc.nmwr_setuncore memoryNear Mem Write - Setevent=0xd7,umask=401unc_m_sb_strv_occ.fmrduncore memoryFar Mem Readevent=0xd8,umask=201unc_m_sb_strv_occ.fmwruncore memoryFar Mem Writeevent=0xd8,umask=801unc_m_sb_strv_occ.nmrduncore memoryNear Mem Readevent=0xd8,umask=101unc_m_sb_strv_occ.nmwruncore memoryNear Mem Writeevent=0xd8,umask=401unc_m_sb_tagged.ddr4_cmpuncore memoryUNC_M_SB_TAGGED.DDR4_CMPevent=0xdd,umask=801unc_m_sb_tagged.newuncore memoryUNC_M_SB_TAGGED.NEWevent=0xdd,umask=101unc_m_sb_tagged.occuncore memoryUNC_M_SB_TAGGED.OCCevent=0xdd,umask=0x8001unc_m_sb_tagged.pmm0_cmpuncore memoryUNC_M_SB_TAGGED.PMM0_CMPevent=0xdd,umask=0x1001unc_m_sb_tagged.pmm1_cmpuncore memoryUNC_M_SB_TAGGED.PMM1_CMPevent=0xdd,umask=0x2001unc_m_sb_tagged.pmm2_cmpuncore memoryUNC_M_SB_TAGGED.PMM2_CMPevent=0xdd,umask=0x4001unc_m_sb_tagged.rd_hituncore memoryUNC_M_SB_TAGGED.RD_HITevent=0xdd,umask=201unc_m_sb_tagged.rd_missuncore memoryUNC_M_SB_TAGGED.RD_MISSevent=0xdd,umask=401unc_m_tagchk.hituncore memoryAll hits to Near Memory (DRAM cache) in Memory Modeevent=0xd3,umask=101Tag Check; Hitunc_m_tagchk.miss_cleanuncore memoryAll Clean line misses to Near Memory (DRAM cache) in Memory Modeevent=0xd3,umask=201Tag Check; Cleanunc_m_tagchk.miss_dirtyuncore memoryAll dirty line misses to Near Memory (DRAM cache) in Memory Modeevent=0xd3,umask=401Tag Check; Dirtyunc_m_wpq_cycles_fulluncore memoryWrite Pending Queue Full Cyclesevent=0x2201Counts the number of cycles when the Write Pending Queue is full.  When the WPQ is full, the HA will not be able to issue any additional write requests into the iMC.  This count should be similar to the count in the CHA which tracks the number of cycles that the CHA has no WPQ credits, just somewhat smaller to account for the credit return overheadunc_m_wpq_cycles_neuncore memoryWrite Pending Queue Not Emptyevent=0x2101Counts the number of cycles that the Write Pending Queue is not empty.  This can then be used to calculate the average queue occupancy (in conjunction with the WPQ Occupancy Accumulation count).  The WPQ is used to schedule writes out to the memory controller and to track the writes.  Requests allocate into the WPQ soon after they enter the memory controller, and need credits for an entry in this buffer before being sent from the CHA to the iMC.  They deallocate after being issued to DRAM.  Write requests themselves are able to complete (from the perspective of the rest of the system) as soon as they have posted to the iMC.  This is not to be confused with actually performing the write to DRAM.  Therefore, the average latency for this queue is actually not useful for deconstructing intermediate write latencies
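The three unc_m_tagchk events above partition every Memory Mode tag check into a hit, a clean miss, or a dirty miss, so the near-memory (DRAM cache) hit rate is simple arithmetic over the three counts. A sketch with placeholder values:

```python
# Near-memory (DRAM cache) hit rate in Memory Mode, from the unc_m_tagchk
# events above. Sample values are placeholders for counts read over some
# interval.
tagchk_hit        = 8_400_000   # unc_m_tagchk.hit
tagchk_miss_clean =   450_000   # unc_m_tagchk.miss_clean
tagchk_miss_dirty =   150_000   # unc_m_tagchk.miss_dirty

total = tagchk_hit + tagchk_miss_clean + tagchk_miss_dirty
print(f"NM hit rate: {tagchk_hit / total:.1%}")
print(f"dirty-miss share: {tagchk_miss_dirty / total:.1%}")
```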
unc_m_wpq_insertsuncore memoryWrite Pending Queue Allocationsevent=0x2001Counts the number of write requests allocated into the Write Pending Queue (WPQ).  The WPQ is used to schedule writes out to the memory controller and to track the requests.  Requests allocate into the WPQ soon after they enter the memory controller, and need credits for an entry in this buffer before being sent from the CHA to the iMC (Memory Controller).  The write requests deallocate after being issued to DRAM.  Write requests themselves are able to complete (from the perspective of the rest of the system) as soon as they have 'posted' to the iMCunc_m_wpq_occupancyuncore memoryWrite Pending Queue Occupancyevent=0x8101Counts the number of entries in the Write Pending Queue (WPQ) at each cycle.  This can then be used to calculate both the average queue occupancy (in conjunction with the number of cycles not empty) and the average latency (in conjunction with the number of allocations).  The WPQ is used to schedule writes out to the memory controller and to track the requests.  Requests allocate into the WPQ soon after they enter the memory controller, and need credits for an entry in this buffer before being sent from the CHA to the iMC (memory controller).  They deallocate after being issued to DRAM.  Write requests themselves are able to complete (from the perspective of the rest of the system) as soon as they have 'posted' to the iMC.  This is not to be confused with actually performing the write to DRAM.  Therefore, the average latency for this queue is actually not useful for deconstructing intermediate write latencies.  So, we provide filtering based on whether the request has posted or not.  By using the 'not posted' filter, we can track how long writes spent in the iMC before completions were sent to the HA.  The 'posted' filter, on the other hand, provides information about how much queueing is actually happening in the iMC for writes before they are actually issued to memory.  
High average occupancies will generally coincide with high write major mode counts.

[uncore memory] WR_CAS accesses, broken out by rank, bank and bank group. All of these entries follow one pattern: the event code runs from 0xb8 (rank 0) through 0xbf (rank 7), and the umask selects the bank target (bank 0 entries omit the umask, i.e. umask=0):

  unc_m_wr_cas_rankN.allbanks   WR_CAS Access to Rank N; All Banks                   event=0xb8+N,umask=0x10
  unc_m_wr_cas_rankN.bankB      WR_CAS Access to Rank N; Bank B (B = 0..15)          event=0xb8+N,umask=B
  unc_m_wr_cas_rankN.bankg0     WR_CAS Access to Rank N; Bank Group 0 (Banks 0-3)    event=0xb8+N,umask=0x11
  unc_m_wr_cas_rankN.bankg1     WR_CAS Access to Rank N; Bank Group 1 (Banks 4-7)    event=0xb8+N,umask=0x12
  unc_m_wr_cas_rankN.bankg2     WR_CAS Access to Rank N; Bank Group 2 (Banks 8-11)   event=0xb8+N,umask=0x13
  unc_m_wr_cas_rankN.bankg3     WR_CAS Access to Rank N; Bank Group 3 (Banks 12-15)  event=0xb8+N,umask=0x14

[uncore power]

  unc_p_core_transition_cycles   UNC_P_CORE_TRANSITION_CYCLES   event=0x60
  unc_p_demotions                UNC_P_DEMOTIONS                event=0x30
  unc_p_fivr_ps_ps0_cycles       Phase Shed 0 Cycles            event=0x75   Cycles spent in phase-shedding power state 0
  unc_p_fivr_ps_ps1_cycles       Phase Shed 1 Cycles            event=0x76   Cycles spent in phase-shedding power state 1
  unc_p_fivr_ps_ps2_cycles       Phase Shed 2 Cycles            event=0x77   Cycles spent in phase-shedding power state 2
  unc_p_fivr_ps_ps3_cycles       Phase Shed 3 Cycles            event=0x78   Cycles spent in phase-shedding power state 3
  unc_p_mcp_prochot_cycles       UNC_P_MCP_PROCHOT_CYCLES       event=6
  unc_p_pmax_throttled_cycles    UNC_P_PMAX_THROTTLED_CYCLES    event=7
  unc_p_power_state_occupancy.cores_c0   Number of cores in C-State; C0 and C1   event=0x80,umask=0x40
  unc_p_power_state_occupancy.cores_c3   Number of cores in C-State; C3          event=0x80,umask=0x80
  unc_p_power_state_occupancy.cores_c6   Number of cores in C-State; C6 and C7   event=0x80,umask=0xc0

The three power_state_occupancy events share one description: each is an occupancy event that tracks the number of cores in the chosen C-state. It can be used by itself to get the average number of cores in that C-state, with thresholding to generate histograms, or with other PCU events and occupancy triggering to capture other details.
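The WR_CAS encodings above are raw uncore (memory controller) selectors, not core-PMU events, so counting one means opening a perf event against the IMC PMU rather than against PERF_TYPE_RAW. A minimal sketch follows; the PMU name "uncore_imc_0" and the usual config layout (event in bits 0-7, umask in bits 8-15) are assumptions to verify under /sys/bus/event_source/devices/*/format on the target machine, and uncore access normally requires root or a permissive perf_event_paranoid.

/* Hedged sketch: count unc_m_wr_cas_rank0.allbanks (event=0xb8, umask=0x10)
 * on an uncore IMC PMU.  Assumed: PMU exposed as uncore_imc_0 with the
 * event/umask config layout described above.  Uncore counters are
 * socket-wide, so pid = -1 and an explicit CPU are used. */
#include <linux/perf_event.h>
#include <sys/syscall.h>
#include <sys/ioctl.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static long perf_event_open(struct perf_event_attr *attr, pid_t pid,
                            int cpu, int group_fd, unsigned long flags)
{
    return syscall(SYS_perf_event_open, attr, pid, cpu, group_fd, flags);
}

int main(void)
{
    int type = -1;
    FILE *f = fopen("/sys/bus/event_source/devices/uncore_imc_0/type", "r");
    if (f) { (void)fscanf(f, "%d", &type); fclose(f); }
    if (type < 0) { fprintf(stderr, "no uncore_imc_0 PMU here?\n"); return 1; }

    struct perf_event_attr attr;
    memset(&attr, 0, sizeof(attr));
    attr.size = sizeof(attr);
    attr.type = (uint32_t)type;            /* dynamic PMU type from sysfs */
    attr.config = (0x10u << 8) | 0xb8u;    /* umask=0x10, event=0xb8 */
    attr.disabled = 1;

    int fd = (int)perf_event_open(&attr, -1, 0, -1, 0);
    if (fd < 0) { perror("perf_event_open"); return 1; }

    ioctl(fd, PERF_EVENT_IOC_RESET, 0);
    ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
    sleep(1);                              /* measure a one-second window */
    ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);

    uint64_t count = 0;
    if (read(fd, &count, sizeof(count)) == sizeof(count))
        printf("WR_CAS rank 0, all banks: %llu\n", (unsigned long long)count);
    close(fd);
    return 0;
}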
  unc_p_vr_hot_cycles            VR Hot                         event=0x42

[virtual memory] The DTLB miss events come in a load flavor (dtlb_load_misses, event=8) and a store flavor (dtlb_store_misses, event=0x49) sharing one umask layout:

  miss_causes_a_walk    umask=1            Demand loads/stores that caused a page walk of any page size (4K/2M/4M/1G). Implies a miss in all TLB levels, but the walk need not have completed. (period=100003)
  stlb_hit              umask=0x20         Loads/stores that miss the DTLB (Data TLB) and hit the STLB (second-level TLB). (load period=2000003, store period=100003)
  walk_active           umask=0x10,cmask=1 Cycles when at least one PMH (Page Miss Handler) is busy with a page walk for a load/store. EPT page walk duration is excluded on Skylake. (period=100003)
  walk_completed        umask=0xe          Completed page walks, all page sizes, caused by demand loads/stores; implies a miss in the DTLB and further TLB levels. The walk can end with or without a fault. (period=100003)
  walk_completed_1g     umask=8            As above, 1G pages only. (load period=2000003, store period=100003)
  walk_completed_2m_4m  umask=4            As above, 2M/4M pages only. (same periods)
  walk_completed_4k     umask=2            As above, 4K pages only. (same periods)
  walk_pending          umask=0x10         Counts 1 per cycle for each PMH busy with a page walk for a load/store. EPT page walk duration is excluded on Skylake. (period=2000003)

  ept.walk_pending   event=0x4f,period=2000003,umask=0x10
      Counts 1 per cycle for each PMH (Page Miss Handler) busy with an EPT (Extended Page Table) walk for any request type.
  itlb.itlb_flush    event=0xae,period=100007,umask=1
      Flushes of the Instruction TLB (ITLB) pages, big and small (4K/2M/4M); includes both TLB Flush (covering all sets) and TLB Set Clear (set-specific).

The ITLB miss events (itlb_misses, event=0x85, period=100003) mirror the DTLB layout, for code fetches:

  miss_causes_a_walk    umask=1            Page walks of any size caused by a code fetch; implies an ITLB miss, but the walk need not have completed.
  stlb_hit              umask=0x20         Instruction fetch requests that miss the ITLB and hit the STLB.
  walk_active           umask=0x10,cmask=1 Cycles when at least one PMH is busy with a page walk for a code (instruction fetch) request. EPT page walk duration is excluded on Skylake.
  walk_completed        umask=0xe          Completed page walks, all sizes, caused by a code fetch; implies a miss in the ITLB and further TLB levels. The walk can end with or without a fault.
  walk_completed_1g     umask=8            As above, 1G pages only; _2m_4m umask=4 and _4k umask=2 cover the other sizes.
  walk_pending          umask=0x10         Counts 1 per cycle for each PMH busy with a walk for an instruction fetch. EPT page walk duration is excluded on Skylake.

  tlb_flush.dtlb_thread  event=0xbd,period=100007,umask=1     DTLB flush attempts of the thread-specific entries.
  tlb_flush.stlb_any     event=0xbd,period=100007,umask=0x20  Any STLB flush attempts (entire, VPID, PCID, InvPage, CR3 write, etc.).
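Core events like the TLB counters above use the classic event/umask encoding, which maps onto PERF_TYPE_RAW as config = (umask << 8) | event on Intel core PMUs. A minimal self-contained sketch, self-counting dtlb_load_misses.miss_causes_a_walk over an illustrative page-touching loop (the buffer size and stride are arbitrary choices, not part of the event definition):

/* Sketch: self-profile dtlb_load_misses.miss_causes_a_walk
 * (event=0x08, umask=0x01) via PERF_TYPE_RAW. */
#include <linux/perf_event.h>
#include <sys/syscall.h>
#include <sys/ioctl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

static long perf_event_open(struct perf_event_attr *attr, pid_t pid,
                            int cpu, int group_fd, unsigned long flags)
{
    return syscall(SYS_perf_event_open, attr, pid, cpu, group_fd, flags);
}

int main(void)
{
    struct perf_event_attr attr;
    memset(&attr, 0, sizeof(attr));
    attr.size = sizeof(attr);
    attr.type = PERF_TYPE_RAW;
    attr.config = (0x01u << 8) | 0x08u;   /* umask=0x01, event=0x08 */
    attr.disabled = 1;
    attr.exclude_kernel = 1;

    int fd = (int)perf_event_open(&attr, 0, -1, -1, 0);
    if (fd < 0) { perror("perf_event_open"); return 1; }

    /* Touch one byte per 4K page across 64 MiB to create TLB pressure. */
    size_t len = 64u << 20;
    volatile char *buf = malloc(len);
    if (!buf) return 1;

    ioctl(fd, PERF_EVENT_IOC_RESET, 0);
    ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
    for (size_t i = 0; i < len; i += 4096) buf[i]++;
    ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);

    uint64_t count = 0;
    if (read(fd, &count, sizeof(count)) == sizeof(count))
        printf("loads causing a page walk: %llu\n", (unsigned long long)count);
    close(fd);
    return 0;
}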
[cache]

  core_reject_l2q.any   event=0x31,period=200003
      Core requests (demand and L1 prefetchers) rejected by the L2 queue (L2Q) due to a full or nearly full condition, which likely indicates back pressure from the L2Q. Also counts requests that would have gone directly to the External Queue (XQ) but were rejected due to a full or nearly full condition, indicating back pressure from the IDI link. The L2Q may also reject transactions from a core to ensure fairness between cores, or to delay a core's dirty eviction when the address conflicts with incoming external snoops. L2 prefetcher requests that are dropped are not counted. Counts on a per core basis.
  dl1.dirty_eviction    event=0x51,period=200003,umask=1
      L1D (dirty) cacheline evictions caused by load misses, stores and prefetches. Does not count evictions or dirty writebacks caused by snoops, and does not count a replacement unless a dirty line was written back.
  l2_reject_xq.any      event=0x30,period=200003
      Demand and prefetch transactions rejected by the External Queue (XQ) due to a full or near-full condition, which likely indicates back pressure from the IDI link. The XQ may reject transactions from the L2Q (non-cacheable requests), BBL (L2 misses) and WOB (L2 write-back victims).
  l2_request.all        event=0x24,period=200003
      Total L2 cache accesses: hits, misses and rejects, for front-door CRd/DRd/RFO/ItoM/L2-prefetch requests only. This and the three variants below count on a per core basis.
  l2_request.hit        event=0x24,period=200003,umask=2   Front-door accesses that hit (excludes rejects and recycles).
  l2_request.miss       event=0x24,period=200003,umask=1   Front-door accesses that miss (excludes rejects and recycles).
  l2_request.rejects    event=0x24,period=200003,umask=4   Accesses that miss the L2 and get a BBL reject, short and long (includes those counted in l2_reject_xq.any).
  longest_lat_cache.miss       event=0x2e,period=200003,umask=0x41
  longest_lat_cache.reference  event=0x2e,period=200003,umask=0x4f
      Cacheable memory requests that miss / access the Last Level Cache (LLC): demand loads, reads for ownership (RFO), instruction fetches and L1 HW prefetches. If the platform has an L3 cache the LLC is the L3, otherwise it is the L2. Counts on a per core basis.
  mem_bound_stalls.store_buffer_full  event=0x34,period=200003,umask=0x40
      Cycles the core is stalled due to a full store buffer.

Retired load/store uops (precise events; each supports address when precise):

  mem_load_uops_retired.hitm     event=0xd1,period=200003,umask=0x20   Retired load uops that hit the L3 cache where a snoop was required and modified data was forwarded from another core or module.
  mem_load_uops_retired.l1_hit   event=0xd1,period=200003,umask=1      Retired load uops that hit in the L1 data cache.
  mem_load_uops_retired.l1_miss  event=0xd1,period=200003,umask=8      Retired load uops that miss in the L1 data cache.
  mem_load_uops_retired.l2_miss  event=0xd1,period=200003,umask=0x10   Retired load uops that miss in the L2 cache.
  mem_uops_retired.all           event=0xd0,period=200003,umask=0x83   All retired memory uops; a single uop that performs both a load and a store (e.g. ADD [mem], CONST) counts once, not twice.
  mem_uops_retired.split         event=0xd0,period=200003,umask=0x43   Retired memory uops that were splits.
  mem_uops_retired.split_stores  event=0xd0,period=200003,umask=0x42   Retired split store uops.

OCR (offcore response) L3_HIT events. Every entry uses event=0xb7,period=100003,umask=1 with offcore_rsp = (request bits) | (response bits). Response bits and what they add to "supplied by the L3 cache":

  .l3_hit                     0x1F803C0000   supplied by the L3 cache (any snoop outcome)
  .l3_hit.snoop_hitm          0x10003C0000   a snoop was sent, the snoop hit, and modified data was forwarded
  .l3_hit.snoop_hit_with_fwd  0x08003C0000   snoop sent and hit, and non-modified data was forwarded
  .l3_hit.snoop_hit_no_fwd    0x04003C0000   snoop sent and hit, but no data was forwarded
  .l3_hit.snoop_miss          0x02003C0000   snoop sent but the snoop missed
  .l3_hit.snoop_not_needed    0x01003C0000   no snoop was needed to satisfy the request

Request classes and their bits:

  ocr.all_code_rd              0x44             all code reads (all six variants)
  ocr.demand_code_rd           0x04             demand instruction fetches and L1 instruction cache prefetches (all six)
  ocr.demand_data_and_l1pf_rd  0x01             cacheable demand data reads, L1 data cache HW prefetches and SW prefetches except PREFETCHW (all six)
  ocr.demand_data_rd           0x01             deprecated; refer to the ocr.demand_data_and_l1pf_rd equivalents (all six)
  ocr.demand_rfo               0x02             demand reads for ownership (RFO) and SW prefetches for exclusive ownership (PREFETCHW) (all variants except .snoop_hitm)
  ocr.hwpf_l2_code_rd          0x40             L2 HW prefetch code reads, written to the L2 only (all six)
  ocr.hwpf_l2_data_rd          0x10             L2 HW prefetch data reads, written to the L2 only (all six)
  ocr.hwpf_l2_rfo              0x20             L2 HW prefetch RFOs, written to the L2 only (all six)
  ocr.hwpf_l1d_and_swpf        0x400            L1 data cache HW and SW prefetches except PREFETCHW and PFRFO (.snoop_hitm only)
  ocr.reads_to_core            0x477            all data read, code read and RFO requests, demand and prefetch, to the core caches L1 or L2 (all six)
  ocr.uc_rd                    0x100000000000   uncached memory reads (all six)

Plain .l3_hit only:

  ocr.corewb_m              0x3000000000000   modified writebacks from L1 and L2
  ocr.l1wb_m                0x1000000000000   modified writebacks from L1 that miss the L2
  ocr.l2wb_m                0x2000000000000   modified writebacks from L2 that miss the L3
  ocr.full_streaming_wr     0x800000000000    streaming stores modifying a full 64-byte cacheline
  ocr.partial_streaming_wr  0x400000000000    streaming stores modifying only part of a 64-byte cacheline
  ocr.streaming_wr          0x800             streaming stores
  ocr.uc_wr                 0x200000000000    uncached memory writes
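OCR events are two-part: the selector (event=0xb7, umask=1) goes in the ordinary config word, while the request/response match mask is passed through the PMU's offcore_rsp format field, which perf maps onto config1 on these parts (an assumption worth checking under /sys/bus/event_source/devices/cpu/format/offcore_rsp). A short sketch for ocr.reads_to_core.l3_hit, composing request bits 0x477 with the L3_HIT response mask 0x1F803C0000 from the table above:

/* Hedged sketch: open ocr.reads_to_core.l3_hit.  The selector stays in
 * attr.config; the offcore_rsp match value goes in attr.config1
 * (assumed mapping, per the cpu PMU's offcore_rsp format attribute). */
#include <linux/perf_event.h>
#include <sys/syscall.h>
#include <string.h>
#include <stdio.h>
#include <unistd.h>

static long perf_event_open(struct perf_event_attr *attr, pid_t pid,
                            int cpu, int group_fd, unsigned long flags)
{
    return syscall(SYS_perf_event_open, attr, pid, cpu, group_fd, flags);
}

int main(void)
{
    struct perf_event_attr attr;
    memset(&attr, 0, sizeof(attr));
    attr.size = sizeof(attr);
    attr.type = PERF_TYPE_RAW;
    attr.config  = (0x01u << 8) | 0xb7u;   /* umask=1, event=0xb7 */
    attr.config1 = 0x1F803C0477ULL;        /* 0x477 reads_to_core | 0x1F803C0000 l3_hit */
    attr.disabled = 1;

    int fd = (int)perf_event_open(&attr, 0, -1, -1, 0);
    if (fd < 0) { perror("perf_event_open"); return 1; }
    printf("opened ocr.reads_to_core.l3_hit, fd=%d\n", fd);
    close(fd);
    return 0;
}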
[floating point]

  cycles_div_busy.fpdiv  event=0xcd,period=200003,umask=2
      Cycles the floating point divider is busy. Does not imply a stall waiting for the divider.

[frontend]

  baclears.any       event=0xe6,period=200003,umask=1
      Total BACLEARS, which occur when the Branch Target Buffer (BTB) prediction, or lack thereof, is corrected by a later branch predictor in the frontend. Includes all branch types: conditional and unconditional jumps, returns, and indirect branches.
  baclears.cond      event=0xe6,period=200003,umask=0x10   BACLEARS due to a conditional jump.
  baclears.indirect  event=0xe6,period=200003,umask=2      BACLEARS due to an indirect branch.
  baclears.return    event=0xe6,period=200003,umask=8      BACLEARS due to a return branch.
  baclears.uncond    event=0xe6,period=200003,umask=4      BACLEARS due to a direct, unconditional jump.
  decode_restriction.predecode_wrong  event=0xe9,period=200003,umask=1
      Times a decode restriction reduces decode throughput due to a wrong instruction-length prediction.
  icache.hit         event=0x80,period=200003,umask=1
      Requests that hit in the instruction cache. Only new cache-line accesses are counted, so multiple back-to-back fetches to the exact same line and byte chunk count as one. Specifically, the event counts when sequential code crosses a cache-line boundary, or when a branch target moves to a new line or to a non-sequential byte chunk of the same line.

[memory]

  misalign_mem_ref.load_page_split   event=0x13,period=200003,umask=2   Misaligned load uops that are 4K page splits (precise event).
  misalign_mem_ref.store_page_split  event=0x13,period=200003,umask=4   Misaligned store uops that are 4K page splits (precise event).

OCR L3_MISS events ("... that were NOT supplied by the L3 cache"). Same scheme as the L3_HIT family: event=0xb7,period=100003,umask=1, offcore_rsp = request bits | 0x2184000000 (the L3_MISS response mask). Each event also has an .l3_miss_local twin with an identical encoding and description. Request classes (bits as before): all_code_rd (0x44), corewb_m (0x3000000000000), demand_code_rd (0x04), demand_data_and_l1pf_rd (0x01), demand_data_rd (0x01, deprecated in favor of demand_data_and_l1pf_rd), demand_rfo (0x02), full_streaming_wr (0x800000000000), hwpf_l2_code_rd (0x40), hwpf_l2_data_rd (0x10), hwpf_l2_rfo (0x20), l1wb_m (0x1000000000000), l2wb_m (0x2000000000000), other (0x8000, miscellaneous requests such as I/O accesses), partial_streaming_wr (0x400000000000), prefetches (0x470, all hardware and software prefetches; no _local twin), reads_to_core (0x477), streaming_wr (0x800), uc_rd (0x100000000000), uc_wr (0x200000000000).

[other]

  bus_lock.all                 event=0x63,edge=1,period=200003     Deprecated; refer to bus_lock.self_locks.
  bus_lock.block_cycles        event=0x63,period=200003,umask=2
      Unhalted cycles a core is blocked due to an accepted lock issued by other cores. Counts on a per core basis.
  bus_lock.cycles_other_block  event=0x63,period=200003,umask=2    Deprecated; refer to bus_lock.block_cycles.
  bus_lock.cycles_self_block   event=0x63,period=200003,umask=1    Deprecated; refer to bus_lock.lock_cycles.
  bus_lock.lock_cycles         event=0x63,period=200003,umask=1
      Unhalted cycles a core is blocked due to an accepted lock it issued. Counts on a per core basis.
  bus_lock.self_locks          event=0x63,edge=1,period=200003
      Bus locks a core issued itself (e.g. a lock to UC memory or a split lock); does not include cache locks. Counts on a per core basis.
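A split lock of the kind bus_lock.self_locks counts is easy to provoke on purpose: any locked read-modify-write whose operand straddles a 64-byte cache line. The sketch below is purely illustrative (the 64-byte line size and the offset are assumptions, the cast to a misaligned pointer is deliberate, and kernels with split_lock_detect enabled may warn about, throttle, or kill the process):

/* Illustration only: generate split locks that bus_lock.self_locks
 * would count.  A 4-byte atomic at offset 62 of a 64-byte-aligned
 * buffer straddles the boundary between the first two cache lines. */
#include <stdalign.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    alignas(64) unsigned char buf[128] = {0};
    uint32_t *p = (uint32_t *)(buf + 62);   /* intentionally misaligned */

    for (int i = 0; i < 1000; i++)
        __atomic_fetch_add(p, 1, __ATOMIC_SEQ_CST);  /* locked, split */

    printf("value: %u\n", *p);
    return 0;
}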
lock to UC or Split Lock) and does not include cache locksevent=0x63,edge=1,period=20000300Counts the number of bus locks a core issued its self (e.g. lock to UC or Split Lock) and does not include cache locks. Counts on a per core basisc0_stalls.load_dram_hitotherThis event is deprecated. Refer to new event MEM_BOUND_STALLS.LOAD_DRAM_HITevent=0x34,period=200003,umask=410c0_stalls.load_l2_hitotherThis event is deprecated. Refer to new event MEM_BOUND_STALLS.LOAD_L2_HITevent=0x34,period=200003,umask=110c0_stalls.load_llc_hitotherThis event is deprecated. Refer to new event MEM_BOUND_STALLS.LOAD_LLC_HITevent=0x34,period=200003,umask=210hw_interrupts.maskedotherCounts the number of core cycles during which interrupts are masked (disabled)event=0xcb,period=200003,umask=200Counts the number of core cycles during which interrupts are masked (disabled). Increments by 1 each core cycle that EFLAGS.IF is 0, regardless of whether interrupts are pending or nothw_interrupts.pending_and_maskedotherCounts the number of core cycles during which there are pending interrupts while interrupts are masked (disabled)event=0xcb,period=200003,umask=400Counts the number of core cycles during which there are pending interrupts while interrupts are masked (disabled). Increments by 1 each core cycle that both EFLAGS.IF is 0 and an INTR is pending (which means the APIC is telling the ROB to cause an INTR). This event does not increment if EFLAGS.IF is 0 but all interrupt in the APICs Interrupt Request Register (IRR) are inhibited by the PPR (thus either by ISRV or TPR)  because in these cases the interrupts would be held up in the APIC and would not be pended to the ROB. This event does count when an interrupt is only inhibited by MOV/POP SS state machines or the STI state machine. These extra inhibits only last for a single instructions and would not be importanthw_interrupts.receivedotherCounts the number of hardware interrupts received by the processorevent=0xcb,period=203,umask=100ocr.all_code_rd.any_responseotherCounts all code reads that have any type of responseevent=0xb7,period=100003,umask=1,offcore_rsp=0x1004400ocr.all_code_rd.dramotherCounts all code reads that were supplied by DRAMevent=0xb7,period=100003,umask=1,offcore_rsp=0x18400004400ocr.all_code_rd.local_dramotherCounts all code reads that were supplied by DRAMevent=0xb7,period=100003,umask=1,offcore_rsp=0x18400004400ocr.all_code_rd.outstandingotherCounts all code reads that have an outstanding request. Returns the number of cycles until the response is received (i.e. XQ to XQ latency)event=0xb7,period=100003,umask=1,offcore_rsp=0x800000000000004400ocr.corewb_m.any_responseotherCounts modified writebacks from L1 cache and L2 cache that have any type of responseevent=0xb7,period=100003,umask=1,offcore_rsp=0x300000001000000ocr.corewb_m.outstandingotherCounts modified writebacks from L1 cache and L2 cache that have an outstanding request. Returns the number of cycles until the response is received (i.e. 
XQ to XQ latency)event=0xb7,period=100003,umask=1,offcore_rsp=0x800300000000000000ocr.demand_code_rd.any_responseotherCounts demand instruction fetches and L1 instruction cache prefetches that have any type of responseevent=0xb7,period=100003,umask=1,offcore_rsp=0x1000400ocr.demand_code_rd.dramotherCounts demand instruction fetches and L1 instruction cache prefetches that were supplied by DRAMevent=0xb7,period=100003,umask=1,offcore_rsp=0x18400000400ocr.demand_code_rd.local_dramotherCounts demand instruction fetches and L1 instruction cache prefetches that were supplied by DRAMevent=0xb7,period=100003,umask=1,offcore_rsp=0x18400000400ocr.demand_data_and_l1pf_rd.any_responseotherCounts cacheable demand data reads, L1 data cache hardware prefetches and software prefetches (except PREFETCHW) that have any type of responseevent=0xb7,period=100003,umask=1,offcore_rsp=0x1000100ocr.demand_data_and_l1pf_rd.dramotherCounts cacheable demand data reads, L1 data cache hardware prefetches and software prefetches (except PREFETCHW) that were supplied by DRAMevent=0xb7,period=100003,umask=1,offcore_rsp=0x18400000100ocr.demand_data_and_l1pf_rd.local_dramotherCounts cacheable demand data reads, L1 data cache hardware prefetches and software prefetches (except PREFETCHW) that were supplied by DRAMevent=0xb7,period=100003,umask=1,offcore_rsp=0x18400000100ocr.demand_data_and_l1pf_rd.outstandingotherCounts cacheable demand data reads, L1 data cache hardware prefetches and software prefetches (except PREFETCHW) that have an outstanding request. Returns the number of cycles until the response is received (i.e. XQ to XQ latency)event=0xb7,period=100003,umask=1,offcore_rsp=0x800000000000000100ocr.demand_data_rd.any_responseotherThis event is deprecated. Refer to new event OCR.DEMAND_DATA_AND_L1PF_RD.ANY_RESPONSEevent=0xb7,period=100003,umask=1,offcore_rsp=0x1000110ocr.demand_data_rd.dramotherThis event is deprecated. Refer to new event OCR.DEMAND_DATA_AND_L1PF_RD.DRAMevent=0xb7,period=100003,umask=1,offcore_rsp=0x18400000110ocr.demand_data_rd.local_dramotherThis event is deprecated. Refer to new event OCR.DEMAND_DATA_AND_L1PF_RD.LOCAL_DRAMevent=0xb7,period=100003,umask=1,offcore_rsp=0x18400000110ocr.demand_data_rd.outstandingotherThis event is deprecated. Refer to new event OCR.DEMAND_DATA_AND_L1PF_RD.OUTSTANDINGevent=0xb7,period=100003,umask=1,offcore_rsp=0x800000000000000110ocr.demand_rfo.dramotherCounts demand reads for ownership (RFO) and software prefetches for exclusive ownership (PREFETCHW) that were supplied by DRAMevent=0xb7,period=100003,umask=1,offcore_rsp=0x18400000200ocr.demand_rfo.local_dramotherCounts demand reads for ownership (RFO) and software prefetches for exclusive ownership (PREFETCHW) that were supplied by DRAMevent=0xb7,period=100003,umask=1,offcore_rsp=0x18400000200ocr.demand_rfo.outstandingotherCounts demand reads for ownership (RFO) and software prefetches for exclusive ownership (PREFETCHW) that have an outstanding request. Returns the number of cycles until the response is received (i.e. 
ocr.full_streaming_wr.any_response [other]: Counts streaming stores which modify a full 64 byte cacheline that have any type of response. (event=0xb7,period=100003,umask=1,offcore_rsp=0x800000010000)
ocr.hwpf_l1d_and_swpf.any_response [other]: Counts L1 data cache hardware prefetches and software prefetches (except PREFETCHW and PFRFO) that have any type of response. (event=0xb7,period=100003,umask=1,offcore_rsp=0x10400)
ocr.hwpf_l2_code_rd.any_response [other]: Counts L2 cache hardware prefetch code reads (written to the L2 cache only) that have any type of response. (event=0xb7,period=100003,umask=1,offcore_rsp=0x10040)
ocr.hwpf_l2_code_rd.dram [other]: Counts L2 cache hardware prefetch code reads (written to the L2 cache only) that were supplied by DRAM. (event=0xb7,period=100003,umask=1,offcore_rsp=0x184000040)
ocr.hwpf_l2_code_rd.local_dram [other]: Counts L2 cache hardware prefetch code reads (written to the L2 cache only) that were supplied by DRAM. (event=0xb7,period=100003,umask=1,offcore_rsp=0x184000040)
ocr.hwpf_l2_code_rd.outstanding [other]: Counts L2 cache hardware prefetch code reads (written to the L2 cache only) that have an outstanding request; returns the number of cycles until the response is received (i.e. XQ to XQ latency). (event=0xb7,period=100003,umask=1,offcore_rsp=0x8000000000000040)
ocr.hwpf_l2_data_rd.any_response [other]: Counts L2 cache hardware prefetch data reads (written to the L2 cache only) that have any type of response. (event=0xb7,period=100003,umask=1,offcore_rsp=0x10010)
ocr.hwpf_l2_data_rd.dram [other]: Counts L2 cache hardware prefetch data reads (written to the L2 cache only) that were supplied by DRAM. (event=0xb7,period=100003,umask=1,offcore_rsp=0x184000010)
ocr.hwpf_l2_data_rd.local_dram [other]: Counts L2 cache hardware prefetch data reads (written to the L2 cache only) that were supplied by DRAM. (event=0xb7,period=100003,umask=1,offcore_rsp=0x184000010)
ocr.hwpf_l2_rfo.any_response [other]: Counts L2 cache hardware prefetch RFOs (written to the L2 cache only) that have any type of response. (event=0xb7,period=100003,umask=1,offcore_rsp=0x10020)
ocr.hwpf_l2_rfo.dram [other]: Counts L2 cache hardware prefetch RFOs (written to the L2 cache only) that were supplied by DRAM. (event=0xb7,period=100003,umask=1,offcore_rsp=0x184000020)
ocr.hwpf_l2_rfo.local_dram [other]: Counts L2 cache hardware prefetch RFOs (written to the L2 cache only) that were supplied by DRAM. (event=0xb7,period=100003,umask=1,offcore_rsp=0x184000020)
ocr.hwpf_l2_rfo.outstanding [other]: Counts L2 cache hardware prefetch RFOs (written to the L2 cache only) that have an outstanding request; returns the number of cycles until the response is received (i.e. XQ to XQ latency). (event=0xb7,period=100003,umask=1,offcore_rsp=0x8000000000000020)
ocr.l1wb_m.any_response [other]: Counts modified writebacks from L1 cache that miss the L2 cache that have any type of response. (event=0xb7,period=100003,umask=1,offcore_rsp=0x1000000010000)
ocr.l2wb_m.any_response [other]: Counts modified writebacks from L2 cache that miss the L3 cache that have any type of response. (event=0xb7,period=100003,umask=1,offcore_rsp=0x2000000010000)
ocr.other.any_response [other]: Counts miscellaneous requests, such as I/O accesses, that have any type of response. (event=0xb7,period=100003,umask=1,offcore_rsp=0x18000)
ocr.partial_streaming_wr.any_response [other]: Counts streaming stores which modify only part of a 64 byte cacheline that have any type of response. (event=0xb7,period=100003,umask=1,offcore_rsp=0x400000010000)
ocr.prefetches.any_response [other]: Counts all hardware and software prefetches that have any type of response. (event=0xb7,period=100003,umask=1,offcore_rsp=0x10470)
ocr.reads_to_core.any_response [other]: Counts all data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that have any type of response. (event=0xb7,period=100003,umask=1,offcore_rsp=0x10477)
ocr.reads_to_core.dram [other]: Counts all data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by DRAM. (event=0xb7,period=100003,umask=1,offcore_rsp=0x184000477)
ocr.reads_to_core.local_dram [other]: Counts all data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by DRAM. (event=0xb7,period=100003,umask=1,offcore_rsp=0x184000477)
ocr.reads_to_core.outstanding [other]: Counts all data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that have an outstanding request; returns the number of cycles until the response is received (i.e. XQ to XQ latency). (event=0xb7,period=100003,umask=1,offcore_rsp=0x8000000000000477)
ocr.uc_rd.any_response [other]: Counts uncached memory reads that have any type of response. (event=0xb7,period=100003,umask=1,offcore_rsp=0x100000010000)
ocr.uc_rd.dram [other]: Counts uncached memory reads that were supplied by DRAM. (event=0xb7,period=100003,umask=1,offcore_rsp=0x100184000000)
ocr.uc_rd.local_dram [other]: Counts uncached memory reads that were supplied by DRAM. (event=0xb7,period=100003,umask=1,offcore_rsp=0x100184000000)
ocr.uc_rd.outstanding [other]: Counts uncached memory reads that have an outstanding request; returns the number of cycles until the response is received (i.e. XQ to XQ latency). (event=0xb7,period=100003,umask=1,offcore_rsp=0x8000100000000000)
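Because each *.outstanding event accumulates, per cycle, the number of requests in flight (XQ to XQ), dividing it by the matching *.any_response request count gives an estimate of average response latency in cycles, the usual occupancy-over-throughput derivation. A minimal sketch with invented sample counts:

```python
# Sketch: average offcore response latency from occupancy / request-count events.
# Both sample values are invented for illustration.
outstanding_cycles = 1_250_000  # e.g. ocr.reads_to_core.outstanding
responses = 25_000              # e.g. ocr.reads_to_core.any_response
avg_latency_cycles = outstanding_cycles / responses
print(f"~{avg_latency_cycles:.1f} core cycles per offcore read")  # ~50.0
```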
ocr.uc_wr.any_response [other]: Counts uncached memory writes that have any type of response. (event=0xb7,period=100003,umask=1,offcore_rsp=0x200000010000)
br_inst_retired.call [pipeline]: Counts the number of near CALL branch instructions retired (Precise event). (event=0xc4,period=200003,umask=0xf9)
br_inst_retired.ind_call [pipeline]: Counts the number of near indirect CALL branch instructions retired (Precise event). (event=0xc4,period=200003,umask=0xfb)
br_inst_retired.jcc [pipeline]: Counts the number of JCC (Jump on Conditional Code) branch instructions retired; includes both taken and not taken branches (Precise event). (event=0xc4,period=200003,umask=0x7e)
br_inst_retired.non_return_ind [pipeline]: Counts the number of near indirect JMP and near indirect CALL branch instructions retired (Precise event). (event=0xc4,period=200003,umask=0xeb)
br_inst_retired.return [pipeline]: Counts the number of near RET branch instructions retired (Precise event). (event=0xc4,period=200003,umask=0xf7)
br_inst_retired.taken_jcc [pipeline]: Counts the number of taken JCC (Jump on Conditional Code) branch instructions retired (Precise event). (event=0xc4,period=200003,umask=0xfe)
br_misp_retired.ind_call [pipeline]: Counts the number of mispredicted near indirect CALL branch instructions retired (Precise event). (event=0xc5,period=200003,umask=0xfb)
br_misp_retired.jcc [pipeline]: Counts the number of mispredicted JCC (Jump on Conditional Code) branch instructions retired (Precise event). (event=0xc5,period=200003,umask=0x7e)
br_misp_retired.non_return_ind [pipeline]: Counts the number of mispredicted near indirect JMP and near indirect CALL branch instructions retired (Precise event). (event=0xc5,period=200003,umask=0xeb)
br_misp_retired.taken_jcc [pipeline]: Counts the number of mispredicted taken JCC (Jump on Conditional Code) branch instructions retired (Precise event). (event=0xc5,period=200003,umask=0xfe)
btclear.any [pipeline]: Counts the total number of BTCLEARS. (event=0xe8,period=200003) BTCLEARS occur when the Branch Target Buffer (BTB) predicts a taken branch.
cpu_clk_unhalted.ref [pipeline]: Counts the number of unhalted reference clock cycles at TSC frequency. (event=0x0,umask=0x03,period=2000003) Counts the number of reference cycles that the core is not in a halt state; the core enters the halt state when it is running the HLT instruction. This event is not affected by core frequency changes and increments at a fixed frequency that is also used for the Time Stamp Counter (TSC). Uses fixed counter 2.
cycles_div_busy.any [pipeline]: This event is deprecated. (event=0xcd,period=2000003)
cycles_div_busy.idiv [pipeline]: Counts the number of cycles the integer divider is busy. (event=0xcd,period=200003,umask=1) Does not imply a stall waiting for the divider.
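Since the br_misp_retired.* umasks mirror the br_inst_retired.* umasks above, a misprediction rate per branch class is simply the ratio of the two counts. A small sketch with invented sample values:

```python
# Sketch: per-class branch misprediction rate from the retired/mispredicted pair.
# Counts are invented sample numbers, not measurements.
br_inst_retired_jcc = 4_000_000  # br_inst_retired.jcc  (event=0xc4, umask=0x7e)
br_misp_retired_jcc = 120_000    # br_misp_retired.jcc  (event=0xc5, umask=0x7e)
print(f"JCC mispredict rate: {br_misp_retired_jcc / br_inst_retired_jcc:.2%}")  # 3.00%
```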
ld_blocks.4k_alias [pipeline]: Counts the number of retired loads that are blocked because they initially appear to be store-forward blocked, but are subsequently shown not to be blocked based on a 4K alias check (Precise event). (event=3,period=1000003,umask=4)
ld_blocks.all [pipeline]: Counts the number of retired loads that are blocked for any of the following reasons: DTLB miss, address alias, store forward or data unknown (includes memory disambiguation blocks and ESP consuming load blocks) (Precise event). (event=3,period=1000003,umask=0x10)
ld_blocks.store_forward [pipeline]: Counts the number of retired loads that are blocked because their address partially overlapped with an older store (Precise event). (event=3,period=1000003,umask=2)
machine_clears.any [pipeline]: Counts the total number of machine clears for any reason including, but not limited to, memory ordering, memory disambiguation, SMC, and FP assist. (event=0xc3,period=20003)
topdown_bad_speculation.all [pipeline]: Counts the total number of issue slots that were not consumed by the backend because allocation is stalled due to a mispredicted jump or a machine clear. (event=0x73,period=1000003,umask=6) Only issue slots wasted due to fast nukes such as memory ordering nukes are counted; other nukes are not accounted for. Counts all issue slots blocked during this recovery window, including relevant microcode flows and while uops are not yet available in the instruction queue (IQ), even if an FE_bound event occurs during this period. Also includes the issue slots that were consumed by the backend but were thrown away because they were younger than the mispredict or machine clear.
topdown_bad_speculation.machine_clears [pipeline]: Counts the total number of issue slots that were not consumed by the backend because allocation is stalled due to a machine clear (nuke) of any kind, including memory ordering and memory disambiguation. (event=0x73,period=1000003,umask=2)
topdown_bad_speculation.monuke [pipeline]: This event is deprecated. Refer to new event TOPDOWN_BAD_SPECULATION.FASTNUKE. (event=0x73,period=1000003,umask=2)
topdown_be_bound.store_buffer [pipeline]: This event is deprecated. (event=0x74,period=1000003,umask=4)
dtlb_load_misses.pde_cache_miss [virtual memory]: Counts the number of page walks due to loads that miss the PDE (Page Directory Entry) cache. (event=8,period=200003,umask=0x80)
dtlb_load_misses.stlb_hit [virtual memory]: Counts the number of first level TLB misses but second level hits due to a demand load that did not start a page walk. Accounts for all page sizes. Results in a DTLB write from the STLB. (event=8,period=200003,umask=0x20)
dtlb_load_misses.walk_completed_1g [virtual memory]: Counts the number of page walks completed due to load DTLB misses to a 1G page. (event=8,period=200003,umask=8) Counts the number of page walks completed due to loads (including SW prefetches) whose address translations missed in all Translation Lookaside Buffer (TLB) levels and were mapped to 1GB pages. Includes page walks that page fault.
dtlb_load_misses.walk_completed_2m_4m [virtual memory]: Counts the number of page walks completed due to load DTLB misses to a 2M or 4M page. (event=8,period=200003,umask=4) Counts the number of page walks completed due to loads (including SW prefetches) whose address translations missed in all Translation Lookaside Buffer (TLB) levels and were mapped to 2M or 4M pages. Includes page walks that page fault.
dtlb_load_misses.walk_completed_4k [virtual memory]: Counts the number of page walks completed due to load DTLB misses to a 4K page. (event=8,period=200003,umask=2) Counts the number of page walks completed due to loads (including SW prefetches) whose address translations missed in all Translation Lookaside Buffer (TLB) levels and were mapped to 4K pages. Includes page walks that page fault.
dtlb_load_misses.walk_pending [virtual memory]: Counts the number of page walks outstanding in the page miss handler (PMH) for demand loads every cycle. (event=8,period=200003,umask=0x10) A page walk is outstanding from start till the PMH becomes idle again (ready to serve the next walk). Includes EPT-walk intervals.
dtlb_store_misses.pde_cache_miss [virtual memory]: Counts the number of page walks due to stores that miss the PDE (Page Directory Entry) cache. (event=0x49,period=2000003,umask=0x80)
dtlb_store_misses.stlb_hit [virtual memory]: Counts the number of first level TLB misses but second level hits due to stores that did not start a page walk. Accounts for all page sizes. Results in a DTLB write from the STLB. (event=0x49,period=2000003,umask=0x20)
dtlb_store_misses.walk_completed [virtual memory]: Counts the number of page walks completed due to store DTLB misses to any page size. (event=0x49,period=200003,umask=0xe) Counts the number of page walks completed due to stores whose address translations missed in all Translation Lookaside Buffer (TLB) levels and were mapped to any page size. Includes page walks that page fault.
dtlb_store_misses.walk_completed_1g [virtual memory]: Counts the number of page walks completed due to store DTLB misses to a 1G page. (event=0x49,period=200003,umask=8) Same TLB-miss qualification as above, for 1G pages. Includes page walks that page fault.
dtlb_store_misses.walk_completed_2m_4m [virtual memory]: Counts the number of page walks completed due to store DTLB misses to a 2M or 4M page. (event=0x49,period=2000003,umask=4) Same TLB-miss qualification as above, for 2M or 4M pages. Includes page walks that page fault.
dtlb_store_misses.walk_completed_4k [virtual memory]: Counts the number of page walks completed due to store DTLB misses to a 4K page. (event=0x49,period=2000003,umask=2) Same TLB-miss qualification as above, for 4K pages. Includes page walks that page fault.
dtlb_store_misses.walk_pending [virtual memory]: Counts the number of page walks outstanding in the page miss handler (PMH) for stores every cycle. (event=0x49,period=200003,umask=0x10) A page walk is outstanding from start till the PMH becomes idle again (ready to serve the next walk). Includes EPT-walk intervals.
ept.epde_hit [virtual memory]: Counts the number of Extended Page Directory Entry hits. (event=0x4f,period=2000003,umask=1) The Extended Page Directory cache is used by Virtual Machine operating systems while the guest operating systems use the standard TLB caches.
ept.epde_miss [virtual memory]: Counts the number of Extended Page Directory Entry misses. (event=0x4f,period=2000003,umask=2) Same EPT-cache note as above.
ept.epdpe_hit [virtual memory]: Counts the number of Extended Page Directory Pointer Entry hits. (event=0x4f,period=2000003,umask=4) Same EPT-cache note as above.
ept.epdpe_miss [virtual memory]: Counts the number of Extended Page Directory Pointer Entry misses. (event=0x4f,period=2000003,umask=8) Same EPT-cache note as above.
ept.walk_pending [virtual memory]: Counts the number of page walks outstanding for an Extended Page table walk, including GTLB hits, per cycle. (event=0x4f,period=200003,umask=0x10) Same EPT-cache note as above.
itlb.fills [virtual memory]: Counts the number of times there was an ITLB miss and a new translation was filled into the ITLB. (event=0x81,period=200003,umask=4) Counts the number of times the machine was unable to find a translation in the Instruction Translation Lookaside Buffer (ITLB) and a new translation was filled into the ITLB. The event is speculative in nature, but will not count translations (page walks) that are begun and not finished, or translations that are finished but not filled into the ITLB.
itlb_misses.stlb_hit [virtual memory]: Counts the number of first level TLB misses but second level hits due to an instruction fetch that did not start a page walk. Accounts for all page sizes. Results in an ITLB write from the STLB. (event=0x85,period=2000003,umask=0x20)
itlb_misses.walk_completed_1g [virtual memory]: Counts the number of page walks completed due to instruction fetch misses to a 1G page. (event=0x85,period=200003,umask=8) Counts the number of page walks completed due to instruction fetches whose address translations missed in all Translation Lookaside Buffer (TLB) levels and were mapped to 1G pages. Includes page walks that page fault.
itlb_misses.walk_completed_2m_4m [virtual memory]: Counts the number of page walks completed due to instruction fetch misses to a 2M or 4M page. (event=0x85,period=2000003,umask=4) Same TLB-miss qualification as above, for 2M or 4M pages. Includes page walks that page fault.
itlb_misses.walk_completed_4k [virtual memory]: Counts the number of page walks completed due to instruction fetch misses to a 4K page. (event=0x85,period=2000003,umask=2) Same TLB-miss qualification as above, for 4K pages. Includes page walks that page fault.
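Since each *.walk_pending event adds up, per cycle, the number of page walks in flight in the PMH, dividing it by the corresponding *.walk_completed* total approximates average walk duration, and dividing by elapsed cycles gives average walker occupancy. A sketch with invented numbers:

```python
# Sketch: derived page-walk metrics from PMH occupancy/completion counters.
# All counts are invented for illustration.
walk_pending = 900_000    # e.g. dtlb_load_misses.walk_pending (occupancy, per cycle)
walks_completed = 30_000  # e.g. sum of dtlb_load_misses.walk_completed_* counts
cycles = 50_000_000       # elapsed core cycles over the same interval
print(f"avg walk duration: {walk_pending / walks_completed:.1f} cycles")  # 30.0
print(f"avg walks in flight: {walk_pending / cycles:.4f}")                # 0.0180
```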
itlb_misses.walk_pending [virtual memory]: Counts the number of page walks outstanding in the page miss handler (PMH) for instruction fetches every cycle. (event=0x85,period=200003,umask=0x10) A page walk is outstanding from start till the PMH becomes idle again (ready to serve the next walk).
ld_blocks.dtlb_miss [virtual memory]: Counts the number of retired loads that are blocked due to a first level TLB miss (Precise event). (event=3,period=1000003,umask=8)
mem_uops_retired.dtlb_miss [virtual memory]: Counts the number of memory uops retired that missed in the second level TLB. Supports address when precise (Precise event). (event=0xd0,period=200003,umask=0x13)
mem_uops_retired.dtlb_miss_loads [virtual memory]: Counts the number of load uops retired that miss in the second level TLB. Supports address when precise (Precise event). (event=0xd0,period=200003,umask=0x11)
mem_uops_retired.dtlb_miss_stores [virtual memory]: Counts the number of store uops retired that miss in the second level TLB. Supports address when precise (Precise event). (event=0xd0,period=200003,umask=0x12)
l2_lines_out.non_silent [cache]: Modified cache lines that are evicted by L2 cache when triggered by an L2 cache fill. (event=0x26,period=200003,umask=2) Counts the number of lines that are evicted by L2 cache when triggered by an L2 cache fill; those lines are in Modified state, and modified lines are written back to L3.
l2_lines_out.silent [cache]: Non-modified cache lines that are silently dropped by L2 cache when triggered by an L2 cache fill. (event=0x26,period=200003,umask=1) Counts the number of lines that are silently dropped by L2 cache when triggered by an L2 cache fill; these lines are typically in Shared or Exclusive state. A non-threaded event.
l2_rqsts.all_demand_references [cache]: Demand requests to L2 cache. (event=0x24,period=200003,umask=0xe7)
mem_load_l3_miss_retired.remote_dram [cache]: MEM_LOAD_L3_MISS_RETIRED.REMOTE_DRAM. Supports address when precise (Precise event). (event=0xd3,period=1000003,umask=2)
mem_load_l3_miss_retired.remote_fwd [cache]: Retired load instructions whose data source was forwarded from a remote cache. Supports address when precise (Precise event). (event=0xd3,period=100007,umask=8)
mem_load_l3_miss_retired.remote_hitm [cache]: MEM_LOAD_L3_MISS_RETIRED.REMOTE_HITM. Supports address when precise (Precise event). (event=0xd3,period=1000003,umask=4)
ocr.demand_code_rd.l3_hit [cache]: Counts demand instruction fetches and L1 instruction cache prefetches that hit in the L3 or were snooped from another core's caches on the same socket. (event=0x2a,period=100003,umask=1,offcore_rsp=0x3F803C0004)
ocr.demand_code_rd.l3_hit.snoop_hitm [cache]: Counts demand instruction fetches and L1 instruction cache prefetches that resulted in a snoop that hit a modified line in another core's caches, which forwarded the data. (event=0x2a,period=100003,umask=1,offcore_rsp=0x10003C0004)
ocr.demand_code_rd.snc_cache.hitm [cache]: Counts demand instruction fetches and L1 instruction cache prefetches that hit a modified line in a distant L3 Cache or were snooped from a distant core's L1/L2 caches on this socket when the system is in SNC (sub-NUMA cluster) mode. (event=0x2a,period=100003,umask=1,offcore_rsp=0x1008000004)
ocr.demand_code_rd.snc_cache.hit_with_fwd [cache]: Counts demand instruction fetches and L1 instruction cache prefetches that either hit a non-modified line in a distant L3 Cache or were snooped from a distant core's L1/L2 caches on this socket when the system is in SNC (sub-NUMA cluster) mode. (event=0x2a,period=100003,umask=1,offcore_rsp=0x808000004)
ocr.demand_data_rd.l3_hit [cache]: Counts demand data reads that hit in the L3 or were snooped from another core's caches on the same socket. (event=0x2a,period=100003,umask=1,offcore_rsp=0x3F803C0001)
ocr.demand_data_rd.l3_hit.snoop_hitm [cache]: Counts demand data reads that resulted in a snoop that hit a modified line in another core's caches, which forwarded the data. (event=0x2a,period=100003,umask=1,offcore_rsp=0x10003C0001)
ocr.demand_data_rd.l3_hit.snoop_hit_no_fwd [cache]: Counts demand data reads that resulted in a snoop that hit in another core, which did not forward the data. (event=0x2a,period=100003,umask=1,offcore_rsp=0x4003C0001)
ocr.demand_data_rd.l3_hit.snoop_hit_with_fwd [cache]: Counts demand data reads that resulted in a snoop hit in another core's caches, which forwarded the unmodified data to the requesting core. (event=0x2a,period=100003,umask=1,offcore_rsp=0x8003C0001)
ocr.demand_data_rd.remote_cache.snoop_hitm [cache]: Counts demand data reads that were supplied by a cache on a remote socket where a snoop hit a modified line in another core's caches, which forwarded the data. (event=0x2a,period=100003,umask=1,offcore_rsp=0x1030000001)
ocr.demand_data_rd.remote_cache.snoop_hit_with_fwd [cache]: Counts demand data reads that were supplied by a cache on a remote socket where a snoop hit in another core's caches, which forwarded the unmodified data to the requesting core. (event=0x2a,period=100003,umask=1,offcore_rsp=0x830000001)
ocr.demand_data_rd.snc_cache.hitm [cache]: Counts demand data reads that hit a modified line in a distant L3 Cache or were snooped from a distant core's L1/L2 caches on this socket when the system is in SNC (sub-NUMA cluster) mode. (event=0x2a,period=100003,umask=1,offcore_rsp=0x1008000001)
ocr.demand_data_rd.snc_cache.hit_with_fwd [cache]: Counts demand data reads that either hit a non-modified line in a distant L3 Cache or were snooped from a distant core's L1/L2 caches on this socket when the system is in SNC (sub-NUMA cluster) mode. (event=0x2a,period=100003,umask=1,offcore_rsp=0x808000001)
ocr.demand_rfo.l3_hit [cache]: Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that hit in the L3 or were snooped from another core's caches on the same socket. (event=0x2a,period=100003,umask=1,offcore_rsp=0x3F803C0002)
ocr.demand_rfo.l3_hit.snoop_hitm [cache]: Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that resulted in a snoop that hit a modified line in another core's caches, which forwarded the data. (event=0x2a,period=100003,umask=1,offcore_rsp=0x10003C0002)
ocr.demand_rfo.snc_cache.hitm [cache]: Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that hit a modified line in a distant L3 Cache or were snooped from a distant core's L1/L2 caches on this socket when the system is in SNC (sub-NUMA cluster) mode. (event=0x2a,period=100003,umask=1,offcore_rsp=0x1008000002)
ocr.demand_rfo.snc_cache.hit_with_fwd [cache]: Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that either hit a non-modified line in a distant L3 Cache or were snooped from a distant core's L1/L2 caches on this socket when the system is in SNC (sub-NUMA cluster) mode. (event=0x2a,period=100003,umask=1,offcore_rsp=0x808000002)
ocr.hwpf_l3.l3_hit [cache]: Counts hardware prefetches to the L3 only that hit in the L3 or were snooped from another core's caches on the same socket. (event=0x2a,period=100003,umask=1,offcore_rsp=0x80082380)
ocr.reads_to_core.l3_hit [cache]: Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that hit in the L3 or were snooped from another core's caches on the same socket. (event=0x2a,period=100003,umask=1,offcore_rsp=0x3F003C4477)
ocr.reads_to_core.l3_hit.snoop_hitm [cache]: Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that resulted in a snoop that hit a modified line in another core's caches, which forwarded the data. (event=0x2a,period=100003,umask=1,offcore_rsp=0x10003C4477)
ocr.reads_to_core.l3_hit.snoop_hit_no_fwd [cache]: Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that resulted in a snoop that hit in another core, which did not forward the data. (event=0x2a,period=100003,umask=1,offcore_rsp=0x4003C4477)
ocr.reads_to_core.l3_hit.snoop_hit_with_fwd [cache]: Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that resulted in a snoop hit in another core's caches, which forwarded the unmodified data to the requesting core. (event=0x2a,period=100003,umask=1,offcore_rsp=0x8003C4477)
ocr.reads_to_core.remote_cache.snoop_fwd [cache]: Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by a cache on a remote socket where a snoop was sent and data was returned (Modified or Not Modified). (event=0x2a,period=100003,umask=1,offcore_rsp=0x1830004477)
ocr.reads_to_core.remote_cache.snoop_hitm [cache]: Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by a cache on a remote socket where a snoop hit a modified line in another core's caches, which forwarded the data. (event=0x2a,period=100003,umask=1,offcore_rsp=0x1030004477)
ocr.reads_to_core.remote_cache.snoop_hit_with_fwd [cache]: Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by a cache on a remote socket where a snoop hit in another core's caches, which forwarded the unmodified data to the requesting core. (event=0x2a,period=100003,umask=1,offcore_rsp=0x830004477)
ocr.reads_to_core.snc_cache.hitm [cache]: Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that hit a modified line in a distant L3 Cache or were snooped from a distant core's L1/L2 caches on this socket when the system is in SNC (sub-NUMA cluster) mode. (event=0x2a,period=100003,umask=1,offcore_rsp=0x1008004477)
ocr.reads_to_core.snc_cache.hit_with_fwd [cache]: Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that either hit a non-modified line in a distant L3 Cache or were snooped from a distant core's L1/L2 caches on this socket when the system is in SNC (sub-NUMA cluster) mode. (event=0x2a,period=100003,umask=1,offcore_rsp=0x808004477)
ocr.rfo_to_core.l3_hit_m [cache]: Counts demand reads for ownership (RFO), hardware prefetch RFOs (which bring data to L2), and software prefetches for exclusive ownership (PREFETCHW) that hit a (M)odified cacheline in the L3 or snoop filter. (event=0x2a,period=100003,umask=1,offcore_rsp=0x1F80040022)
ocr.streaming_wr.l3_hit [cache]: Counts streaming stores that hit in the L3 or were snooped from another core's caches on the same socket. (event=0x2a,period=100003,umask=1,offcore_rsp=0x80080800)
offcore_requests_outstanding.all_data_rd [cache]: This event is deprecated. Refer to new event OFFCORE_REQUESTS_OUTSTANDING.DATA_RD. (event=0x20,period=1000003,umask=8)
offcore_requests_outstanding.cycles_with_data_rd [cache]: OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DATA_RD. (event=0x20,cmask=1,period=1000003,umask=8)
offcore_requests_outstanding.cycles_with_demand_rfo [cache]: OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_RFO. (event=0x20,cmask=1,period=1000003,umask=4)
offcore_requests_outstanding.data_rd [cache]: OFFCORE_REQUESTS_OUTSTANDING.DATA_RD. (event=0x20,period=1000003,umask=8)
fp_arith_inst_retired.512b_packed_double [floating point]: Counts number of SSE/AVX computational 512-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 8 computation operations, one for each element. Applies to SSE* and AVX* packed double precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT14 RCP14 FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. (event=0xc7,period=100003,umask=0x40) The DAZ and FTZ flags in the MXCSR register need to be set when using these events.
fp_arith_inst_retired.512b_packed_single [floating point]: Counts number of SSE/AVX computational 512-bit packed single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 16 computation operations, one for each element. Applies to SSE* and AVX* packed single precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT14 RCP14 FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. (event=0xc7,period=100003,umask=0x80) The DAZ and FTZ flags in the MXCSR register need to be set when using these events.
fp_arith_inst_retired.8_flops [floating point]: Number of SSE/AVX computational 256-bit packed single precision and 512-bit packed double precision FP instructions retired; some instructions will count twice as noted below. Each count represents 8 computation operations, one for each element. Applies to SSE* and AVX* packed single precision and double precision FP instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT RSQRT RSQRT14 RCP RCP14 DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. (event=0xc7,period=100003,umask=0x60) The DAZ and FTZ flags in the MXCSR register need to be set when using these events.
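The per-count operation weights spelled out above (8 ops per 512-bit packed-double count, 16 per 512-bit packed-single count) make a FLOP rate a weighted sum over these counters divided by elapsed time. A sketch with invented counts:

```python
# Sketch: FLOPs from the weighted fp_arith_inst_retired counters described above.
# Each 512b packed-double count = 8 ops, each 512b packed-single count = 16 ops
# (FMA-type instructions already count twice, per the event descriptions).
pd_512 = 10_000_000  # fp_arith_inst_retired.512b_packed_double (invented)
ps_512 = 5_000_000   # fp_arith_inst_retired.512b_packed_single (invented)
seconds = 0.25       # invented measurement interval
flops = pd_512 * 8 + ps_512 * 16
print(f"{flops / seconds / 1e9:.2f} GFLOPS")  # 0.64 GFLOPS
```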
fp_arith_inst_retired2.128b_packed_half [floating point]: FP_ARITH_INST_RETIRED2.128B_PACKED_HALF. (event=0xcf,period=100003,umask=4)
fp_arith_inst_retired2.256b_packed_half [floating point]: FP_ARITH_INST_RETIRED2.256B_PACKED_HALF. (event=0xcf,period=100003,umask=8)
fp_arith_inst_retired2.512b_packed_half [floating point]: FP_ARITH_INST_RETIRED2.512B_PACKED_HALF. (event=0xcf,period=100003,umask=0x10)
fp_arith_inst_retired2.complex_scalar_half [floating point]: FP_ARITH_INST_RETIRED2.COMPLEX_SCALAR_HALF. (event=0xcf,period=100003,umask=2)
fp_arith_inst_retired2.scalar [floating point]: Number of all Scalar Half-Precision FP arithmetic instructions(1) retired - regular and complex. (event=0xcf,period=100003,umask=3)
fp_arith_inst_retired2.scalar_half [floating point]: FP_ARITH_INST_RETIRED2.SCALAR_HALF. (event=0xcf,period=100003,umask=1)
fp_arith_inst_retired2.vector [floating point]: Number of all Vector (also called packed) Half-Precision FP arithmetic instructions(1) retired. (event=0xcf,period=100003,umask=0x1c)
ocr.demand_code_rd.l3_miss [memory]: Counts demand instruction fetches and L1 instruction cache prefetches that were not supplied by the local socket's L1, L2, or L3 caches. (event=0x2a,period=100003,umask=1,offcore_rsp=0x3FBFC00004)
ocr.demand_data_rd.l3_miss [memory]: Counts demand data reads that were not supplied by the local socket's L1, L2, or L3 caches. (event=0x2a,period=100003,umask=1,offcore_rsp=0x3FBFC00001)
ocr.demand_rfo.l3_miss [memory]: Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that were not supplied by the local socket's L1, L2, or L3 caches. (event=0x2a,period=100003,umask=1,offcore_rsp=0x3F3FC00002)
ocr.hwpf_l3.l3_miss [memory]: Counts hardware prefetches to the L3 only that missed the local socket's L1, L2, and L3 caches. (event=0x2a,period=100003,umask=1,offcore_rsp=0x94002380)
ocr.hwpf_l3.l3_miss_local [memory]: Counts hardware prefetches to the L3 only that were not supplied by the local socket's L1, L2, or L3 caches and the cacheline is homed locally. (event=0x2a,period=100003,umask=1,offcore_rsp=0x84002380)
ocr.reads_to_core.l3_miss [memory]: Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were not supplied by the local socket's L1, L2, or L3 caches. (event=0x2a,period=100003,umask=1,offcore_rsp=0x3F3FC04477)
ocr.reads_to_core.l3_miss_local [memory]: Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were not supplied by the local socket's L1, L2, or L3 caches and the cacheline is homed locally. (event=0x2a,period=100003,umask=1,offcore_rsp=0x3F04C04477)
ocr.reads_to_core.l3_miss_local_socket [memory]: Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that missed the L3 Cache and were supplied by the local socket (DRAM or PMM), whether or not in Sub NUMA Cluster (SNC) Mode. In SNC Mode counts PMM or DRAM accesses that are controlled by the close or distant SNC Cluster. It does not count misses to the L3 which go to Local CXL Type 2 Memory or Local Non DRAM. (event=0x2a,period=100003,umask=1,offcore_rsp=0x70CC04477)
ocr.streaming_wr.l3_miss [memory]: Counts streaming stores that missed the local socket's L1, L2, and L3 caches. (event=0x2a,period=100003,umask=1,offcore_rsp=0x94000800)
ocr.streaming_wr.l3_miss_local [memory]: Counts streaming stores that were not supplied by the local socket's L1, L2, or L3 caches and the cacheline is homed locally. (event=0x2a,period=100003,umask=1,offcore_rsp=0x84000800)
rtm_retired.aborted [memory]: Number of times an RTM execution aborted (Precise event). (event=0xc9,period=100003,umask=4) Counts the number of times RTM abort was triggered.
rtm_retired.aborted_events [memory]: Number of times an RTM execution aborted due to none of the previous 4 categories (e.g. interrupt). (event=0xc9,period=100003,umask=0x80)
rtm_retired.aborted_mem [memory]: Number of times an RTM execution aborted due to various memory events (e.g. read/write capacity and conflicts). (event=0xc9,period=100003,umask=8)
rtm_retired.aborted_memtype [memory]: Number of times an RTM execution aborted due to incompatible memory type. (event=0xc9,period=100003,umask=0x40)
rtm_retired.aborted_unfriendly [memory]: Number of times an RTM execution aborted due to HLE-unfriendly instructions. (event=0xc9,period=100003,umask=0x20)
rtm_retired.commit [memory]: Number of times an RTM execution successfully committed. (event=0xc9,period=100003,umask=2) Counts the number of times RTM commit succeeded.
rtm_retired.start [memory]: Number of times an RTM execution started. (event=0xc9,period=100003,umask=1) Counts the number of times we entered an RTM region. Does not count nested transactions.
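RTM_RETIRED.START, .COMMIT, and .ABORTED together describe transaction outcomes, so commit and abort ratios fall straight out of their counts. A sketch with invented values:

```python
# Sketch: RTM transaction outcome ratios from the rtm_retired.* counters.
# Invented sample counts; nested transactions are not counted by .start.
rtm_start = 1_000_000  # rtm_retired.start
rtm_commit = 940_000   # rtm_retired.commit
rtm_abort = 60_000     # rtm_retired.aborted
print(f"commit ratio: {rtm_commit / rtm_start:.1%}")  # 94.0%
print(f"abort ratio:  {rtm_abort / rtm_start:.1%}")   # 6.0%
```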
tx_mem.abort_capacity_read [memory]: Speculatively counts the number of Transactional Synchronization Extensions (TSX) aborts due to a data capacity limitation for transactional reads. (event=0x54,period=100003,umask=0x80)
tx_mem.abort_capacity_write [memory]: Speculatively counts the number of Transactional Synchronization Extensions (TSX) aborts due to a data capacity limitation for transactional writes. (event=0x54,period=100003,umask=2)
tx_mem.abort_conflict [memory]: Number of times a transactional abort was signaled due to a data conflict on a transactionally accessed address. (event=0x54,period=100003,umask=1) Counts the number of times a TSX line had a cache conflict.
exe.amx_busy [other]: Counts the cycles where the AMX (Advanced Matrix Extensions) unit is busy performing an operation. (event=0xb7,period=2000003,umask=2)
ocr.demand_code_rd.any_response [other]: Counts demand instruction fetches and L1 instruction cache prefetches that have any type of response. (event=0x2a,period=100003,umask=1,offcore_rsp=0x10004)
ocr.demand_code_rd.dram [other]: Counts demand instruction fetches and L1 instruction cache prefetches that were supplied by DRAM. (event=0x2a,period=100003,umask=1,offcore_rsp=0x73C000004)
ocr.demand_code_rd.local_dram [other]: Counts demand instruction fetches and L1 instruction cache prefetches that were supplied by DRAM attached to this socket, unless in Sub NUMA Cluster (SNC) Mode. In SNC Mode counts only those DRAM accesses that are controlled by the close SNC Cluster. (event=0x2a,period=100003,umask=1,offcore_rsp=0x104000004)
ocr.demand_code_rd.snc_dram [other]: Counts demand instruction fetches and L1 instruction cache prefetches that were supplied by DRAM on a distant memory controller of this socket when the system is in SNC (sub-NUMA cluster) mode. (event=0x2a,period=100003,umask=1,offcore_rsp=0x708000004)
ocr.demand_data_rd.dram [other]: Counts demand data reads that were supplied by DRAM. (event=0x2a,period=100003,umask=1,offcore_rsp=0x73C000001)
ocr.demand_data_rd.local_dram [other]: Counts demand data reads that were supplied by DRAM attached to this socket, unless in Sub NUMA Cluster (SNC) Mode. In SNC Mode counts only those DRAM accesses that are controlled by the close SNC Cluster. (event=0x2a,period=100003,umask=1,offcore_rsp=0x104000001)
ocr.demand_data_rd.remote_dram [other]: Counts demand data reads that were supplied by DRAM attached to another socket. (event=0x2a,period=100003,umask=1,offcore_rsp=0x730000001)
ocr.demand_data_rd.snc_dram [other]: Counts demand data reads that were supplied by DRAM on a distant memory controller of this socket when the system is in SNC (sub-NUMA cluster) mode. (event=0x2a,period=100003,umask=1,offcore_rsp=0x708000001)
ocr.demand_rfo.any_response [other]: Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that have any type of response. (event=0x2a,period=100003,umask=1,offcore_rsp=0x3F3FFC0002)
ocr.demand_rfo.dram [other]: Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that were supplied by DRAM. (event=0x2a,period=100003,umask=1,offcore_rsp=0x73C000002)
ocr.demand_rfo.local_dram [other]: Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that were supplied by DRAM attached to this socket, unless in Sub NUMA Cluster (SNC) Mode. In SNC Mode counts only those DRAM accesses that are controlled by the close SNC Cluster. (event=0x2a,period=100003,umask=1,offcore_rsp=0x104000002)
ocr.demand_rfo.snc_dram [other]: Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that were supplied by DRAM on a distant memory controller of this socket when the system is in SNC (sub-NUMA cluster) mode. (event=0x2a,period=100003,umask=1,offcore_rsp=0x708000002)
ocr.hwpf_l1d.any_response [other]: Counts data load hardware prefetch requests to the L1 data cache that have any type of response. (event=0x2a,period=100003,umask=1,offcore_rsp=0x10400)
ocr.hwpf_l2.any_response [other]: Counts hardware prefetches (which bring data to L2) that have any type of response. (event=0x2a,period=100003,umask=1,offcore_rsp=0x10070)
ocr.hwpf_l3.any_response [other]: Counts hardware prefetches to the L3 only that have any type of response. (event=0x2a,period=100003,umask=1,offcore_rsp=0x12380)
ocr.hwpf_l3.remote [other]: Counts hardware prefetches to the L3 only that were not supplied by the local socket's L1, L2, or L3 caches and the cacheline was homed in a remote socket. (event=0x2a,period=100003,umask=1,offcore_rsp=0x90002380)
ocr.modified_write.any_response [other]: Counts writebacks of modified cachelines and streaming stores that have any type of response. (event=0x2a,period=100003,umask=1,offcore_rsp=0x10808)
ocr.reads_to_core.any_response [other]: Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that have any type of response. (event=0x2a,period=100003,umask=1,offcore_rsp=0x3F3FFC4477)
ocr.reads_to_core.dram [other]: Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by DRAM. (event=0x2a,period=100003,umask=1,offcore_rsp=0x73C004477)
ocr.reads_to_core.local_dram [other]: Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by DRAM attached to this socket, unless in Sub NUMA Cluster (SNC) Mode. In SNC Mode counts only those DRAM accesses that are controlled by the close SNC Cluster. (event=0x2a,period=100003,umask=1,offcore_rsp=0x104004477)
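The LOCAL_DRAM / REMOTE_DRAM / SNC_DRAM split above makes it straightforward to express DRAM locality for demand loads as a ratio of the sub-events. A sketch with invented counts:

```python
# Sketch: DRAM locality for demand data reads from the ocr.demand_data_rd.* split.
# Invented sample counts for illustration.
dram_local = 180_000   # ocr.demand_data_rd.local_dram  (close SNC cluster / this socket)
dram_remote = 20_000   # ocr.demand_data_rd.remote_dram (DRAM attached to another socket)
total = dram_local + dram_remote
print(f"local DRAM share: {dram_local / total:.1%}")  # 90.0%
```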
ocr.reads_to_core.local_socket_dram [other]: Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by DRAM attached to this socket, whether or not in Sub NUMA Cluster (SNC) Mode. In SNC Mode counts DRAM accesses that are controlled by the close or distant SNC Cluster. (event=0x2a,period=100003,umask=1,offcore_rsp=0x70C004477)
ocr.reads_to_core.remote [other]: Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were not supplied by the local socket's L1, L2, or L3 caches and were supplied by a remote socket. (event=0x2a,period=100003,umask=1,offcore_rsp=0x3F33004477)
ocr.reads_to_core.remote_dram [other]: Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by DRAM attached to another socket. (event=0x2a,period=100003,umask=1,offcore_rsp=0x730004477)
ocr.reads_to_core.remote_memory [other]: Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by DRAM or PMM attached to another socket. (event=0x2a,period=100003,umask=1,offcore_rsp=0x733004477)
ocr.reads_to_core.snc_dram [other]: Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by DRAM on a distant memory controller of this socket when the system is in SNC (sub-NUMA cluster) mode. (event=0x2a,period=100003,umask=1,offcore_rsp=0x708004477)
ocr.write_estimate.memory [other]: Counts Demand RFOs, ItoMs, PREFETCHWs, Hardware RFO Prefetches to the L1/L2 and Streaming stores that likely resulted in a store to Memory (DRAM or PMM). (event=0x2a,period=100003,umask=1,offcore_rsp=0xFBFF80822)
int_misc.mba_stalls [pipeline]: INT_MISC.MBA_STALLS. (event=0xad,period=1000003,umask=0x20)
uops_executed.core [pipeline]: Number of uops executed on the core. (event=0xb1,period=2000003,umask=2) Counts the number of uops executed from any thread.
unc_cha_bypass_cha_imc.intermediate [uncore cache]: CHA to iMC Bypass : Intermediate bypass Taken. (event=0x57,umask=2) Counts the number of times when the CHA was able to bypass the HA pipe on the way to the iMC; a latency optimization for situations when there is light loading on the memory subsystem. Can be filtered by whether the bypass was taken. This umask filters for transactions that succeeded in taking the intermediate bypass.
unc_cha_bypass_cha_imc.not_taken [uncore cache]: CHA to iMC Bypass : Not Taken. (event=0x57,umask=4) Same bypass note as above. This umask filters for transactions that could not take the bypass and issued a read to memory; transactions that did not take the bypass but also did not issue a read to memory are not counted.
unc_cha_bypass_cha_imc.taken [uncore cache]: CHA to iMC Bypass : Taken. (event=0x57,umask=1) Same bypass note as above. This umask filters for transactions that succeeded in taking the full bypass.
unc_cha_clockticks [uncore cache]: CHA Clockticks. (event=1) Number of CHA clock cycles while the event is enabled.
unc_cha_core_snp.any_gtone [uncore cache]: Core Cross Snoops Issued : Any Cycle with Multiple Snoops. (event=0x33,umask=0xf2) Counts the number of transactions that trigger a configurable number of cross snoops. Cores are snooped if the transaction looks up the cache and determines that it is necessary based on the operation type and what CoreValid bits are set. For example, if 2 CV bits are set on a data read, the cores must have the data in S state so it is not necessary to snoop them; however, if only 1 CV bit is set, the core may have modified the data. If the transaction was an RFO, it would need to invalidate the lines. This event can be filtered based on who triggered the initial snoop(s).
unc_cha_core_snp.any_one [uncore cache]: Core Cross Snoops Issued : Any Single Snoop. (event=0x33,umask=0xf1) Same cross-snoop note as above.
unc_cha_core_snp.core_gtone [uncore cache]: Core Cross Snoops Issued : Multiple Core Requests. (event=0x33,umask=0x42) Same cross-snoop note as above.
unc_cha_core_snp.core_one [uncore cache]: Core Cross Snoops Issued : Single Core Requests. (event=0x33,umask=0x41) Same cross-snoop note as above.
unc_cha_core_snp.evict_gtone [uncore cache]: Core Cross Snoops Issued : Multiple Eviction. (event=0x33,umask=0x82) Same cross-snoop note as above.
unc_cha_core_snp.evict_one [uncore cache]: Core Cross Snoops Issued : Single Eviction. (event=0x33,umask=0x81) Same cross-snoop note as above.
unc_cha_core_snp.ext_gtone [uncore cache]: Core Cross Snoops Issued : Multiple External Snoops. (event=0x33,umask=0x22) Same cross-snoop note as above.
unc_cha_core_snp.ext_one [uncore cache]: Core Cross Snoops Issued : Single External Snoops. (event=0x33,umask=0x21) Same cross-snoop note as above.
unc_cha_core_snp.remote_gtone [uncore cache]: Core Cross Snoops Issued : Multiple Snoop Targets from Remote. (event=0x33,umask=0x12) Same cross-snoop note as above.
unc_cha_core_snp.remote_one [uncore cache]: Core Cross Snoops Issued : Single Snoop Target from Remote. (event=0x33,umask=0x11) Same cross-snoop note as the other unc_cha_core_snp events above.
unc_cha_direct_go.ha_suppress_drd [uncore cache]: Direct GO. (event=0x6e,umask=4)
unc_cha_direct_go.ha_suppress_no_d2c [uncore cache]: Direct GO. (event=0x6e,umask=2)
unc_cha_direct_go.ha_tor_dealloc [uncore cache]: Direct GO. (event=0x6e,umask=1)
unc_cha_direct_go_opc.extcmp [uncore cache]: Direct GO. (event=0x6d,umask=1)
unc_cha_direct_go_opc.fast_go [uncore cache]: Direct GO. (event=0x6d,umask=0x10)
unc_cha_direct_go_opc.fast_go_pull [uncore cache]: Direct GO. (event=0x6d,umask=0x20)
unc_cha_direct_go_opc.go [uncore cache]: Direct GO. (event=0x6d,umask=4)
unc_cha_direct_go_opc.go_pull [uncore cache]: Direct GO. (event=0x6d,umask=8)
unc_cha_direct_go_opc.idle_due_suppress [uncore cache]: Direct GO. (event=0x6d,umask=0x80)
unc_cha_direct_go_opc.nop [uncore cache]: Direct GO. (event=0x6d,umask=0x40)
unc_cha_direct_go_opc.pull [uncore cache]: Direct GO. (event=0x6d,umask=2)
unc_cha_egress_ordering.iv_snoopgo_dn [uncore cache]: Egress Blocking due to Ordering requirements : Down. (event=0xba,umask=4) Counts the number of cycles IV was blocked in the TGR Egress due to SNP/GO Ordering requirements.
unc_cha_egress_ordering.iv_snoopgo_up [uncore cache]: Egress Blocking due to Ordering requirements : Up. (event=0xba,umask=1) Counts the number of cycles IV was blocked in the TGR Egress due to SNP/GO Ordering requirements.
unc_cha_hitme_hit.shared_ownreq [uncore cache]: Counts Number of Hits in HitMe Cache : Shared hit and op is RdInvOwn, RdInv, Inv*. (event=0x5f,umask=4)
unc_cha_hitme_hit.wbmtoe [uncore cache]: Counts Number of Hits in HitMe Cache : op is WbMtoE. (event=0x5f,umask=8)
unc_cha_hitme_hit.wbmtoi_or_s [uncore cache]: Counts Number of Hits in HitMe Cache : op is WbMtoI, WbPushMtoI, WbFlush, or WbMtoS. (event=0x5f,umask=0x10)
unc_cha_hitme_lookup.read [uncore cache]: Counts Number of times HitMe Cache is accessed : op is RdCode, RdData, RdDataMigratory, RdCur, RdInvOwn, RdInv, Inv*. (event=0x5e,umask=1)
unc_cha_hitme_lookup.write [uncore cache]: Counts Number of times HitMe Cache is accessed : op is WbMtoE, WbMtoI, WbPushMtoI, WbFlush, or WbMtoS. (event=0x5e,umask=2)
unc_cha_hitme_miss.notshared_rdinvown [uncore cache]: Counts Number of Misses in HitMe Cache : No SF/LLC HitS/F and op is RdInvOwn. (event=0x60,umask=0x40)
unc_cha_hitme_miss.read_or_inv [uncore cache]: Counts Number of Misses in HitMe Cache : op is RdCode, RdData, RdDataMigratory, RdCur, RdInv, Inv*. (event=0x60,umask=0x80)
unc_cha_hitme_miss.shared_rdinvown [uncore cache]: Counts Number of Misses in HitMe Cache : SF/LLC HitS/F and op is RdInvOwn. (event=0x60,umask=0x20)
unc_cha_hitme_update.deallocate [uncore cache]: Counts the number of Allocate/Update to HitMe Cache : Deallocate HitME$ on Reads without RspFwdI*. (event=0x61,umask=0x10)
unc_cha_hitme_update.deallocate_rspfwdi_loc [uncore cache]: Counts the number of Allocate/Update to HitMe Cache : op is RspIFwd or RspIFwdWb for a local request. (event=0x61,umask=1) Received RspFwdI* for a local request, but converted HitME$ to SF entry.
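Since this table ships inside the perf Python extension, the same binding can open and read events programmatically. The sketch below mirrors the upstream tools/perf/python/twatch.py example (a software dummy event, so it runs without any of the PMU-specific encodings listed here); treat it as a sketch of the binding's evsel/evlist API under that assumption, not a recipe for the uncore events above:

```python
# Sketch: opening and draining one event with the perf python binding,
# modeled on tools/perf/python/twatch.py. Assumes the module exposes the
# same constants and methods as the upstream binding; needs sufficient
# privileges (perf_event_paranoid) to open events.
import perf

cpus = perf.cpu_map()        # all online CPUs
threads = perf.thread_map()  # the current thread
evsel = perf.evsel(type=perf.TYPE_SOFTWARE, config=perf.COUNT_SW_DUMMY,
                   task=1, comm=1, wakeup_events=1, watermark=1,
                   sample_id_all=1,
                   sample_type=perf.SAMPLE_CPU | perf.SAMPLE_TID)
evsel.open(cpus=cpus, threads=threads)
evlist = perf.evlist(cpus, threads)
evlist.add(evsel)
evlist.mmap()
evlist.poll(timeout=100)
for cpu in cpus:
    event = evlist.read_on_cpu(cpu)
    if event:
        print(event)
```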
cacheCounts the number of Allocate/Update to HitMe Cache : op is RspIFwd or RspIFwdWb for a local requestevent=0x61,umask=101Counts the number of Allocate/Update to HitMe Cache : op is RspIFwd or RspIFwdWb for a local request : Received RspFwdI* for a local request, but converted HitME$ to SF entryunc_cha_hitme_update.rdinvownuncore cacheCounts the number of Allocate/Update to HitMe Cache : Update HitMe Cache on RdInvOwn even if not RspFwdI*event=0x61,umask=801unc_cha_hitme_update.rspfwdi_remuncore cacheCounts the number of Allocate/Update to HitMe Cache : op is RspIFwd or RspIFwdWb for a remote requestevent=0x61,umask=201Counts the number of Allocate/Update to HitMe Cache : op is RspIFwd or RspIFwdWb for a remote request : Updated HitME$ on RspFwdI* or local HitM/E received for a remote requestunc_cha_hitme_update.shareduncore cacheCounts the number of Allocate/Update to HitMe Cache : Update HitMe Cache to SHARedevent=0x61,umask=401unc_cha_imc_reads_count.priorityuncore cacheHA to iMC Reads Issued : ISOCHevent=0x59,umask=201HA to iMC Reads Issued : ISOCH : Count of the number of reads issued to any of the memory controller channels.  This can be filtered by the priority of the readsunc_cha_imc_writes_count.full_priorityuncore cacheCHA to iMC Full Line Writes Issued : ISOCH Full Lineevent=0x5b,umask=401CHA to iMC Full Line Writes Issued : ISOCH Full Line : Counts the total number of full line writes issued from the HA into the memory controllerunc_cha_imc_writes_count.partialuncore cacheCHA to iMC Full Line Writes Issued : Partial Non-ISOCHevent=0x5b,umask=201CHA to iMC Full Line Writes Issued : Partial Non-ISOCH : Counts the total number of full line writes issued from the HA into the memory controllerunc_cha_imc_writes_count.partial_priorityuncore cacheCHA to iMC Full Line Writes Issued : ISOCH Partialevent=0x5b,umask=801CHA to iMC Full Line Writes Issued : ISOCH Partial : Counts the total number of full line writes issued from the HA into the memory controllerunc_cha_llc_lookup.alluncore cacheCache and Snoop Filter Lookups; Any Requestevent=0x34,umask=0x1fffff01Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2.  This has numerous filters available.  Note the non-standard filtering equation.  This event will count requests that lookup the cache multiple times with multiple increments.  One must ALWAYS set umask bit 0 and select a state or states to match.  Otherwise, the event will count nothing.   CHAFilter0[24:21,17] bits correspond to [FMESI] state.; Filters for any transaction originating from the IPQ or IRQ.  This does not include lookups originating from the ISMQunc_cha_llc_lookup.all_remoteuncore cacheCache Lookups : All transactions from Remote Agentsevent=0x34,umask=0x17e0ff01Cache Lookups : All transactions from Remote Agents : Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2.  This has numerous filters available.  Note the non-standard filtering equation.  This event will count requests that lookup the cache multiple times with multiple increments.  One must ALWAYS select a state or states (in the umask field) to match.  Otherwise, the event will count nothingunc_cha_llc_lookup.any_funcore cacheCache Lookups : All Requestsevent=0x3401Cache Lookups : All Requests : Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2.  This has numerous filters available.  Note the non-standard filtering equation. 
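The event=/umask= pairs listed here can be programmed directly through perf_event_open(2). Below is a minimal C sketch, under the assumptions that the host kernel exposes the CHA PMUs as /sys/bus/event_source/devices/uncore_cha_<n> (the PMU name varies by platform) and that this PMU places the event select in config bits 0-7 and the umask in bits 8-15, which should be verified against the PMU's sysfs format directory. It opens unc_cha_hitme_lookup.read (event=0x5e, umask=0x01) on CHA instance 0 and prints the raw count after one second.

    /* Minimal sketch: open one CHA uncore event and read it. Assumptions
     * noted above; error handling is kept to the bare minimum. */
    #define _GNU_SOURCE
    #include <linux/perf_event.h>
    #include <sys/syscall.h>
    #include <sys/types.h>
    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>
    #include <unistd.h>

    static long perf_event_open(struct perf_event_attr *attr, pid_t pid,
                                int cpu, int group_fd, unsigned long flags)
    {
        return syscall(__NR_perf_event_open, attr, pid, cpu, group_fd, flags);
    }

    int main(void)
    {
        unsigned type;
        FILE *f = fopen("/sys/bus/event_source/devices/uncore_cha_0/type", "r");
        if (!f || fscanf(f, "%u", &type) != 1) { perror("pmu type"); return 1; }
        fclose(f);

        struct perf_event_attr attr;
        memset(&attr, 0, sizeof(attr));
        attr.size = sizeof(attr);
        attr.type = type;                    /* dynamic PMU type from sysfs */
        attr.config = 0x5e | (0x1ULL << 8);  /* event in bits 0-7, umask in 8-15 */

        /* Uncore events are per-socket: pid = -1, pick one CPU on the socket. */
        int fd = (int)perf_event_open(&attr, -1, 0, -1, 0);
        if (fd < 0) { perror("perf_event_open"); return 1; }

        sleep(1);                            /* count for one second */
        uint64_t count = 0;
        if (read(fd, &count, sizeof(count)) != sizeof(count)) { perror("read"); return 1; }
        printf("unc_cha_hitme_lookup.read (cha_0): %llu\n",
               (unsigned long long)count);
        close(fd);
        return 0;
    }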
unc_cha_llc_lookup.* (uncore cache), event=0x34 : Cache and Snoop Filter Lookups
  Counts the number of times the LLC was accessed; this includes code, data, prefetches, and hints coming from L2. The event has numerous filters available; note the non-standard filtering equation. Requests that look up the cache multiple times count with multiple increments. One must ALWAYS set umask bit 0 and select a state or states to match; otherwise, the event will count nothing. CHAFilter0[24:21,17] bits correspond to [FMESI] state. The "_f" variants below are listed with event=0x34 and no umask.
  .all                       umask=0x1fffff  Any Request: any transaction originating from the IPQ or IRQ; does not include lookups originating from the ISMQ
  .all_remote                umask=0x17e0ff  All transactions from Remote Agents
  .any_f                     (no umask)      All Requests: any local or remote transaction to the LLC, including prefetch
  .code                      umask=0x1bd0ff  CRd Requests: local or remote CRd transactions to the LLC, including CRd prefetch
  .code_read_f               (no umask)      CRd Requests: local or remote CRd transactions to the LLC, including CRd prefetch
  .corepref_or_dmnd_local_f  (no umask)      Local non-prefetch requests: any local transaction to the LLC, not including prefetch
  .data_rd                   umask=0x1bc1ff  Data Read Request: read transactions
  .data_read_all             umask=0x1fc1ff  Data Reads
  .data_read_f               (no umask)      Data Read Request: read transactions
  .data_read_local           umask=0x841ff   Demand Data Reads, Core and LLC prefetches
  .data_read_miss            umask=0x1fc101  Data Read Misses
  .e                         umask=0x20      E State: hit Exclusive state
  .f                         umask=0x80      F State: hit Forward state
  .flush_inv                 umask=0x1a44ff  Flush or Invalidate Requests
  .flush_or_inv_f            (no umask)      Flush
  .i                         umask=0x01      I State: miss
  .llcpref_local_f           (no umask)      Local LLC prefetch requests (from LLC): any local LLC prefetch to the LLC
  .locally_homed_address     umask=0xbdfff   Transactions homed locally: transaction whose address resides in the local MC
  .local_code                umask=0x19d0ff  CRd Requests that come from the local socket (usually the core)
  .local_data_rd             umask=0x19c1ff  Data Read Requests that come from the local socket (usually the core)
  .local_dmnd_code           umask=0x1850ff  Demand CRd Requests that come from the local socket (usually the core)
  .local_dmnd_data_rd        umask=0x1841ff  Demand Data Reads that come from the local socket (usually the core)
  .local_dmnd_rfo            umask=0x1848ff  Demand RFO Requests that come from the local socket (usually the core)
  .local_f                   (no umask)      Transactions homed locally: transaction whose address resides in the local MC
  .local_flush_inv           umask=0x1844ff  Flush or Invalidate Requests that come from the local socket (usually the core)
  .local_llc_pf              umask=0x189dff  Prefetch requests to the LLC that come from the local socket (usually the core)
  .local_pf                  umask=0x199dff  Data Read Prefetches that come from the local socket (usually the core)
  .local_pf_code             umask=0x1910ff  CRd Prefetches that come from the local socket (usually the core)
  .local_pf_data_rd          umask=0x1981ff  Data Read Prefetches that come from the local socket (usually the core)
  .local_pf_rfo              umask=0x1908ff  RFO Prefetches that come from the local socket (usually the core)
  .local_rfo                 umask=0x19c8ff  RFO Requests that come from the local socket (usually the core)
  .m                         umask=0x40      M State: hit Modified state
  .miss_all                  umask=0x1fe001  All Misses
  .other_req_f               (no umask)      Write Requests: writeback transactions from L2 to the LLC; includes all write transactions, both cacheable and UC
  .pref_or_dmnd_remote_f     (no umask)      Remote non-snoop requests: remote non-snoop transactions to the LLC
  .remotely_homed_address    umask=0x15dfff  Transactions homed remotely: transaction whose address resides in a remote MC
  .remote_code               umask=0x1a10ff  CRd Requests that come from a remote socket
  .remote_data_rd            umask=0x1a01ff  Data Read Requests that come from a remote socket
  .remote_f                  (no umask)      Transactions homed remotely: transaction whose address resides in a remote MC
  .remote_flush_inv          umask=0x1a04ff  Flush or Invalidate requests that come from a remote socket
  .remote_other              umask=0x1a02ff  Requests that write info into the cache and come from a remote socket
  .remote_rfo                umask=0x1a08ff  RFO Requests that come from a remote socket
  .remote_snoop_f            (no umask)      Remote snoop requests: remote snoop transactions to the LLC
  .remote_snp                umask=0x1c19ff  Snoop Requests from a Remote Socket (IPQ/IRQ originated; does not include lookups originating from the ISMQ)
  .rfo                       umask=0x1bc8ff  RFO Requests: local or remote RFO transactions to the LLC, including RFO prefetch
  .rfo_f                     (no umask)      RFO Request Filter: local or remote RFO transactions to the LLC, including RFO prefetch
  .rfo_local                 umask=0x9c8ff   Locally HOMed RFOs - Demand and Prefetches
  .s                         umask=0x10      S State: hit Shared state
  .sf_e                      umask=0x04      SnoopFilter - E State: SF hit Exclusive state
  .sf_h                      umask=0x08      SnoopFilter - H State: SF hit HitMe state
  .sf_s                      umask=0x02      SnoopFilter - S State: SF hit Shared state
  .write_local               umask=0x842ff   Writes: requests that install or change a line in the LLC; examples: writebacks from core L2s and UPI, prefetches into the LLC
  .write_remote              umask=0x17c2ff  Remote Writes
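The single-state encodings above (.i, .sf_s, .sf_e, .sf_h, .s, .e, .m, .f) suggest how the wider LLC_LOOKUP umasks are built: the low eight umask bits select which line states to match, and the upper bits select request type and origin, so a "miss" variant is the same request filter restricted to the I state. This is an inference from the values listed here, not a documented register layout; also note that umasks wider than 8 bits cannot fit in config bits 8-15 alone, and PMUs with extended umasks describe the extra bits in their sysfs format directory. A small C sketch of the decomposition:

    /* Sketch: decompose the listed LLC_LOOKUP umask values (inferred layout:
     * low 8 bits = state select, upper bits = request-type/origin filter). */
    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    enum {  /* state-select bits, taken from the single-state events above */
        ST_I = 0x01, ST_SF_S = 0x02, ST_SF_E = 0x04, ST_SF_H = 0x08,
        ST_S = 0x10, ST_E = 0x20, ST_M = 0x40, ST_F = 0x80,
        ST_ANY = 0xff,
    };

    int main(void)
    {
        const uint32_t data_rd_filter = 0x1fc100;  /* request-type bits of DATA_READ_ALL */

        uint32_t all  = data_rd_filter | ST_ANY;   /* 0x1fc1ff: any state, hit or miss */
        uint32_t miss = data_rd_filter | ST_I;     /* 0x1fc101: I state only = miss */
        assert(all == 0x1fc1ff && miss == 0x1fc101);  /* matches the listed encodings */

        /* With both counters collected, the LLC data-read miss ratio follows: */
        uint64_t lookups = 1000000, misses = 150000;  /* example values */
        printf("umasks: all=0x%x miss=0x%x, miss ratio = %.3f\n",
               all, miss, (double)misses / (double)lookups);
        return 0;
    }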
unc_cha_llc_victims.* (uncore cache), event=0x37 : Lines Victimized
  Counts the number of lines that were victimized on a fill. This can be filtered by the state that the line was in.
  .e_state      umask=0x02    Lines in E state
  .ia           umask=0x20    IA traffic
  .io           umask=0x10    IO traffic
  .io_e         umask=0x12    All LLC lines in E state that are victimized on a fill from an IO device
  .io_fs        umask=0x1c    All LLC lines in F or S state that are victimized on a fill from an IO device
  .io_m         umask=0x11    All LLC lines in M state that are victimized on a fill from an IO device
  .io_mesf      umask=0x1f    All LLC lines in any state that are victimized on a fill from an IO device
  .local_all    umask=0x200f  Local - All Lines
  .local_e      umask=0x2002
  .local_m      umask=0x2001
  .local_only   (no umask)    Local Only
  .local_s      umask=0x2004
  .m_state      umask=0x01    Lines in M state
  .remote_all   umask=0x800f  Remote - All Lines
  .remote_e     umask=0x8002
  .remote_m     umask=0x8001
  .remote_only  (no umask)    Remote Only
  .remote_s     umask=0x8004
  .s_state      umask=0x04    Lines in S state
  .total_e      umask=0x02    All LLC lines in E state that are victimized on a fill
  .total_m      umask=0x01    All LLC lines in M state that are victimized on a fill
  .total_s      umask=0x04    All LLC lines in S state that are victimized on a fill

unc_cha_misc.* (uncore cache), event=0x39 : Cbo Misc
  Miscellaneous events in the Cbo.
  .cv0_pref_miss  umask=0x20  CV0 Prefetch Miss
  .cv0_pref_vic   umask=0x10  CV0 Prefetch Victim
  .rspi_was_fse   umask=0x01  Silent Snoop Eviction: counts the number of times a snoop hit in FSE states and triggered a silent eviction; useful because this information is lost in the PRE encodings
  .wc_aliasing    umask=0x02  Write Combining Aliasing: counts the number of times a USWC write (WCIL(F)) transaction hit in the LLC in M state, triggering a WBMtoI followed by the USWC write; this occurs when there is WC aliasing

unc_cha_osb.* (uncore cache), event=0x55 : OSB Snoop Broadcast
  Count of OSB snoop broadcasts. Counts by 1 per request causing OSB snoops to be broadcast; does not count all the snoops generated by OSB.
  .local_invitoe       umask=0x01  Local InvItoE
  .local_read          umask=0x02  Local Rd
  .off_pwrheuristic    umask=0x20  Off
  .remote_read         umask=0x04  Remote Rd
  .remote_readinvitoe  umask=0x08  Remote Rd InvItoE
  .rfo_hits_snp_bcast  umask=0x10  RFO HitS Snoop Broadcast
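A CHA manages one slice of the LLC, so slice-local counts like the victim events above generally need to be summed over every uncore_cha_<n> instance to get a socket-wide figure. A minimal C sketch under the same sysfs and config-layout assumptions as the earlier example, summing unc_cha_llc_victims.m_state (event=0x37, umask=0x01) across all instances:

    /* Sketch: one counter per CHA instance, one shared measurement window. */
    #define _GNU_SOURCE
    #include <linux/perf_event.h>
    #include <sys/syscall.h>
    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        int fds[128], n = 0;

        /* One event per CHA instance: stop at the first missing sysfs node. */
        for (int i = 0; i < 128; i++) {
            char path[128];
            unsigned type;
            snprintf(path, sizeof(path),
                     "/sys/bus/event_source/devices/uncore_cha_%d/type", i);
            FILE *f = fopen(path, "r");
            if (!f) break;
            int ok = (fscanf(f, "%u", &type) == 1);
            fclose(f);
            if (!ok) break;

            struct perf_event_attr attr;
            memset(&attr, 0, sizeof(attr));
            attr.size = sizeof(attr);
            attr.type = type;
            attr.config = 0x37 | (0x1ULL << 8);  /* llc_victims.m_state */

            int fd = (int)syscall(__NR_perf_event_open, &attr, -1, 0, -1, 0);
            if (fd >= 0)
                fds[n++] = fd;
        }

        sleep(1);  /* let all instances count over the same interval */

        uint64_t total = 0;
        for (int i = 0; i < n; i++) {
            uint64_t v = 0;
            if (read(fds[i], &v, sizeof(v)) == sizeof(v))
                total += v;
            close(fds[i]);
        }
        printf("M-state victims across %d CHAs: %llu\n", n,
               (unsigned long long)total);
        return 0;
    }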
unc_cha_pmm_memmode_nm_invitox.* (uncore cache), event=0x65
  .local        umask=0x01
  .remote       umask=0x02
  .setconflict  umask=0x04

unc_cha_pmm_memmode_nm_setconflicts.* (uncore cache), event=0x64 : Memory Mode related events
  .llc  umask=0x02  Counts the number of times CHA saw a Near Memory set conflict in SF/LLC: Near Memory evictions due to another read to the same Near Memory set in the LLC
  .sf   umask=0x01  Counts the number of times CHA saw a Near Memory set conflict in SF/LLC: Near Memory evictions due to another read to the same Near Memory set in the SF
  .tor  umask=0x04  Counts the number of times CHA saw a Near Memory set conflict in TOR: no Reject in the CHA due to a pending read to the same Near Memory set in the TOR

unc_cha_pmm_memmode_nm_setconflicts2.* (uncore cache), event=0x70
  .iodc     umask=0x01
  .memwr    umask=0x02
  .memwrni  umask=0x04

unc_cha_pmm_qos.* (uncore cache), event=0x66
  .ddr4_fast_insert  umask=0x02
  .rej_irq           umask=0x08
  .slowtorq_skip     umask=0x40
  .slow_insert       umask=0x01
  .throttle          umask=0x04
  .throttle_irq      umask=0x20
  .throttle_prq      umask=0x10

unc_cha_pmm_qos_occupancy.* (uncore cache), event=0x67
  .ddr_fast_fifo  umask=0x02  Count of FAST TOR Requests inserted into ha_tor_req_fifo
  .ddr_slow_fifo  umask=0x01  Number of SLOW TOR Requests inserted into ha_pmm_tor_req_fifo

unc_cha_read_no_credits.* (uncore cache), event=0x58 : CHA iMC CHNx READ Credits Empty
  Counts the number of times when there are no credits available for sending reads from the CHA into the iMC. In order to send reads into the memory controller, the HA must first acquire a credit for the iMC's AD Ingress queue.
  .mc0  umask=0x01  Filter for memory controller 0 only
  .mc1  umask=0x02  Filter for memory controller 1 only
  .mc2  umask=0x04  Filter for memory controller 2 only
  .mc3  umask=0x08  Filter for memory controller 3 only
  .mc4  umask=0x10  Filter for memory controller 4 only
  .mc5  umask=0x20  Filter for memory controller 5 only

unc_cha_requests.* (uncore cache), event=0x50
  .invitoe         umask=0x30  Requests for exclusive ownership of a cache line without receiving data: counts the total number of requests coming from a unit on this socket for exclusive ownership of a cache line without receiving data (InvItoE) to the CHA
  .invitoe_remote  umask=0x20  Remote requests for exclusive ownership of a cache line without receiving data: counts the total number of requests coming from a remote socket for exclusive ownership of a cache line without receiving data (InvItoE) to the CHA
  .reads           umask=0x03  Read requests made into the CHA: reads include all read opcodes, including RFO (the Read for Ownership issued before a write)
  .writes          umask=0x0c  Write requests made into the CHA, including streaming, evictions, HitM (reads from another core to a Modified cacheline), etc.
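Because unc_cha_requests.reads counts line-granularity read requests (including RFOs), a counter delta multiplied by the 64-byte line size gives a rough read-traffic estimate; this is a back-of-envelope figure, not a calibrated bandwidth measurement. A tiny helper, with the 64-byte line size as the only assumption:

    /* Back-of-envelope: convert a UNC_CHA_REQUESTS.READS delta (already
     * summed over all CHA instances on the socket) into MiB/s. */
    #include <stdint.h>
    #include <stdio.h>

    static double cha_read_bw_mib_s(uint64_t reads_delta, double seconds)
    {
        return (double)reads_delta * 64.0 / (1024.0 * 1024.0) / seconds;
    }

    int main(void)
    {
        /* Example: 10 million line reads in one second is ~610 MiB/s. */
        printf("%.1f MiB/s\n", cha_read_bw_mib_s(10000000, 1.0));
        return 0;
    }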
unc_cha_rxc_inserts.* (uncore cache), event=0x13 : Ingress (from CMS) Allocations
  Counts the number of allocations per cycle into the specified Ingress queue.
  .ipq      umask=0x04  IPQ
  .irq      umask=0x01  IRQ
  .irq_rej  umask=0x02  IRQ Rejected
  .prq      umask=0x10  PRQ
  .prq_rej  umask=0x20  PRQ Rejected
  .rrq      umask=0x40  RRQ
  .wbq      umask=0x80  WBQ

unc_cha_rxc_ipq0_reject.* (uncore cache), event=0x22 : IPQ Requests (from CMS) Rejected - Set 0
  .ad_req_vn0  umask=0x01  AD REQ on VN0: no AD VN0 credit for generating a request
  .ad_rsp_vn0  umask=0x02  AD RSP on VN0: no AD VN0 credit for generating a response
  .ak_non_upi  umask=0x40  Non UPI AK Request: can't inject AK ring message
  .bl_ncb_vn0  umask=0x10  BL NCB on VN0: no BL VN0 credit for NCB
  .bl_ncs_vn0  umask=0x20  BL NCS on VN0: no BL VN0 credit for NCS
  .bl_rsp_vn0  umask=0x04  BL RSP on VN0: no BL VN0 credit for generating a response
  .bl_wb_vn0   umask=0x08  BL WB on VN0: no BL VN0 credit for generating a writeback
  .iv_non_upi  umask=0x80  Non UPI IV Request: can't inject IV ring message

unc_cha_rxc_ipq1_reject.* (uncore cache), event=0x23 : IPQ Requests (from CMS) Rejected - Set 1
  .allow_snp      umask=0x40  Allow Snoop
  .any0           umask=0x01  ANY0: any condition listed in the IPQ0 Reject counter was true
  .ha             umask=0x02  HA
  .llc_or_sf_way  umask=0x20  LLC or SF Way: way conflict with another request that caused the reject
  .llc_victim     umask=0x04  LLC Victim
  .pa_match       umask=0x80  PhyAddr Match: address match with an outstanding request that was rejected
  .sf_victim      umask=0x08  SF Victim: requests did not generate a Snoop Filter victim
  .victim         umask=0x10  Victim

unc_cha_rxc_irq0_reject.* (uncore cache), event=0x18 : IRQ Requests (from CMS) Rejected - Set 0
  .ad_req_vn0  umask=0x01  AD REQ on VN0: no AD VN0 credit for generating a request
  .ad_rsp_vn0  umask=0x02  AD RSP on VN0: no AD VN0 credit for generating a response
  .ak_non_upi  umask=0x40  Non UPI AK Request: can't inject AK ring message
  .bl_ncb_vn0  umask=0x10  BL NCB on VN0: no BL VN0 credit for NCB
  .bl_ncs_vn0  umask=0x20  BL NCS on VN0: no BL VN0 credit for NCS
  .bl_rsp_vn0  umask=0x04  BL RSP on VN0: no BL VN0 credit for generating a response
  .bl_wb_vn0   umask=0x08  BL WB on VN0: no BL VN0 credit for generating a writeback
  .iv_non_upi  umask=0x80  Non UPI IV Request: can't inject IV ring message
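For ratios such as rejects per insert, both counts should cover exactly the same interval; perf can read a group leader and its siblings in one atomic snapshot via PERF_FORMAT_GROUP. A sketch under the same sysfs and config-layout assumptions as the earlier examples, pairing unc_cha_rxc_inserts.irq (event=0x13, umask=0x01) with unc_cha_rxc_inserts.irq_rej (umask=0x02):

    /* Sketch: grouped read of IRQ inserts and IRQ rejects on one CHA. */
    #define _GNU_SOURCE
    #include <linux/perf_event.h>
    #include <sys/syscall.h>
    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>
    #include <unistd.h>

    static int open_cha_event(unsigned type, uint64_t config, int group_fd)
    {
        struct perf_event_attr attr;
        memset(&attr, 0, sizeof(attr));
        attr.size = sizeof(attr);
        attr.type = type;
        attr.config = config;
        attr.read_format = PERF_FORMAT_GROUP;  /* leader read returns the group */
        return (int)syscall(__NR_perf_event_open, &attr, -1, 0, group_fd, 0);
    }

    int main(void)
    {
        unsigned type;
        FILE *f = fopen("/sys/bus/event_source/devices/uncore_cha_0/type", "r");
        if (!f || fscanf(f, "%u", &type) != 1) { perror("pmu type"); return 1; }
        fclose(f);

        int irq = open_cha_event(type, 0x13 | (0x1ULL << 8), -1);   /* leader */
        int rej = open_cha_event(type, 0x13 | (0x2ULL << 8), irq);  /* sibling */
        if (irq < 0 || rej < 0) { perror("perf_event_open"); return 1; }

        sleep(1);

        /* PERF_FORMAT_GROUP layout: u64 nr, then one u64 per group member,
         * leader first. */
        struct { uint64_t nr, values[2]; } g;
        if (read(irq, &g, sizeof(g)) < 0) { perror("read"); return 1; }
        printf("IRQ inserts=%llu rejects=%llu reject ratio=%.4f\n",
               (unsigned long long)g.values[0], (unsigned long long)g.values[1],
               g.values[0] ? (double)g.values[1] / (double)g.values[0] : 0.0);
        close(rej);
        close(irq);
        return 0;
    }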
HAevent=0x19,umask=201unc_cha_rxc_irq1_reject.llc_or_sf_wayuncore cacheIRQ Requests (from CMS) Rejected - Set 1 : LLC or SF Wayevent=0x19,umask=0x2001IRQ Requests (from CMS) Rejected - Set 1 : LLC or SF Way : Way conflict with another request that caused the rejectunc_cha_rxc_irq1_reject.llc_victimuncore cacheIRQ Requests (from CMS) Rejected - Set 1 : LLC Victimevent=0x19,umask=401unc_cha_rxc_irq1_reject.sf_victimuncore cacheIRQ Requests (from CMS) Rejected - Set 1 : SF Victimevent=0x19,umask=801IRQ Requests (from CMS) Rejected - Set 1 : SF Victim : Requests did not generate Snoop filter victimunc_cha_rxc_irq1_reject.victimuncore cacheIRQ Requests (from CMS) Rejected - Set 1 : Victimevent=0x19,umask=0x1001unc_cha_rxc_ismq0_reject.ad_req_vn0uncore cacheISMQ Rejects - Set 0 : AD REQ on VN0event=0x24,umask=101ISMQ Rejects - Set 0 : AD REQ on VN0 : Number of times a transaction flowing through the ISMQ had to retry.  Transaction pass through the ISMQ as responses for requests that already exist in the Cbo.  Some examples include: when data is returned or when snoop responses come back from the cores. : No AD VN0 credit for generating a requestunc_cha_rxc_ismq0_reject.ad_rsp_vn0uncore cacheISMQ Rejects - Set 0 : AD RSP on VN0event=0x24,umask=201ISMQ Rejects - Set 0 : AD RSP on VN0 : Number of times a transaction flowing through the ISMQ had to retry.  Transaction pass through the ISMQ as responses for requests that already exist in the Cbo.  Some examples include: when data is returned or when snoop responses come back from the cores. : No AD VN0 credit for generating a responseunc_cha_rxc_ismq0_reject.ak_non_upiuncore cacheISMQ Rejects - Set 0 : Non UPI AK Requestevent=0x24,umask=0x4001ISMQ Rejects - Set 0 : Non UPI AK Request : Number of times a transaction flowing through the ISMQ had to retry.  Transaction pass through the ISMQ as responses for requests that already exist in the Cbo.  Some examples include: when data is returned or when snoop responses come back from the cores. : Can't inject AK ring messageunc_cha_rxc_ismq0_reject.bl_ncb_vn0uncore cacheISMQ Rejects - Set 0 : BL NCB on VN0event=0x24,umask=0x1001ISMQ Rejects - Set 0 : BL NCB on VN0 : Number of times a transaction flowing through the ISMQ had to retry.  Transaction pass through the ISMQ as responses for requests that already exist in the Cbo.  Some examples include: when data is returned or when snoop responses come back from the cores. : No BL VN0 credit for NCBunc_cha_rxc_ismq0_reject.bl_ncs_vn0uncore cacheISMQ Rejects - Set 0 : BL NCS on VN0event=0x24,umask=0x2001ISMQ Rejects - Set 0 : BL NCS on VN0 : Number of times a transaction flowing through the ISMQ had to retry.  Transaction pass through the ISMQ as responses for requests that already exist in the Cbo.  Some examples include: when data is returned or when snoop responses come back from the cores. : No BL VN0 credit for NCSunc_cha_rxc_ismq0_reject.bl_rsp_vn0uncore cacheISMQ Rejects - Set 0 : BL RSP on VN0event=0x24,umask=401ISMQ Rejects - Set 0 : BL RSP on VN0 : Number of times a transaction flowing through the ISMQ had to retry.  Transaction pass through the ISMQ as responses for requests that already exist in the Cbo.  Some examples include: when data is returned or when snoop responses come back from the cores. : No BL VN0 credit for generating a responseunc_cha_rxc_ismq0_reject.bl_wb_vn0uncore cacheISMQ Rejects - Set 0 : BL WB on VN0event=0x24,umask=801ISMQ Rejects - Set 0 : BL WB on VN0 : Number of times a transaction flowing through the ISMQ had to retry.  
unc_cha_rxc_ismq0_retry (uncore cache, event=0x2c): ISMQ Retries - Set 0
  Same ISMQ note as above; sub-events and umasks mirror unc_cha_rxc_ismq0_reject
  (.ad_req_vn0 0x01, .ad_rsp_vn0 0x02, .ak_non_upi 0x40, .bl_ncb_vn0 0x10,
  .bl_ncs_vn0 0x20, .bl_rsp_vn0 0x04, .bl_wb_vn0 0x08, .iv_non_upi 0x80), with the
  same per-reason descriptions.

unc_cha_rxc_ismq1_reject (uncore cache, event=0x25): ISMQ Rejects - Set 1
  Same ISMQ note as above.
  .any0  umask=0x01  ANY0: any condition listed in the ISMQ0 Reject counter was true
  .ha    umask=0x02  HA

unc_cha_rxc_ismq1_retry (uncore cache, event=0x2d): ISMQ Retries - Set 1
  Same ISMQ note as above.
  .any0  umask=0x01  ANY0: any condition listed in the ISMQ0 Reject counter was true
  .ha    umask=0x02  HA
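The event=/umask= pairs above are the raw programming values for a CHA counter. As a minimal sketch, assuming the conventional Intel uncore layout (event select in config bits 0-7, umask in bits 8-15; the authoritative layout is whatever /sys/bus/event_source/devices/uncore_cha_*/format exposes on the target kernel), a pair such as event=0x18,umask=0x1 maps to a perf event like this:

    # Minimal sketch: turn a CHA event/umask pair from the tables above into
    # a perf event string and an equivalent raw config value. The bit layout
    # is an assumption; verify it against the PMU's sysfs format directory.

    def cha_event_string(event: int, umask: int) -> str:
        # perf accepts the named terms directly, e.g. uncore_cha/event=0x18,umask=0x1/
        return f"uncore_cha/event={event:#x},umask={umask:#x}/"

    def cha_raw_config(event: int, umask: int) -> int:
        # Equivalent perf_event_attr.config under the assumed layout
        # (umask in bits 8-15, event select in bits 0-7).
        return (umask << 8) | event

    if __name__ == "__main__":
        # unc_cha_rxc_irq0_reject.ad_req_vn0: event=0x18, umask=0x01
        print(cha_event_string(0x18, 0x01))     # uncore_cha/event=0x18,umask=0x1/
        print(hex(cha_raw_config(0x18, 0x01)))  # 0x118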
unc_cha_rxc_occupancy (uncore cache, event=0x11): Ingress (from CMS) Occupancy
  Counts the number of entries in the specified ingress queue in each cycle.
  .ipq  umask=0x04  IPQ
  .rrq  umask=0x40  RRQ
  .wbq  umask=0x80  WBQ

unc_cha_rxc_other0_retry (uncore cache, event=0x2e): Other Retries - Set 0
  Retry-queue inserts of transactions that were already in another retry queue
  (the sub-events encode the reason for the next reject). Sub-events and umasks are
  the same eight credit/inject reasons as unc_cha_rxc_irq0_reject above.

unc_cha_rxc_other1_retry (uncore cache, event=0x2f): Other Retries - Set 1
  Same retry-queue note as above.
  .allow_snp     umask=0x40  Allow Snoop
  .any0          umask=0x01  ANY0: any condition listed in the Other0 Reject counter was true
  .ha            umask=0x02  HA
  .llc_or_sf_way umask=0x20  LLC OR SF Way: way conflict with another request caused the reject
  .llc_victim    umask=0x04  LLC Victim
  .pa_match      umask=0x80  PhyAddr Match: address match with an outstanding request that was rejected
  .sf_victim     umask=0x08  SF Victim: requests did not generate a Snoop Filter victim
  .victim        umask=0x10  Victim

unc_cha_rxc_prq0_reject (uncore cache, event=0x20): PRQ Requests (from CMS) Rejected - Set 0
  Sub-events and umasks are the same eight credit/inject reasons as
  unc_cha_rxc_irq0_reject above.
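Because the occupancy events count resident entries per cycle, the usual derived metrics follow from Little's law: occupancy divided by elapsed uncore cycles gives average queue depth, and occupancy divided by a matching inserts count gives average residency. A small sketch with placeholder readings; the uncore clock and inserts counts are not part of the excerpt above, so treating them as available counters is an assumption:

    # Sketch of occupancy-derived metrics for the IPQ/RRQ/WBQ ingress queues.
    # Counter values below are made-up placeholders.

    def avg_queue_depth(occupancy: int, uncore_cycles: int) -> float:
        # Average number of resident queue entries per uncore cycle.
        return occupancy / uncore_cycles

    def avg_residency_cycles(occupancy: int, inserts: int) -> float:
        # Average cycles an entry stays in the queue (Little's law).
        return occupancy / inserts

    # Example with UNC_CHA_RXC_OCCUPANCY.IPQ (event=0x11, umask=0x4) readings
    # and a hypothetical matching inserts counter:
    print(avg_queue_depth(occupancy=1_200_000, uncore_cycles=2_000_000))  # 0.6
    print(avg_residency_cycles(occupancy=1_200_000, inserts=30_000))      # 40.0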
unc_cha_rxc_prq1_reject (uncore cache, event=0x21): PRQ Requests (from CMS) Rejected - Set 1
  .allow_snp     umask=0x40  Allow Snoop
  .any0          umask=0x01  ANY0: any condition listed in the PRQ0 Reject counter was true
  .ha            umask=0x02  HA
  .llc_or_sf_way umask=0x20  LLC OR SF Way: way conflict with another request caused the reject
  .llc_victim    umask=0x04  LLC Victim
  .pa_match      umask=0x80  PhyAddr Match: address match with an outstanding request that was rejected
  .sf_victim     umask=0x08  SF Victim: requests did not generate a Snoop Filter victim
  .victim        umask=0x10  Victim

unc_cha_rxc_req_q0_retry (uncore cache, event=0x2a): Request Queue Retries - Set 0
  REQUESTQ includes the IRQ, PRQ, IPQ, RRQ and WBQ (everything except the ISMQ).
  Sub-events and umasks are the same eight credit/inject reasons as
  unc_cha_rxc_irq0_reject above.

unc_cha_rxc_req_q1_retry (uncore cache, event=0x2b): Request Queue Retries - Set 1
  Same REQUESTQ note as above. Sub-events and umasks mirror unc_cha_rxc_prq1_reject
  (.allow_snp 0x40, .any0 0x01, .ha 0x02, .llc_or_sf_way 0x20, .llc_victim 0x04,
  .pa_match 0x80, .sf_victim 0x08, .victim 0x10); for .any0, any condition listed in
  the WBQ0 Reject counter was true.

unc_cha_rxc_rrq0_reject (uncore cache, event=0x26): RRQ Rejects - Set 0
  Number of times a transaction flowing through the RRQ (Remote Response Queue) had
  to retry. Sub-events and umasks are the same eight credit/inject reasons as
  unc_cha_rxc_irq0_reject above.
unc_cha_rxc_rrq1_reject (uncore cache, event=0x27): RRQ Rejects - Set 1
  Same RRQ note as above. Sub-events and umasks mirror unc_cha_rxc_prq1_reject
  (.allow_snp 0x40, .any0 0x01, .ha 0x02, .llc_or_sf_way 0x20, .llc_victim 0x04,
  .pa_match 0x80, .sf_victim 0x08, .victim 0x10); for .any0, any condition listed in
  the RRQ0 Reject counter was true.

unc_cha_rxc_wbq0_reject (uncore cache, event=0x28): WBQ Rejects - Set 0
  Number of times a transaction flowing through the WBQ (Writeback Queue) had to
  retry. Sub-events and umasks are the same eight credit/inject reasons as
  unc_cha_rxc_irq0_reject above.

unc_cha_rxc_wbq1_reject (uncore cache, event=0x29): WBQ Rejects - Set 1
  Same WBQ note as above. Sub-events and umasks mirror unc_cha_rxc_prq1_reject
  (.allow_snp 0x40, .any0 0x01, .ha 0x02, .llc_or_sf_way 0x20, .llc_victim 0x04,
  .pa_match 0x80, .sf_victim 0x08, .victim 0x10); for .any0, any condition listed in
  the WBQ0 Reject counter was true.
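To observe any of the reject families above in practice, the Set-1 ANY0 sub-event is a natural starting point, since it ORs together every Set-0 reason. A hedged sketch driving the perf CLI from Python; it assumes a kernel that exposes the CHA boxes as an uncore_cha PMU (the exact name varies by kernel, e.g. one uncore_cha_N instance per box):

    # Sketch: count WBQ rejects (any reason) system-wide for one second,
    # using unc_cha_rxc_wbq1_reject.any0 (event=0x29, umask=0x1) from the
    # table above. perf prints the counts to stderr.
    import subprocess

    cmd = [
        "perf", "stat", "-a",                      # system-wide
        "-e", "uncore_cha/event=0x29,umask=0x1/",  # WBQ Rejects - Set 1 : ANY0
        "--", "sleep", "1",
    ]
    subprocess.run(cmd, check=False)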
unc_cha_snoops_sent (uncore cache, event=0x51): Snoops Sent
  Counts the number of snoops issued by the HA.
  .all            umask=0x01  All
  .bcst_local     umask=0x10  Broadcast snoops; includes only requests from local sockets
  .bcst_remote    umask=0x20  Broadcast snoops; includes only requests from remote sockets
  .direct_local   umask=0x40  Directed snoops; includes only requests from local sockets
  .direct_remote  umask=0x80  Directed snoops; includes only requests from remote sockets
  .local          umask=0x04  Broadcast or directed snoops issued per request; local socket only
  .remote         umask=0x08  Broadcast or directed snoops issued per request; remote socket only

unc_cha_snoop_resp (uncore cache, event=0x5c): Snoop Responses Received
  Counts the total number of snoop responses received. Whenever snoops are issued,
  one or more snoop responses are returned, depending on the topology of the system;
  in systems larger than 2S, when multiple snoops are returned this counts all of
  them. For example, if three snoops were issued and returned RspI, RspS and RspSFwd,
  each of those sub-events would increment by 1.
  .rspcnflct  umask=0x40  RSPCNFLCT*: a snoop found an existing outstanding transaction in a
              remote caching agent when it CAMed that agent, triggering conflict-resolution
              hardware; covers both RspCnflct and RspCnflctWbI
  .rspfwd     umask=0x80  RspFwd: only possible for RdCur, when a snoop hits M/E in a remote
              caching agent and data is forwarded directly to the requestor without changing
              the requestor's cache-line state
  .rspfwdwb   umask=0x20  Rsp*Fwd*WB: used only in 4S systems; a snoop HITMs in a remote
              caching agent, which forwards data directly to the requestor and simultaneously
              returns data to the home to be written back to memory
  .rsps       umask=0x02  RspS: a remote cache has the data but is not forwarding it, telling
              the requesting socket it cannot allocate the line in E state; no data is sent
              with RspS
  .rspwb      umask=0x10  Rsp*WB: RspIWB or RspSWB, returned when a non-RFO request hits in M
              state; data and code reads can return either, depending on how the system is
              configured; InvItoE also returns RspIWB because it must acquire ownership

unc_cha_snoop_resp_local (uncore cache, event=0x5d): Snoop Responses Received Local
  Number of snoop responses received for a local request. Response meanings match
  unc_cha_snoop_resp above, filtered to local CA requests.
  .rspcnflct  umask=0x40  RspCnflct
  .rspfwd     umask=0x80  RspFwd
  .rspfwdwb   umask=0x20  Rsp*FWD*WB
  .rspi       umask=0x01  RspI: the remote cache does not have the data, or silently evicted
              it (such as when an RFO hits non-modified data)
  .rspifwd    umask=0x04  RspIFwd: a remote caching agent forwards data and the requestor can
              take the line in E or M state; commonly returned with RFO transactions; can be
              either a HitM or a HitFE
  .rsps       umask=0x02  RspS
  .rspsfwd    umask=0x08  RspSFwd: a remote caching agent forwards data but holds on to its
              current copy; common for data and code reads that hit a remote socket in E or
              F state
  .rspwb      umask=0x10  Rsp*WB
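The response mix from event=0x5d is what distinguishes cache-to-cache transfers from memory-sourced fills. A small sketch of that classification with placeholder counts; collection itself is left out, since each umask would occupy one CHA counter:

    # Sketch: classify a snoop-response mix from event=0x5d counts.
    # All numbers are made-up placeholders.

    resp_local = {          # sub-event -> count
        "rspi":    90_000,  # 0x01: remote cache had no copy / silent evict
        "rsps":    12_000,  # 0x02: remote copy retained in S, no forward
        "rspifwd": 40_000,  # 0x04: cache-to-cache forward, line surrendered
        "rspsfwd":  8_000,  # 0x08: cache-to-cache forward, copy retained
    }
    total = sum(resp_local.values())
    fwd = resp_local["rspifwd"] + resp_local["rspsfwd"]
    print(f"cache-to-cache forwards: {fwd / total:.1%} of local snoop responses")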
unc_cha_snoop_rsp_misc (uncore cache, event=0x6b): Misc Snoop Responses Received
  .mtoi_rspdatam       umask=0x02  MtoI RspIDataM
  .mtoi_rspifwdm       umask=0x01  MtoI RspIFwdM
  .pulldataptl_hitllc  umask=0x20  Pull Data Partial - Hit LLC
  .pulldataptl_hitsf   umask=0x10  Pull Data Partial - Hit SF
  .rspifwdmptl_hitllc  umask=0x08  RspIFwdPtl Hit LLC
  .rspifwdmptl_hitsf   umask=0x04  RspIFwdPtl Hit SF

unc_cha_tor_inserts (uncore cache, event=0x35): TOR Inserts
  Counts the number of entries successfully inserted into the TOR that match the
  qualifications specified by the sub-event. Does not include addressless requests
  such as locks and interrupts.
  .all                   umask=0xc001ffff    All
  .ddr                   (event=0x35 only)   DDR Access
  .evict                 umask=0x02          SF/LLC Evictions (allocation caused by an SF/LLC eviction; came from the ISMQ)
  .hit                   (event=0x35 only)   Just Hits
  .ia                    umask=0xc001ff01    All locally initiated requests from IA cores
  .ia_clflush            umask=0xc8c7ff01    CLFlush events initiated from the core
  .ia_clflushopt         umask=0xc8d7ff01    CLFlushOpt events initiated from the core
  .ia_crd                umask=0xc80fff01    Code reads from local IA that miss the snoop filter
  .ia_crd_pref           umask=0xc88fff01    Code-read prefetches from local IA that miss the snoop filter
  .ia_drd                umask=0xc817ff01    Data reads from local IA that miss the snoop filter
  .ia_drdpte             umask=0xc837ff01    DRd PTEs issued by iA cores due to a page walk
  .ia_drd_opt            umask=0xc827ff01    Data read opt from local IA that misses the snoop filter
  .ia_drd_opt_pref       umask=0xc8a7ff01    Data read opt prefetches from local IA that miss the snoop filter
  .ia_drd_pref           umask=0xc897ff01    Data-read prefetches from local IA that miss the snoop filter
  .ia_hit                umask=0xc001fd01    Hits from local IA
  .ia_hit_crd            umask=0xc80ffd01    Code reads from local IA that hit the snoop filter
  .ia_hit_crd_pref       umask=0xc88ffd01    Code-read prefetches from local IA that hit the snoop filter
  .ia_hit_cxl_acc        umask=0x10c0018101  IA-core requests to CXL accelerator memory regions that hit the LLC
  .ia_hit_cxl_acc_local  umask=0x10c0008101
  .ia_hit_drd            umask=0xc817fd01    Data reads from local IA that hit the snoop filter
  .ia_hit_drdpte         umask=0xc837fd01    DRd PTEs (page walks) issued by iA cores that hit the LLC
  .ia_hit_drd_opt        umask=0xc827fd01    Data read opt from local IA that hits the snoop filter
  .ia_hit_drd_opt_pref   umask=0xc8a7fd01    Data read opt prefetches from local IA that hit the snoop filter
  .ia_hit_drd_pref       umask=0xc897fd01    Data-read prefetches from local IA that hit the snoop filter
  .ia_hit_itom           umask=0xcc47fd01    ItoMs issued by iA cores that hit the LLC
  .ia_hit_llcprefcode       umask=0xcccffd01    LLC code prefetches from local IA that hit the snoop filter
  .ia_hit_llcprefdata       umask=0xccd7fd01    LLC data prefetches from local IA that hit the snoop filter
  .ia_hit_llcprefrfo        umask=0xccc7fd01    LLC RFO prefetches from local IA that hit the snoop filter
  .ia_hit_rfo               umask=0xc807fd01    RFOs from local IA that hit the snoop filter
  .ia_hit_rfo_pref          umask=0xc887fd01    RFO prefetches from local IA that hit the snoop filter
  .ia_itom                  umask=0xcc47ff01    ItoM events initiated from the core
  .ia_itomcachenear         umask=0xcd47ff01    ItoMCacheNears issued by iA cores
  .ia_llcprefcode           umask=0xcccfff01    LLC code prefetches from local IA
  .ia_llcprefdata           umask=0xccd7ff01    LLC data prefetches from local IA
  .ia_llcprefrfo            umask=0xccc7ff01    LLC RFO prefetches from local IA that miss the snoop filter
  .ia_miss                  umask=0xc001fe01    All requests from iA cores that missed the LLC
  .ia_miss_crd              umask=0xc80ffe01    CRd misses from local IA
  .ia_miss_crdmorph_cxl_acc umask=0x10c80b8201  CRds and equivalent opcodes that miss the L3 and target CXL type 2 accelerator memory
  .ia_miss_crd_local        umask=0xc80efe01    CRd misses HOMed locally
  .ia_miss_crd_pref         umask=0xc88ffe01    CRd prefetch misses from local IA
  .ia_miss_crd_pref_local   umask=0xc88efe01    CRd prefetch misses HOMed locally
  .ia_miss_crd_pref_remote  umask=0xc88f7e01    CRd prefetch misses HOMed remotely
  .ia_miss_crd_remote       umask=0xc80f7e01    CRd misses HOMed remotely
  .ia_miss_cxl_acc          umask=0x10c0018201  IA-core requests to CXL accelerator memory regions that miss the LLC
  .ia_miss_cxl_acc_local    umask=0x10c0008201
  .ia_miss_drd              umask=0xc817fe01    DRd misses from local IA
  .ia_miss_drdmorph_cxl_acc umask=0x10c8138201  DRds and equivalent opcodes that miss the L3 and target CXL type 2 accelerator memory
  .ia_miss_drdpte           umask=0xc837fe01    DRd PTEs (page walks) issued by iA cores that missed the LLC
  .ia_miss_drd_cxl_acc                 umask=0x10c8178201  DRds that miss the L3 and target a CXL type 2 memory expander card
  .ia_miss_drd_cxl_acc_local           umask=0x10c8168201
  .ia_miss_drd_cxl_exp_local           umask=0x20c8168201
  .ia_miss_drd_ddr                     umask=0xc8178601    DRd misses targeting DDR memory
  .ia_miss_drd_local                   umask=0xc816fe01    DRd misses targeting local memory
  .ia_miss_drd_local_ddr               umask=0xc8168601    DRd misses targeting DDR, HOMed locally
  .ia_miss_drd_local_pmm               umask=0xc8168a01    DRd misses targeting PMM, HOMed locally
  .ia_miss_drd_opt                     umask=0xc827fe01    DRd Opt misses from local IA
  .ia_miss_drd_opt_cxl_acc_local       umask=0x10c8268201
  .ia_miss_drd_opt_pref                umask=0xc8a7fe01    DRd Opt prefetch misses from local IA
  .ia_miss_drd_opt_pref_cxl_acc_local  umask=0x10c8a68201
  .ia_miss_drd_pmm                     umask=0xc8178a01    DRd misses targeting PMM memory
  .ia_miss_drd_pref                    umask=0xc897fe01    DRd prefetch (DRD_PREF) misses from local IA
  .ia_miss_drd_pref_cxl_acc            umask=0x10c8978201  L2 data prefetches that miss the L3 and target CXL type 2 accelerator memory
  .ia_miss_drd_pref_cxl_acc_local      umask=0x10c8968201
  .ia_miss_drd_pref_cxl_exp_local      umask=0x20c8968201
  .ia_miss_drd_pref_ddr                umask=0xc8978601    DRd prefetch misses targeting DDR
  .ia_miss_drd_pref_local              umask=0xc896fe01    DRd prefetch misses targeting local memory
  .ia_miss_drd_pref_local_ddr          umask=0xc8968601    DRd prefetch misses targeting DDR, HOMed locally
  .ia_miss_drd_pref_local_pmm          umask=0xc8968a01    DRd prefetch misses targeting PMM, HOMed locally
  .ia_miss_drd_pref_pmm                umask=0xc8978a01    DRd prefetch misses targeting PMM
  .ia_miss_drd_pref_remote             umask=0xc8977e01    DRd prefetch misses targeting remote memory
  .ia_miss_drd_pref_remote_ddr         umask=0xc8970601    DRd prefetch misses targeting DDR, HOMed remotely
  .ia_miss_drd_pref_remote_pmm         umask=0xc8970a01    DRd prefetch misses targeting PMM, HOMed remotely
  .ia_miss_drd_remote                  umask=0xc8177e01    DRd misses targeting remote memory
  .ia_miss_drd_remote_ddr              umask=0xc8170601    DRd misses targeting DDR, HOMed remotely
  .ia_miss_drd_remote_pmm              umask=0xc8170a01    DRd misses targeting PMM, HOMed remotely
  .ia_miss_itom                       umask=0xcc47fe01    ItoMs issued by iA cores that missed the LLC
  .ia_miss_llcprefcode                umask=0xcccffe01    LLC code-prefetch misses from local IA
  .ia_miss_llcprefcode_cxl_acc        umask=0x10cccf8201  LLC code prefetches that miss the L3 and target CXL type 2 accelerator memory
  .ia_miss_llcprefdata                umask=0xccd7fe01    LLC data-prefetch misses from local IA
  .ia_miss_llcprefdata_cxl_acc        umask=0x10ccd78201  LLC data prefetches that miss the L3 and target CXL type 2 accelerator memory
  .ia_miss_llcprefdata_cxl_acc_local  umask=0x10ccd68201
  .ia_miss_llcprefdata_cxl_exp_local  umask=0x20ccd68201
  .ia_miss_llcprefrfo                 umask=0xccc7fe01    LLC RFO-prefetch misses from local IA
  .ia_miss_llcprefrfo_cxl_acc         umask=0x10c8878201  L2 RFO prefetches that miss the L3 and target CXL type 2 accelerator memory
  .ia_miss_llcprefrfo_cxl_acc_local   umask=0x10c8868201
  .ia_miss_llcprefrfo_cxl_exp_local   umask=0x20c8868201
  .ia_miss_local_wcilf_ddr            umask=0xc8668601    WCiLF misses targeting DDR, HOMed locally
  .ia_miss_local_wcilf_pmm            umask=0xc8668a01    WCiLF misses targeting PMM, HOMed locally
  .ia_miss_local_wcil_ddr             umask=0xc86e8601    WCiL misses targeting DDR, HOMed locally
  .ia_miss_local_wcil_pmm             umask=0xc86e8a01    WCiL misses targeting PMM, HOMed locally
  .ia_miss_remote_wcilf_ddr           umask=0xc8670601    WCiLF misses targeting DDR, HOMed remotely
  .ia_miss_remote_wcilf_pmm           umask=0xc8670a01    WCiLF misses targeting PMM, HOMed remotely
  .ia_miss_remote_wcil_ddr            umask=0xc86f0601    WCiL misses targeting DDR, HOMed remotely
  .ia_miss_remote_wcil_pmm            umask=0xc86f0a01    WCiL misses targeting PMM, HOMed remotely
  .ia_miss_rfo                        umask=0xc807fe01    RFO misses from local IA
  .ia_miss_rfomorph_cxl_acc           umask=0x10c8038201  RFOs and L2 RFO prefetches that miss the L3 and target CXL type 2 accelerator memory
  .ia_miss_rfo_cxl_acc                umask=0x10c8078201  RFOs that miss the L3 and target CXL type 2 accelerator memory
  .ia_miss_rfo_cxl_acc_local          umask=0x10c8068201
  .ia_miss_rfo_cxl_exp_local          umask=0x20c8068201
  .ia_miss_rfo_local                  umask=0xc806fe01    RFO misses from local IA, HOMed locally
  .ia_miss_rfo_pref                   umask=0xc887fe01    RFO prefetch misses from local IA
  .ia_miss_rfo_pref_cxl_acc           umask=0x10ccc78201  LLC RFO prefetches that miss the L3 and target CXL type 2 accelerator memory
  .ia_miss_rfo_pref_cxl_acc_local     umask=0x10ccc68201
  .ia_miss_rfo_pref_cxl_exp_local     umask=0x20ccc68201
  .ia_miss_rfo_pref_local             umask=0xc886fe01    RFO prefetch misses, HOMed locally
  .ia_miss_rfo_pref_remote            umask=0xc8877e01    RFO prefetch misses, HOMed remotely
  .ia_miss_rfo_remote                 umask=0xc8077e01    RFO misses, HOMed remotely
  .ia_miss_ucrdf                      umask=0xc877de01    UCRdFs issued by iA cores that missed the LLC
  .ia_miss_wcil                       umask=0xc86ffe01    WCiLs issued by iA cores that missed the LLC
  .ia_miss_wcilf                      umask=0xc867fe01    WCiLFs issued by iA cores that missed the LLC
Does not include addressless requests such as locks and interruptsunc_cha_tor_inserts.ia_miss_wcilf_ddruncore cacheTOR Inserts : WCiLFs issued by iA Cores targeting DDR that missed the LLCevent=0x35,umask=0xc867860101Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.   Does not include addressless requests such as locks and interruptsunc_cha_tor_inserts.ia_miss_wcilf_pmmuncore cacheTOR Inserts : WCiLFs issued by iA Cores targeting PMM that missed the LLCevent=0x35,umask=0xc8678a0101Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.   Does not include addressless requests such as locks and interruptsunc_cha_tor_inserts.ia_miss_wcil_ddruncore cacheTOR Inserts : WCiLs issued by iA Cores targeting DDR that missed the LLCevent=0x35,umask=0xc86f860101Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.   Does not include addressless requests such as locks and interruptsunc_cha_tor_inserts.ia_miss_wcil_pmmuncore cacheTOR Inserts : WCiLs issued by iA Cores targeting PMM that missed the LLCevent=0x35,umask=0xc86f8a0101Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.   Does not include addressless requests such as locks and interruptsunc_cha_tor_inserts.ia_miss_wiluncore cacheTOR Inserts : WiLs issued by iA Cores that Missed LLCevent=0x35,umask=0xc87fde0101Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.   Does not include addressless requests such as locks and interruptsunc_cha_tor_inserts.ia_rfouncore cacheTOR Inserts; RFO from local IAevent=0x35,umask=0xc807ff0101TOR Inserts; Read for ownership from local IA that misses in the snoop filterunc_cha_tor_inserts.ia_rfo_prefuncore cacheTOR Inserts; RFO pref from local IAevent=0x35,umask=0xc887ff0101TOR Inserts; Read for ownership prefetch from local IA that misses in the snoop filterunc_cha_tor_inserts.ia_specitomuncore cacheTOR Inserts;SpecItoM from Local IAevent=0x35,umask=0xcc57ff0101Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.; SpecItoM events that are initiated from the Coreunc_cha_tor_inserts.ia_wbeftoeuncore cacheTOR Inserts : WBEFtoEs issued by an IA Core.  Non Modified Write Backsevent=0x35,umask=0xcc3fff0101WbEFtoEs issued by iA Cores .  (Non Modified Write Backs)  :Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.  Does not include addressless requests such as locks and interruptsunc_cha_tor_inserts.ia_wbeftoiuncore cacheTOR Inserts : WBEFtoEs issued by an IA Core.  Non Modified Write Backsevent=0x35,umask=0xcc37ff0101WbEFtoEs issued by iA Cores .  (Non Modified Write Backs)  :Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.  Does not include addressless requests such as locks and interruptsunc_cha_tor_inserts.ia_wbmtoeuncore cacheTOR Inserts : WBEFtoEs issued by an IA Core.  Non Modified Write Backsevent=0x35,umask=0xcc2fff0101WbEFtoEs issued by iA Cores .  (Non Modified Write Backs)  :Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.  
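The aliases above can be used directly with perf when its event tables cover this CPU. A minimal sketch in Python, assuming a perf build that knows these aliases; on kernels that expose the CHA boxes as uncore_cha PMUs, the raw event=0x35 form with the umask from the table can be passed instead:

    import subprocess

    # Count demand data read (DRd) LLC misses from iA cores, system-wide,
    # for one second.  The event alias comes from the table above; perf
    # availability and alias coverage depend on the installed perf build.
    cmd = [
        "perf", "stat",
        "-e", "unc_cha_tor_inserts.ia_miss_drd",  # alias listed above
        "-a",                                     # uncore events are system-wide
        "sleep", "1",
    ]
    subprocess.run(cmd, check=False)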
unc_cha_tor_inserts.ia_wbmtoi                          umask=0xcc27ff01    WbMtoIs issued by iA cores (modified write backs).
unc_cha_tor_inserts.ia_wbstoi                          umask=0xcc67ff01    WbStoIs issued by iA cores (non-modified write backs).
unc_cha_tor_inserts.ia_wcil                            umask=0xc86fff01    WCiLs issued by iA cores.
unc_cha_tor_inserts.ia_wcilf                           umask=0xc867ff01    WCiLFs issued by iA cores.

TOR Inserts from local IO devices (uncore cache; event=0x35; the same default description applies):

unc_cha_tor_inserts.io                                 umask=0xc001ff04    All requests from local IO.
unc_cha_tor_inserts.io_clflush                         umask=0xc8c3ff04    CLFlushes issued by IO devices.
unc_cha_tor_inserts.io_hit                             umask=0xc001fd04    All requests from local IO that hit the LLC.
unc_cha_tor_inserts.io_hit_itom                        umask=0xcc43fd04    ItoMs from local IO that hit the LLC.
unc_cha_tor_inserts.io_hit_itomcachenear               umask=0xcd43fd04    ItoMCacheNears (partial write requests) from IO devices that hit the LLC.
unc_cha_tor_inserts.io_hit_pcirdcur                    umask=0xc8f3fd04    PCIRdCurs (RdCur and FsRdCur) from local IO that hit the LLC.
unc_cha_tor_inserts.io_hit_rfo                         umask=0xc803fd04    RFOs from local IO that hit the LLC.
unc_cha_tor_inserts.io_itom                            umask=0xcc43ff04    Inserts from local IO with the opcode ItoM.
unc_cha_tor_inserts.io_itomcachenear                   umask=0xcd43ff04    Inserts from local IO devices with the opcode ItoMCacheNear, indicating a partial write request.
unc_cha_tor_inserts.io_itomcachenear_local             umask=0xcd42ff04    ItoMCacheNear (partial write) transactions from an IO device that address memory on the local socket.
unc_cha_tor_inserts.io_itomcachenear_remote            umask=0xcd437f04    ItoMCacheNear (partial write) transactions from an IO device that address memory on a remote socket.
unc_cha_tor_inserts.io_itom_local                      umask=0xcc42ff04    ItoM (write) transactions from an IO device that address memory on the local socket.
unc_cha_tor_inserts.io_itom_remote                     umask=0xcc437f04    ItoM (write) transactions from an IO device that address memory on a remote socket.
unc_cha_tor_inserts.io_miss                            umask=0xc001fe04    All requests from local IO that missed the LLC.
unc_cha_tor_inserts.io_miss_itom                       umask=0xcc43fe04    ItoMs (full cache-line write requests) from IO devices that missed the LLC.
unc_cha_tor_inserts.io_miss_itomcachenear              umask=0xcd43fe04    ItoMCacheNears (partial write requests) from IO devices that missed the LLC.
unc_cha_tor_inserts.io_miss_pcirdcur                   umask=0xc8f3fe04    PCIRdCurs (RdCur and FsRdCur) from local IO that missed the LLC.
unc_cha_tor_inserts.io_miss_rfo                        umask=0xc803fe04    RFOs issued by IO devices that missed the LLC.
unc_cha_tor_inserts.io_pcirdcur                        umask=0xc8f3ff04    Inserts from local IO with the opcode RdCur (PCIRdCur).
unc_cha_tor_inserts.io_pcirdcur_local                  umask=0xc8f2ff04    PCIRdCur (read) transactions from an IO device that address memory on the local socket.
unc_cha_tor_inserts.io_pcirdcur_remote                 umask=0xc8f37f04    PCIRdCur (read) transactions from an IO device that address memory on a remote socket.
unc_cha_tor_inserts.io_rfo                             umask=0xc803ff04    RFOs issued by IO devices.
unc_cha_tor_inserts.io_wbmtoi                          umask=0xcc23ff04    WbMtoIs issued by IO devices.

TOR Inserts selected by ingress queue (event=0x35):

unc_cha_tor_inserts.ipq                                umask=0x8           IPQ.
unc_cha_tor_inserts.irq_ia                             umask=0x1           IRQ - iA: from an iA core.
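Each TOR insert corresponds to one request for a cache line, so the PCIRdCur insert counts above are commonly turned into an inbound IO read bandwidth estimate by charging 64 bytes per insert. A rough sketch, assuming the usual 64-byte line and a count read from unc_cha_tor_inserts.io_pcirdcur over a known interval; the numbers are placeholders:

    # Estimate inbound IO read bandwidth from a PCIRdCur insert count,
    # charging one 64-byte cache line per TOR insert.
    CACHE_LINE_BYTES = 64

    def io_read_bandwidth_mbps(inserts: int, interval_s: float) -> float:
        return inserts * CACHE_LINE_BYTES / interval_s / 1e6

    print(io_read_bandwidth_mbps(inserts=1_500_000, interval_s=1.0))  # 96.0 MB/s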
unc_cha_tor_inserts.irq_non_ia                         umask=0x10          IRQ - non-iA.
unc_cha_tor_inserts.isoc                               (no umask)          Just ISOC.
unc_cha_tor_inserts.local_tgt                          (no umask)          Just local targets.
unc_cha_tor_inserts.loc_all                            umask=0xc000ff05    All locally initiated requests, from iA and IO.
unc_cha_tor_inserts.loc_ia                             umask=0xc000ff01    All locally initiated requests from iA cores.
unc_cha_tor_inserts.loc_io                             umask=0xc000ff04    All locally generated IO traffic.
unc_cha_tor_inserts.match_opc                          (no umask)          Match the opcode in b[29:19] of the extended umask field.
unc_cha_tor_inserts.miss                               (no umask)          Just misses.
unc_cha_tor_inserts.mmcfg                              (no umask)          MMCFG access.
unc_cha_tor_inserts.mmio                               (no umask)          MMIO access.
unc_cha_tor_inserts.nearmem                            (no umask)          Just NearMem.
unc_cha_tor_inserts.noncoh                             (no umask)          Just non-coherent.
unc_cha_tor_inserts.not_nearmem                        (no umask)          Just NotNearMem.
unc_cha_tor_inserts.pmm                                (no umask)          PMM access.
unc_cha_tor_inserts.premorph_opc                       (no umask)          Match the pre-morphed opcode in b[29:19] of the extended umask field.
unc_cha_tor_inserts.prq_iosf                           umask=0x4           PRQ - IOSF: from a PCIe device.
unc_cha_tor_inserts.prq_non_iosf                       umask=0x20          PRQ - non-IOSF.
unc_cha_tor_inserts.remote_tgt                         (no umask)          Just remote targets.
unc_cha_tor_inserts.rem_all                            umask=0xc001ffc8    All remote requests (e.g. snoops, writebacks) that came from remote sockets.
unc_cha_tor_inserts.rem_snps                           umask=0xc001ff08    All snoops to this LLC that came from remote sockets.
unc_cha_tor_inserts.rrq                                umask=0x40          RRQ.
unc_cha_tor_inserts.rrq_miss_invxtom_cxl_exp_local     umask=0x20e87e8240  INVXTOM opcodes received from a remote socket which miss the L3 and target memory in a CXL type 3 memory expander local to this socket.
unc_cha_tor_inserts.rrq_miss_rdcode_cxl_exp_local      umask=0x20e80e8240  RDCODE opcodes received from a remote socket which miss the L3 and target memory in a local CXL type 3 memory expander.
unc_cha_tor_inserts.rrq_miss_rdcur_cxl_exp_local       umask=0x20e8068240  RDCUR opcodes received from a remote socket which miss the L3 and target memory in a local CXL type 3 memory expander.
unc_cha_tor_inserts.rrq_miss_rddata_cxl_exp_local      umask=0x20e8168240  RDDATA opcodes received from a remote socket which miss the L3 and target memory in a local CXL type 3 memory expander.
unc_cha_tor_inserts.rrq_miss_rdinvown_opt_cxl_exp_local  umask=0x20e8268240  RDINVOWN_OPT opcodes received from a remote socket which miss the L3 and target memory in a local CXL type 3 memory expander.
unc_cha_tor_inserts.snps_from_rem                      umask=0xc001ff08    All snoops to this LLC that came from remote sockets.
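The match_opc and premorph_opc entries say the opcode being matched lives in bits [29:19] of the extended umask, and the queue-select values listed here (IRQ-iA 0x1, PRQ-IOSF 0x4, IPQ 0x8, IRQ-non-iA 0x10, PRQ-non-IOSF 0x20, RRQ 0x40, WBQ 0x80, plus loc_all's 0x05 = IRQ-iA | PRQ) suggest the low byte picks the ingress queue. A decoding sketch built only on those two observations; the field names are working assumptions, not a documented layout:

    # Pull apart one of the extended umask values listed above.  The only
    # field the source names explicitly is the opcode match in bits [29:19];
    # the queue-select low byte is inferred from the IPQ/IRQ/PRQ/RRQ/WBQ
    # entries.  Treat the field names as assumptions.
    QUEUES = {0x01: "IRQ-iA", 0x04: "PRQ-IOSF", 0x08: "IPQ",
              0x10: "IRQ-non-iA", 0x20: "PRQ-non-IOSF",
              0x40: "RRQ", 0x80: "WBQ"}

    def decode(umask_ext: int) -> dict:
        queues = [name for bit, name in QUEUES.items() if umask_ext & bit]
        opcode = (umask_ext >> 19) & 0x7FF  # b[29:19], per the source
        return {"queues": queues, "opcode_match": hex(opcode)}

    # unc_cha_tor_inserts.ia_miss_drd uses umask 0xc817fe01:
    print(decode(0xC817FE01))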
unc_cha_tor_inserts.wbq                                umask=0x80          WBQ.

TOR Occupancy (uncore cache; event=0x36 throughout). For each cycle, each event accumulates the number of valid entries in the TOR that match the qualifications specified by the subevent; unless noted, addressless requests such as locks and interrupts are not included:

unc_cha_tor_occupancy.all                              umask=0xc001ffff    All entries.
unc_cha_tor_occupancy.ddr                              (no umask)          DDR access.
unc_cha_tor_occupancy.evict                            umask=0x2           SF/LLC evictions: TOR allocation occurred as a result of SF/LLC evictions (came from the ISMQ).
unc_cha_tor_occupancy.hit                              (no umask)          Just hits.
unc_cha_tor_occupancy.ia                               umask=0xc001ff01    All requests from iA cores.
unc_cha_tor_occupancy.ia_clflush                       umask=0xc8c7ff01    CLFlushes issued by iA cores.
unc_cha_tor_occupancy.ia_clflushopt                    umask=0xc8d7ff01    CLFlushOpts issued by iA cores.
unc_cha_tor_occupancy.ia_crd                           umask=0xc80fff01    CRds (code reads) from local IA.
unc_cha_tor_occupancy.ia_crd_pref                      umask=0xc88fff01    CRd prefetches from local IA.
unc_cha_tor_occupancy.ia_drd                           umask=0xc817ff01    DRds (data reads) from local IA.
unc_cha_tor_occupancy.ia_drdpte                        umask=0xc837ff01    DRdPtes issued by iA cores due to a page walk.
unc_cha_tor_occupancy.ia_drd_opt                       umask=0xc827ff01    DRd_Opts from local IA.
unc_cha_tor_occupancy.ia_drd_opt_pref                  umask=0xc8a7ff01    DRd_Opt prefetches from local IA.
unc_cha_tor_occupancy.ia_drd_pref                      umask=0xc897ff01    DRd prefetches from local IA.
unc_cha_tor_occupancy.ia_hit                           umask=0xc001fd01    All requests from iA cores that hit the LLC.
unc_cha_tor_occupancy.ia_hit_crd                       umask=0xc80ffd01    CRds from local IA that hit in the snoop filter.
unc_cha_tor_occupancy.ia_hit_crd_pref                  umask=0xc88ffd01    CRd prefetches from local IA that hit.
unc_cha_tor_occupancy.ia_hit_cxl_acc                   umask=0x10c0018101  All requests from IA cores to CXL accelerator memory regions that hit the LLC.
unc_cha_tor_occupancy.ia_hit_cxl_acc_local             umask=0x10c0008101
unc_cha_tor_occupancy.ia_hit_drd                       umask=0xc817fd01    DRds from local IA that hit.
unc_cha_tor_occupancy.ia_hit_drdpte                    umask=0xc837fd01    DRdPtes issued due to a page walk that hit the LLC.
unc_cha_tor_occupancy.ia_hit_drd_opt                   umask=0xc827fd01    DRd_Opts from local IA that hit.
unc_cha_tor_occupancy.ia_hit_drd_opt_pref              umask=0xc8a7fd01    DRd_Opt prefetches from local IA that hit.
unc_cha_tor_occupancy.ia_hit_drd_pref                  umask=0xc897fd01    DRd prefetches from local IA that hit.
unc_cha_tor_occupancy.ia_hit_itom                      umask=0xcc47fd01    ItoMs issued by iA cores that hit the LLC.
unc_cha_tor_occupancy.ia_hit_llcprefcode               umask=0xcccffd01    LLC prefetch code reads from local IA that hit.
unc_cha_tor_occupancy.ia_hit_llcprefdata               umask=0xccd7fd01    LLC prefetch data reads from local IA that hit.
unc_cha_tor_occupancy.ia_hit_llcprefrfo                umask=0xccc7fd01    LLC prefetch reads for ownership from local IA that hit.
unc_cha_tor_occupancy.ia_hit_rfo                       umask=0xc807fd01    RFOs from local IA that hit.
unc_cha_tor_occupancy.ia_hit_rfo_pref                  umask=0xc887fd01    RFO prefetches from local IA that hit.
unc_cha_tor_occupancy.ia_itom                          umask=0xcc47ff01    ItoMs issued by iA cores.
unc_cha_tor_occupancy.ia_itomcachenear                 umask=0xcd47ff01    ItoMCacheNears issued by iA cores.
unc_cha_tor_occupancy.ia_llcprefcode                   umask=0xcccfff01    LLC prefetch code reads from local IA.
unc_cha_tor_occupancy.ia_llcprefdata                   umask=0xccd7ff01    LLC prefetch data reads from local IA.
unc_cha_tor_occupancy.ia_llcprefrfo                    umask=0xccc7ff01    LLC prefetch reads for ownership from local IA.
unc_cha_tor_occupancy.ia_miss                          umask=0xc001fe01    All requests from iA cores that missed the LLC.
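Reading down the iA entries, the hit, miss, and all variants of a given opcode differ only in the second-lowest umask byte: 0xfd for hit, 0xfe for miss, 0xff for all (e.g. ia_hit_drd 0xc817fd01, ia_miss_drd 0xc817fe01, ia_drd 0xc817ff01). A small helper built on that observed pattern alone; treat it as an inference from the listed values, not a documented rule:

    # Rewrite an "all" encoding into its hit or miss variant, based purely
    # on the byte-1 pattern visible in the table (0xfd hit / 0xfe miss /
    # 0xff all).
    def variant(umask_all: int, kind: str) -> int:
        byte1 = {"hit": 0xFD, "miss": 0xFE, "all": 0xFF}[kind]
        return (umask_all & ~0x0000FF00) | (byte1 << 8)

    assert variant(0xC817FF01, "miss") == 0xC817FE01  # ia_miss_drd
    assert variant(0xC817FF01, "hit") == 0xC817FD01   # ia_hit_drd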
TOR Occupancy (event=0x36), continued:

unc_cha_tor_occupancy.ia_miss_crd                      umask=0xc80ffe01    CRd misses from local IA.
unc_cha_tor_occupancy.ia_miss_crdmorph_cxl_acc         umask=0x10c80b8201  CRds and equivalent opcodes which miss the L3 and target memory in a CXL type 2 accelerator.
unc_cha_tor_occupancy.ia_miss_crd_local                umask=0xc80efe01    CRds that missed the LLC, HOMed locally.
unc_cha_tor_occupancy.ia_miss_crd_pref                 umask=0xc88ffe01    CRd prefetch misses from local IA.
unc_cha_tor_occupancy.ia_miss_crd_pref_local           umask=0xc88efe01    CRd prefetches that missed the LLC, HOMed locally.
unc_cha_tor_occupancy.ia_miss_crd_pref_remote          umask=0xc88f7e01    CRd prefetches that missed the LLC, HOMed remotely.
unc_cha_tor_occupancy.ia_miss_crd_remote               umask=0xc80f7e01    CRds that missed the LLC, HOMed remotely.
unc_cha_tor_occupancy.ia_miss_cxl_acc                  umask=0x10c0018201  All requests from IA cores to CXL accelerator memory regions that miss the LLC.
unc_cha_tor_occupancy.ia_miss_cxl_acc_local            umask=0x10c0008201
unc_cha_tor_occupancy.ia_miss_drd                      umask=0xc817fe01    DRd misses from local IA: cycles for TOR elements that miss the LLC and snoop filter with the opcode DRd.
unc_cha_tor_occupancy.ia_miss_drdmorph_cxl_acc         umask=0x10c8138201  DRds and equivalent opcodes which miss the L3 and target memory in a CXL type 2 accelerator.
unc_cha_tor_occupancy.ia_miss_drdpte                   umask=0xc837fe01    DRdPtes issued due to a page walk that missed the LLC.
unc_cha_tor_occupancy.ia_miss_drd_cxl_acc              umask=0x10c8178201  DRds and equivalent opcodes which miss the L3 and target memory in a CXL type 2 memory expander card.
unc_cha_tor_occupancy.ia_miss_drd_cxl_acc_local        umask=0x10c8168201
unc_cha_tor_occupancy.ia_miss_drd_cxl_exp_local        umask=0x20c8168201
unc_cha_tor_occupancy.ia_miss_drd_ddr                  umask=0xc8178601    DRds targeting DDR memory that missed the LLC.
unc_cha_tor_occupancy.ia_miss_drd_local                umask=0xc816fe01    DRd misses targeting local memory.
unc_cha_tor_occupancy.ia_miss_drd_local_ddr            umask=0xc8168601    DRds targeting DDR memory that missed the LLC, HOMed locally.
unc_cha_tor_occupancy.ia_miss_drd_local_pmm            umask=0xc8168a01    DRds targeting PMM memory that missed the LLC, HOMed locally.
unc_cha_tor_occupancy.ia_miss_drd_opt                  umask=0xc827fe01    DRd_Opt misses from local IA.
unc_cha_tor_occupancy.ia_miss_drd_opt_cxl_acc_local    umask=0x10c8268201
unc_cha_tor_occupancy.ia_miss_drd_opt_pref             umask=0xc8a7fe01    DRd_Opt prefetch misses from local IA.
unc_cha_tor_occupancy.ia_miss_drd_opt_pref_cxl_acc_local  umask=0x10c8a68201
unc_cha_tor_occupancy.ia_miss_drd_pmm                  umask=0xc8178a01    DRds targeting PMM memory that missed the LLC.
unc_cha_tor_occupancy.ia_miss_drd_pref                 umask=0xc897fe01    DRd prefetch misses from local IA.
unc_cha_tor_occupancy.ia_miss_drd_pref_cxl_acc         umask=0x10c8978201  L2 data prefetches which miss the L3 and target memory in a CXL type 2 accelerator.
unc_cha_tor_occupancy.ia_miss_drd_pref_cxl_acc_local   umask=0x10c8968201
unc_cha_tor_occupancy.ia_miss_drd_pref_cxl_exp_local   umask=0x20c8968201
unc_cha_tor_occupancy.ia_miss_drd_pref_ddr             umask=0xc8978601    DRd prefetches targeting DDR memory that missed the LLC.
unc_cha_tor_occupancy.ia_miss_drd_pref_local           umask=0xc896fe01    DRd prefetch misses targeting local memory.
unc_cha_tor_occupancy.ia_miss_drd_pref_local_ddr       umask=0xc8968601    DRd prefetches targeting DDR memory that missed the LLC, HOMed locally.
unc_cha_tor_occupancy.ia_miss_drd_pref_local_pmm       umask=0xc8968a01    DRd prefetches targeting PMM memory that missed the LLC, HOMed locally.
unc_cha_tor_occupancy.ia_miss_drd_pref_pmm             umask=0xc8978a01    DRd prefetches targeting PMM memory that missed the LLC.
unc_cha_tor_occupancy.ia_miss_drd_pref_remote          umask=0xc8977e01    DRd prefetch misses targeting remote memory.
unc_cha_tor_occupancy.ia_miss_drd_pref_remote_ddr      umask=0xc8970601    DRd prefetches targeting DDR memory that missed the LLC, HOMed remotely.
unc_cha_tor_occupancy.ia_miss_drd_pref_remote_pmm      umask=0xc8970a01    DRd prefetches targeting PMM memory that missed the LLC, HOMed remotely.
unc_cha_tor_occupancy.ia_miss_drd_remote               umask=0xc8177e01    DRd misses targeting remote memory.
unc_cha_tor_occupancy.ia_miss_drd_remote_ddr           umask=0xc8170601    DRds targeting DDR memory that missed the LLC, HOMed remotely.
unc_cha_tor_occupancy.ia_miss_drd_remote_pmm           umask=0xc8170a01    DRds targeting PMM memory that missed the LLC, HOMed remotely.
unc_cha_tor_occupancy.ia_miss_itom                     umask=0xcc47fe01    ItoMs issued by iA cores that missed the LLC.
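Because an occupancy event accumulates matching valid entries once per uncore cycle while the corresponding inserts event counts arrivals, the ratio occupancy/inserts is the average number of uncore cycles a matching request spent in the TOR (Little's law). A sketch for DRd miss latency, assuming counts summed across all CHA boxes and a placeholder 2.0 GHz uncore clock:

    # Average TOR residency for DRd LLC misses.  `occupancy` would come from
    # unc_cha_tor_occupancy.ia_miss_drd and `inserts` from
    # unc_cha_tor_inserts.ia_miss_drd over the same interval; the values
    # below are placeholders.
    def avg_miss_latency_ns(occupancy: float, inserts: float,
                            uncore_ghz: float = 2.0) -> float:
        cycles = occupancy / inserts       # mean cycles in the TOR
        return cycles / uncore_ghz         # convert cycles to nanoseconds

    print(avg_miss_latency_ns(occupancy=6.0e9, inserts=1.0e7))  # 600 cycles -> 300.0 ns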
unc_cha_tor_occupancy.ia_miss_llcprefcode              umask=0xcccffe01    LLC prefetch code read misses from local IA.
unc_cha_tor_occupancy.ia_miss_llcprefcode_cxl_acc      umask=0x10cccf8201  LLC prefetch code transactions which miss the L3 and target memory in a CXL type 2 accelerator.
unc_cha_tor_occupancy.ia_miss_llcprefdata              umask=0xccd7fe01    LLC prefetch data read misses from local IA.
unc_cha_tor_occupancy.ia_miss_llcprefdata_cxl_acc      umask=0x10ccd78201  LLC data prefetches which miss the L3 and target memory in a CXL type 2 accelerator.
unc_cha_tor_occupancy.ia_miss_llcprefdata_cxl_acc_local  umask=0x10ccd68201
unc_cha_tor_occupancy.ia_miss_llcprefdata_cxl_exp_local  umask=0x20ccd68201
unc_cha_tor_occupancy.ia_miss_llcprefrfo               umask=0xccc7fe01    LLC prefetch read for ownership misses from local IA.
unc_cha_tor_occupancy.ia_miss_llcprefrfo_cxl_acc       umask=0x10c8878201  L2 RFO prefetches which miss the L3 and target memory in a CXL type 2 accelerator.
unc_cha_tor_occupancy.ia_miss_llcprefrfo_cxl_acc_local umask=0x10c8868201
unc_cha_tor_occupancy.ia_miss_llcprefrfo_cxl_exp_local umask=0x20c8868201
unc_cha_tor_occupancy.ia_miss_local_wcilf_ddr          umask=0xc8668601    WCiLFs targeting DDR that missed the LLC, HOMed locally.
unc_cha_tor_occupancy.ia_miss_local_wcilf_pmm          umask=0xc8668a01    WCiLFs targeting PMM that missed the LLC, HOMed locally.
unc_cha_tor_occupancy.ia_miss_local_wcil_ddr           umask=0xc86e8601    WCiLs targeting DDR that missed the LLC, HOMed locally.
unc_cha_tor_occupancy.ia_miss_local_wcil_pmm           umask=0xc86e8a01    WCiLs targeting PMM that missed the LLC, HOMed locally.
unc_cha_tor_occupancy.ia_miss_remote_wcilf_ddr         umask=0xc8670601    WCiLFs targeting DDR that missed the LLC, HOMed remotely.
unc_cha_tor_occupancy.ia_miss_remote_wcilf_pmm         umask=0xc8670a01    WCiLFs targeting PMM that missed the LLC, HOMed remotely.
unc_cha_tor_occupancy.ia_miss_remote_wcil_ddr          umask=0xc86f0601    WCiLs targeting DDR that missed the LLC, HOMed remotely.
unc_cha_tor_occupancy.ia_miss_remote_wcil_pmm          umask=0xc86f0a01    WCiLs targeting PMM that missed the LLC, HOMed remotely.
unc_cha_tor_occupancy.ia_miss_rfo                      umask=0xc807fe01    RFO misses from local IA.
unc_cha_tor_occupancy.ia_miss_rfomorph_cxl_acc         umask=0x10c8038201  RFOs and L2 RFO prefetches which miss the L3 and target memory in a CXL type 2 accelerator.
unc_cha_tor_occupancy.ia_miss_rfo_cxl_acc              umask=0x10c8078201  RFOs which miss the L3 and target memory in a CXL type 2 accelerator.
unc_cha_tor_occupancy.ia_miss_rfo_cxl_acc_local        umask=0x10c8068201
unc_cha_tor_occupancy.ia_miss_rfo_cxl_exp_local        umask=0x20c8068201
unc_cha_tor_occupancy.ia_miss_rfo_local                umask=0xc806fe01    RFO misses from local IA.
unc_cha_tor_occupancy.ia_miss_rfo_pref                 umask=0xc887fe01    RFO prefetch misses from local IA.
unc_cha_tor_occupancy.ia_miss_rfo_pref_cxl_acc         umask=0x10ccc78201  LLC RFO prefetches which miss the L3 and target memory in a CXL type 2 accelerator.
unc_cha_tor_occupancy.ia_miss_rfo_pref_cxl_acc_local   umask=0x10ccc68201
unc_cha_tor_occupancy.ia_miss_rfo_pref_cxl_exp_local   umask=0x20ccc68201
unc_cha_tor_occupancy.ia_miss_rfo_pref_local           umask=0xc886fe01    RFO prefetch misses from local IA.
unc_cha_tor_occupancy.ia_miss_rfo_pref_remote          umask=0xc8877e01    RFO prefetch misses from local IA.
unc_cha_tor_occupancy.ia_miss_rfo_remote               umask=0xc8077e01    RFO misses from local IA.
unc_cha_tor_occupancy.ia_miss_ucrdf                    umask=0xc877de01    UCRdFs issued by iA cores that missed the LLC.
unc_cha_tor_occupancy.ia_miss_wcil                     umask=0xc86ffe01    WCiLs issued by iA cores that missed the LLC.
unc_cha_tor_occupancy.ia_miss_wcilf                    umask=0xc867fe01    WCiLFs issued by iA cores that missed the LLC.
unc_cha_tor_occupancy.ia_miss_wcilf_ddr                umask=0xc8678601    WCiLFs targeting DDR that missed the LLC.
unc_cha_tor_occupancy.ia_miss_wcilf_pmm                umask=0xc8678a01    WCiLFs targeting PMM that missed the LLC.
unc_cha_tor_occupancy.ia_miss_wcil_ddr                 umask=0xc86f8601    WCiLs targeting DDR that missed the LLC.
unc_cha_tor_occupancy.ia_miss_wcil_pmm                 umask=0xc86f8a01    WCiLs targeting PMM that missed the LLC.
unc_cha_tor_occupancy.ia_miss_wil                      umask=0xc87fde01    WiLs issued by iA cores that missed the LLC.
unc_cha_tor_occupancy.ia_rfo                           umask=0xc807ff01    RFOs (reads for ownership) from local IA.
unc_cha_tor_occupancy.ia_rfo_pref                      umask=0xc887ff01    RFO prefetches from local IA.
unc_cha_tor_occupancy.ia_specitom                      umask=0xcc57ff01    SpecItoMs issued by iA cores.
  .ia_wbmtoi                        umask=0xcc27ff01    WbMtoIs issued by iA cores
  .ia_wcil                          umask=0xc86fff01    WCiLs issued by iA cores
  .ia_wcilf                         umask=0xc867ff01    WCiLFs issued by iA cores

uncore cache -- CHA TOR occupancy, IO-device requests (event=0x36), same counting rules:

  .io                               umask=0xc001ff04    all requests from IO devices
  .io_clflush                       umask=0xc8c3ff04    CLFlushes issued by IO devices
  .io_hit                           umask=0xc001fd04    all requests from IO devices that hit the LLC
  .io_hit_itom                      umask=0xcc43fd04    ItoMs from IO devices that hit the LLC
  .io_hit_itomcachenear             umask=0xcd43fd04    ItoMCacheNears (partial write requests) from IO devices that hit the LLC
  .io_hit_pcirdcur                  umask=0xc8f3fd04    PCIRdCurs (RdCur and FsRdCur) from IO devices that hit the LLC
  .io_hit_rfo                       umask=0xc803fd04    RFOs from IO devices that hit the LLC
  .io_itom                          umask=0xcc43ff04    ItoMs from IO devices
  .io_itomcachenear                 umask=0xcd43ff04    ItoMCacheNears (partial write requests) from IO devices
  .io_miss                          umask=0xc001fe04    all requests from IO devices that missed the LLC
  .io_miss_itom                     umask=0xcc43fe04    ItoMs from IO devices that missed the LLC
  .io_miss_itom_local               umask=0xcc42fe04    ItoM misses from local IO targeting local memory
  .io_miss_itom_remote              umask=0xcc437e04    ItoM misses from local IO targeting remote memory
  .io_miss_itomcachenear            umask=0xcd43fe04    ItoMCacheNears from IO devices that missed the LLC
  .io_miss_itomcachenear_local      umask=0xcd42fe04    as above, targeting local memory
  .io_miss_itomcachenear_remote     umask=0xcd437e04    as above, targeting remote memory
  .io_miss_pcirdcur                 umask=0xc8f3fe04    PCIRdCurs from IO devices that missed the LLC
  .io_miss_pcirdcur_local           umask=0xc8f2fe04    as above, targeting local memory
  .io_miss_pcirdcur_remote          umask=0xc8f37e04    as above, targeting remote memory
  .io_miss_rfo                      umask=0xc803fe04    RFOs from IO devices that missed the LLC
  .io_pcirdcur                      umask=0xc8f3ff04    PCIRdCurs from IO devices
  .io_rfo                           umask=0xc803ff04    RFOs from IO devices
  .io_wbmtoi                        umask=0xcc23ff04    WbMtoIs issued by IO devices
uncore cache -- CHA TOR occupancy, queue and filter subevents (event=0x36), same counting rules; subevents without a umask are listed in the source with event=0x36 only:

  .ipq             umask=8             IPQ entries
  .irq_ia          umask=1             IRQ entries from an iA core
  .irq_non_ia      umask=0x10          IRQ entries not from an iA core
  .isoc            (no umask)          ISOC entries only
  .local_tgt       (no umask)          local targets only
  .loc_all         umask=0xc000ff05    all locally initiated requests (iA and IO)
  .loc_ia          umask=0xc000ff01    all locally initiated requests from iA cores
  .loc_io          umask=0xc000ff04    all locally generated IO traffic
  .match_opc       (no umask)          match the opcode in bits [29:19] of the extended umask field
  .miss            (no umask)          misses only
  .mmcfg           (no umask)          MMCFG accesses
  .mmio            (no umask)          MMIO accesses
  .nearmem         (no umask)          NearMem entries only
  .noncoh          (no umask)          non-coherent entries only
  .not_nearmem     (no umask)          NotNearMem entries only
  .pmm             (no umask)          PMM accesses
  .premorph_opc    (no umask)          match the pre-morphed opcode in bits [29:19] of the extended umask field
  .prq             umask=4             PRQ (IOSF) entries, i.e. from a PCIe device
  .prq_non_iosf    umask=0x20          PRQ non-IOSF entries
  .remote_tgt      (no umask)          remote targets only
  .rem_all         umask=0xc001ffc8    all remote requests (e.g. snoops, writebacks) from remote sockets
  .rem_snps        umask=0xc001ff08    all snoops to this LLC from remote sockets
  .rrq             umask=0x40          RRQ entries
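The match_opc and premorph_opc subevents above say the opcode is matched against bits [29:19] of the extended umask field. A minimal sketch of slicing that field; only the [29:19] opcode position comes from the text, the helper names are hypothetical.

OPC_SHIFT, OPC_BITS = 19, 11  # b[29:19] is an 11-bit field

def extract_opcode(ext_umask: int) -> int:
    """Pull the opcode-match value out of an extended umask."""
    return (ext_umask >> OPC_SHIFT) & ((1 << OPC_BITS) - 1)

def insert_opcode(ext_umask: int, opcode: int) -> int:
    """Return ext_umask with bits [29:19] replaced by `opcode`."""
    mask = ((1 << OPC_BITS) - 1) << OPC_SHIFT
    return (ext_umask & ~mask) | ((opcode << OPC_SHIFT) & mask)

# Example with the RFO-miss encoding from the table above:
print(hex(extract_opcode(0xc807fe01)))  # -> 0x100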
uncore cache -- CHA TOR occupancy, remote requests (event=0x36). Each rrq_miss subevent counts occupancy for the named opcode received from a remote socket that misses the L3 and targets memory in a CXL type 3 memory expander local to this socket:

  .rrq_miss_invxtom_cxl_exp_local        umask=0x20e87e8240
  .rrq_miss_rdcode_cxl_exp_local         umask=0x20e80e8240
  .rrq_miss_rdcur_cxl_exp_local          umask=0x20e8068240
  .rrq_miss_rddata_cxl_exp_local         umask=0x20e8168240
  .rrq_miss_rdinvown_opt_cxl_exp_local   umask=0x20e8268240
  .snps_from_rem                         umask=0xc001ff08    all snoops to this LLC from remote sockets
  .wbq                                   umask=0x80          WBQ entries

uncore cache -- other CHA events:

  unc_cha_wb_push_mtoi.llc   event=0x56,umask=1   counts the times the CHA received a WbPushMtoI and was able to push it to the LLC
  unc_cha_wb_push_mtoi.mem   event=0x56,umask=2   counts the times the CHA received a WbPushMtoI but was unable to push it to the LLC, hence pushed it to memory

  unc_cha_write_no_credits.mc0..mc5 (event=0x5a; umask=1, 2, 4, 8, 0x10, 0x20 for MC0..MC5): counts the times no credits were available for sending WRITEs from the CHA into the iMC. To send a WRITE into the memory controller, the HA must first acquire a credit for the iMC's BL ingress queue; each subevent filters for one memory controller.

  unc_cha_xpt_pref (event=0x6f) -- XPT prefetches:
    .sent0            umask=1       sent (on 0?)
    .drop0_nocrd      umask=4       dropped (on 0?) for lack of XPT AD egress credits
    .drop0_conflict   umask=8       dropped (on 0?) due to AD CMS write port contention
    .sent1            umask=0x10    sent (on 1?)
    .drop1_nocrd      umask=0x40    dropped (on 1?) for lack of XPT AD egress credits
    .drop1_conflict   umask=0x80    dropped (on 1?) due to AD CMS write port contention
uncore cxl -- CXL link-layer (CXLCM) PMU:

  unc_cxlcm_clockticks   event=1,umask=2   counts lfclk ticks

  unc_cxlcm_rxc_agf_inserts (event=0x43) -- receive AGF allocations (the brief descriptions are reproduced as printed in the source, where they do not all line up with the subevent names):
    .cache_req0   umask=1      allocations to Cache Req AGF0
    .cache_req1   umask=2      allocations to Cache Rsp AGF
    .cache_rsp0   umask=4      allocations to Cache Data AGF
    .cache_data   umask=8      allocations to Mem Rxx AGF 0
    .mem_req      umask=0x10   allocations to Mem Data AGF
    .mem_data     umask=0x20   allocations to Cache Req AGF 1
    .cache_rsp1   umask=0x40   allocations to Cache Rsp AGF

  unc_cxlcm_rxc_flits (event=0x4b) -- received flits:
    .valid       umask=1      flits received
    .prot        umask=2      protocol flits received
    .ctrl        umask=4      control flits received
    .no_hdr      umask=8      headerless flits received
    .ak_hdr      umask=0x10   flits with AK set
    .be_hdr      umask=0x20   flits with BE set
    .sz_hdr      umask=0x40   flits with SZ set
    .valid_msg   umask=0x80   valid messages in the flit

  unc_cxlcm_rxc_misc (event=0x40): .llcrd umask=1 (LLCRD flits sent), .retry umask=2 (Retry flits sent), .init umask=4 (Init flits sent), .crc_errors umask=8 (CRC errors detected)

  unc_cxlcm_rxc_pack_buf_full (event=0x52) -- cycles the packing buffer is full: .cache_req umask=1, .cache_rsp umask=2, .cache_data umask=4, .mem_req umask=8, .mem_data umask=0x10
  unc_cxlcm_rxc_pack_buf_inserts (event=0x41) -- allocations to the packing buffers, same umask layout (mem_req is described as the Mem Rxx packing buffer)
  unc_cxlcm_rxc_pack_buf_ne (event=0x42) -- cycles the packing buffers are not empty, same umask layout

  unc_cxlcm_txc_flits (event=5) -- transmitted flits, counting flits packed: .valid umask=1, .prot umask=2, .ctrl umask=4, .no_hdr umask=8, .ak_hdr umask=0x10, .be_hdr umask=0x20, .sz_hdr umask=0x40

  unc_cxlcm_txc_pack_buf_inserts (event=2) -- transmit packing-buffer allocations (brief descriptions as printed):
    .cache_req0   umask=1      Cache Req packing buffer
    .cache_rsp0   umask=2      Cache Rsp0 packing buffer
    .cache_data   umask=4      Cache Data packing buffer
    .mem_req      umask=8      Mem Rxx packing buffer
    .mem_data     umask=0x10   Mem Data packing buffer
    .cache_rsp1   umask=0x20   Cache Req packing buffer
    .cache_req1   umask=0x40   Cache Rsp1 packing buffer

uncore cxl -- CXL data-path (CXLDP) PMU:

  unc_cxldp_clockticks   event=1,umask=1   counts uclk ticks
  unc_cxldp_txc_agf_inserts (event=2): .u2c_req umask=1 (U2C Req AGF), .u2c_rsp0 umask=2 (U2C Rsp AGF 0), .u2c_rsp1 umask=4 (U2C Rsp AGF 1), .u2c_data umask=8 (U2C Data AGF), .m2s_req umask=0x10 (M2S Req AGF), .m2s_data umask=0x20 (M2S Data AGF)

uncore interconnect -- IRP:

  unc_i_cache_total_occupancy.mem   event=0xf,umask=4   total IRP occupancy of inbound read and write requests to coherent memory; effectively the sum of read occupancy and write occupancy
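The CXLCM flit counters above split received flits into protocol, control, and headerless classes. A minimal sketch of turning raw counts (assumed already collected, e.g. via perf stat, for unc_cxlcm_rxc_flits.valid/.prot/.ctrl/.no_hdr) into a traffic mix; the function name and the sample numbers are hypothetical.

def flit_mix(valid: int, prot: int, ctrl: int, no_hdr: int) -> dict:
    """Fraction of received flits in each class; `valid` is the total."""
    if valid == 0:
        return {"protocol": 0.0, "control": 0.0, "headerless": 0.0}
    return {
        "protocol": prot / valid,
        "control": ctrl / valid,
        "headerless": no_hdr / valid,
    }

# Hypothetical sample window:
print(flit_mix(valid=1_000_000, prot=800_000, ctrl=50_000, no_hdr=150_000))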
  unc_i_clockticks      event=1      IRP clock cycles while the event is enabled
  unc_i_faf_inserts     event=0x18   FAF request inserts from TC
  unc_i_faf_occupancy   event=0x19   FAF occupancy

  unc_i_irp_all (event=0x20):
    .inbound_inserts    umask=1   all inserts inbound (p2p + faf + cset)
    .outbound_inserts   umask=2   all inserts outbound (BL, AK, snoops)
    .evicts             umask=4   all inserts outbound (BL, AK, snoops)

  unc_i_misc0 (event=0x1e) -- miscellaneous events, set 0:
    .fast_req               umask=1      fastpath requests
    .fast_rej               umask=2      fastpath rejects
    .2nd_rd_insert          umask=4      cache inserts of read transactions as secondary
    .2nd_wr_insert          umask=8      cache inserts of write transactions as secondary
    .2nd_atomic_insert      umask=0x10   cache inserts of atomic transactions as secondary
    .fast_xfer              umask=0x20   fastpath transfers from primary to secondary
    .pf_ack_hint            umask=0x40   prefetch ack hints from primary to secondary
    .slowpath_fwpf_no_prf   umask=0x80   slow-path fwpf did not find a prefetch

  unc_i_misc1 (event=0x1f) -- miscellaneous events, set 1:
    .slow_i           umask=1      slow transfer of I line (snoop took cache-line ownership before the write of data was committed)
    .slow_s           umask=2      slow transfer of S line (secondary received a transfer without sufficient MESI state)
    .slow_e           umask=4      slow transfer of E line (secondary received a transfer with sufficient MESI state)
    .slow_m           umask=8      slow transfer of M line (snoop took cache-line ownership before the write of data was committed)
    .lost_fwd         umask=0x10   snoop pulled away ownership before a write was committed
    .sec_rcvd_invld   umask=0x20   secondary received a transfer without sufficient MESI state
    .sec_rcvd_vld     umask=0x40   secondary received a transfer with sufficient MESI state

  unc_i_snoop_resp (event=0x12) -- snoop responses: .miss umask=1, .hit_i umask=2, .hit_es umask=4, .hit_m umask=8, .snpcode umask=0x10, .snpdata umask=0x20, .snpinv umask=0x40

  unc_i_txr2_ad01_stall_credit_cycles   event=0x1c   times a request to the M2PCIe could not issue because no egress credits were available on AD0, AD1, or both; stalls on both AD0 and AD1 count as 2
  unc_i_txr2_ad0_stall_credit_cycles    event=0x1a   times a request to the M2PCIe could not issue because no AD0 egress credits were available
  unc_i_txr2_ad1_stall_credit_cycles    event=0x1b   times a request to the M2PCIe could not issue because no AD1 egress credits were available
  unc_i_txr2_bl_stall_credit_cycles     event=0x1d   times data to the R2PCIe could not issue because no BL egress credits were available
  unc_i_txs_data_inserts_ncb            event=0xd    outbound read requests (NCB) issued to the switch, towards the devices
  unc_i_txs_data_inserts_ncs            event=0xe    outbound read requests (NCS) issued to the switch, towards the devices
  unc_i_txs_request_occupancy           event=0xc    accumulates outstanding outbound requests from the IRP to the switch; use with the allocations events to compute the average latency of outbound requests

uncore interconnect -- M2M (mesh to memory):

  unc_m2m_clockticks   event=1   clockticks of the mesh-to-memory block

  unc_m2m_direct2core_not_taken_dirstate (event=0x17) -- cycles when direct-to-core mode, which bypasses the CHA, was disabled: umask=7 (all); .non_cisgress umask=2 counts non-cisgress D2C not honoured by egress due to directory state constraints
  unc_m2m_direct2core_not_taken_notforked   event=0x4a   time when FM did not do D2C for fill reads (cross-tile case)
  unc_m2m_direct2core_txn_override (event=0x18) -- reads in which a direct-to-core transaction was overridden: umask=3 (all); .pmm_hit umask=1 (2LM hit?); .cisgress umask=2
  unc_m2m_direct2upitxn_override.pmm_hit   event=0x1c,umask=1   times a direct-to-UPI transaction was overridden: D2K was not honored even though the incoming request had d2k set
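unc_i_txs_request_occupancy is documented above as usable together with the allocations events to compute average outbound-request latency: by Little's law, occupancy summed over cycles divided by the number of inserts gives the average residency in cycles. A minimal sketch; the uncore frequency used for the nanosecond conversion is a placeholder, not from this table.

def avg_outbound_latency_cycles(occupancy_sum: int, inserts: int) -> float:
    """Average IRP-to-switch request latency in uncore clock cycles."""
    return occupancy_sum / inserts if inserts else 0.0

def to_nanoseconds(cycles: float, uncore_ghz: float = 1.8) -> float:
    # 1.8 GHz is an assumed uncore frequency; read the real one per system.
    return cycles / uncore_ghz

lat = avg_outbound_latency_cycles(occupancy_sum=42_000_000, inserts=300_000)
print(f"{lat:.1f} cycles ~ {to_nanoseconds(lat):.1f} ns")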
  unc_m2m_direct2upi_not_taken_credits   event=0x1b,umask=7   reads in which a direct-to-Intel-UPI transaction was overridden
  unc_m2m_direct2upi_not_taken_dirstate (event=0x1a) -- cycles when direct-to-Intel-UPI was disabled: umask=7 (all); .egress umask=1 (D2K not honoured by egress due to directory state constraints); .non_cisgress umask=2 (non-cisgress D2K not honored due to directory constraints); .cisgress umask=4 (cisgress D2K not honored due to directory constraints)
  unc_m2m_direct2upi_taken   event=0x19,umask=7   messages sent direct to Intel UPI (egress did D2K, direct to KTI)
  unc_m2m_direct2upi_txn_override (event=0x1c) -- reads in which a message sent direct to Intel UPI was overridden: umask=3 (all); .cisgress umask=2

  unc_m2m_directory_hit (event=0x1d) and unc_m2m_directory_miss (event=0x1e) share one umask layout:
    .dirty_i umask=1, .dirty_s umask=2, .dirty_p umask=4, .dirty_a umask=8 (dirty line in I/S/L/A state)
    .clean_i umask=0x10, .clean_s umask=0x20, .clean_p umask=0x40, .clean_a umask=0x80 (non-dirty line in I/S/L/A state)

  unc_m2m_directory_lookup (event=0x20) -- multi-socket cacheline directory lookups; counts hit data returns to egress, to non-persistent memory, with the given directory state: .any umask=1, .state_i umask=2, .state_s umask=4, .state_a umask=8

  unc_m2m_directory_update (event=0x21) -- multi-socket cacheline directory updates to non-persistent memory (DRAM or HBM). In the umasks below, the low byte selects the transition (0x01 any, 0x02 I-to-S, 0x04 I-to-A, 0x08 S-to-I, 0x10 S-to-A, 0x20 A-to-I, 0x40 A-to-S) and the high digit selects 1LM/2LM-hit data returns (0x1xx), 2LM-miss data returns (0x2xx), or both (0x3xx):
    .any umask=0x301, .i2s umask=0x302, .i2a umask=0x304, .s2i umask=0x308, .s2a umask=0x310, .a2i umask=0x320, .a2s umask=0x340
    .hit_non_pmm umask=0x101, .i_to_s_hit_non_pmm umask=0x102, .i_to_a_hit_non_pmm umask=0x104, .s_to_i_hit_non_pmm umask=0x108, .s_to_a_hit_non_pmm umask=0x110, .a_to_i_hit_non_pmm umask=0x120, .a_to_s_hit_non_pmm umask=0x140
    .miss_non_pmm umask=0x201, .i_to_s_miss_non_pmm umask=0x202, .i_to_a_miss_non_pmm umask=0x204, .s_to_i_miss_non_pmm umask=0x208, .s_to_a_miss_non_pmm umask=0x210, .a_to_i_miss_non_pmm umask=0x220, .a_to_s_miss_non_pmm umask=0x240

  unc_m2m_egress_ordering (event=0xba) -- cycles IV was blocked in the TGR egress due to SNP/GO ordering requirements: .iv_snoopgo_up umask=0x80000001, .iv_snoopgo_dn umask=0x80000004
  unc_m2m_igr_starve_winner.mask7   event=0x44,umask=0x80   count when the starve global counter is at 7
  unc_m2m_imc_reads (event=0x24) -- reads issued from M2M to the iMC. The low byte of the umask selects the type (0x01 normal, 0x02 ISOCH, 0x04 all, 0x08 to DDR as memory / to NM 1LM, 0x10 to DDR as cache / to NM cache, 0x20 to PMM, 0x40 from TGR) and the high digit selects channel 0 (0x1xx), channel 1 (0x2xx), or both (0x3xx):
    all channels: .all 0x304, .normal 0x301, .isoch 0x302, .from_tgr 0x340, .to_ddr_as_mem / .to_nm1lm 0x308, .to_ddr_as_cache / .to_nmcache 0x310, .to_pmm 0x320
    ch0: .ch0_all 0x104, .ch0_normal 0x101, .ch0_isoch 0x102, .ch0_from_tgr 0x140, .ch0_to_ddr_as_mem / .ch0.to_nm1lm 0x108, .ch0_to_ddr_as_cache / .ch0.to_nmcache 0x110, .ch0_to_pmm 0x120
    ch1: .ch1_all 0x204, .ch1_normal 0x201, .ch1_isoch 0x202, .ch1_from_tgr 0x240, .ch1_to_ddr_as_mem / .ch1.to_nm1lm 0x208, .ch1_to_ddr_as_cache / .ch1.to_nmcache 0x210, .ch1_to_pmm 0x220

  unc_m2m_imc_writes (event=0x25) -- writes issued from M2M to the iMC. The low byte selects the type (0x01 full line non-ISOCH, 0x02 partial non-ISOCH, 0x04 full line ISOCH, 0x08 partial ISOCH, 0x10 all, 0x20 to DDR as memory, 0x40 to DDR as cache, 0x80 to PMM); 0x800 selects channel 0, 0x1000 channel 1, 0x1800 both:
    all channels: .all 0x1810, .full 0x1801, .partial 0x1802, .full_isoch 0x1804, .partial_isoch 0x1808, .to_ddr_as_mem 0x1820, .to_ddr_as_cache 0x1840, .to_pmm 0x1880
    ch0: .ch0_all 0x810, .ch0_full 0x801, .ch0_partial 0x802, .ch0_full_isoch 0x804, .ch0_partial_isoch 0x808, .ch0_to_ddr_as_mem 0x820, .ch0_to_ddr_as_cache 0x840, .ch0_to_pmm 0x880
    ch1: .ch1_all 0x1010, .ch1_full 0x1001, .ch1_partial 0x1002, .ch1_full_isoch 0x1004, .ch1_partial_isoch 0x1008, .ch1_to_ddr_as_mem 0x1020, .ch1_to_ddr_as_cache 0x1040, .ch1_to_pmm 0x1080
    The to_pmm subevents count all PMM DIMM write requests (full line and partial) sent from M2M to the iMC. The non-inclusive variants (.ni, .ni_miss, .from_tgr, per channel and for all channels) are listed in the source with event=0x25 and no umask.

  unc_m2m_prefcam_cis_drops   event=0x5c
  unc_m2m_prefcam_demand_drops (event=0x58) -- data prefetches dropped: .ch0_xpt umask=1, .ch0_upi umask=2, .ch1_xpt umask=4, .ch1_upi umask=8, .xpt_allch umask=5, .upi_allch umask=0xa
  unc_m2m_prefcam_demand_merge (event=0x5d): .xpt_allch umask=5 (XPT, all channels), .upi_allch umask=0xa (UPI, all channels)
  unc_m2m_prefcam_demand_no_merge (event=0x5e) -- demands not merged with CAMed prefetches: .wr_squashed umask=0x10, .wr_merged umask=0x20, .rd_merged umask=0x40
  unc_m2m_prefcam_inserts (event=0x56) -- prefetch CAM inserts: .ch0_xpt umask=1, .ch0_upi umask=2, .ch1_xpt umask=4, .ch1_upi umask=8, .xpt_allch umask=5, .upi_allch umask=0xa
  unc_m2m_prefcam_occupancy (event=0x54) -- prefetch CAM occupancy: .ch0 umask=1, .ch1 umask=2, .allch umask=3
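Since unc_m2m_imc_reads.all and unc_m2m_imc_writes.all above count read and write requests the M2M issues to the iMC, they support a rough bandwidth estimate. A sketch under the assumption that each request moves one 64-byte cache line; partial writes (counted separately by the .partial subevents) make this an upper bound.

CACHELINE_BYTES = 64  # standard x86 cache-line size

def approx_bandwidth_gbs(reads: int, writes: int, seconds: float) -> float:
    """Approximate M2M-to-iMC bandwidth in GB/s over a sample window."""
    return (reads + writes) * CACHELINE_BYTES / seconds / 1e9

# Hypothetical counts from a 1-second window:
print(approx_bandwidth_gbs(reads=1_500_000_000, writes=500_000_000, seconds=1.0))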
  unc_m2m_prefcam_resp_miss (event=0x5f): .ch0 umask=1, .ch1 umask=2, .allch umask=3
  unc_m2m_prefcam_rxc_deallocs (event=0x62): .squashed umask=1, .1lm_posted umask=2, .pmm_memmode_accept umask=4, .cis umask=8
  unc_m2m_prefcam_rxc_occupancy   event=0x60        AD ingress (from CMS) occupancy, prefetches
  unc_m2m_rxc_ad_inserts          event=2,umask=1   AD ingress (from CMS) allocations
  unc_m2m_rxc_ad_occupancy        event=3           AD ingress (from CMS) occupancy

  unc_m2m_tag_hit (event=0x1f) -- a tag hit indicates a request sent to the iMC hit in near memory: .nm_rd_hit_clean umask=1 (clean full-line read hits, reads and RFOs), .nm_rd_hit_dirty umask=2 (dirty full-line read hits), .nm_ufill_hit_clean umask=4 (clean underfill hits due to a partial write), .nm_ufill_hit_dirty umask=8 (dirty underfill read hits due to a partial write)
  unc_m2m_tag_miss         event=0x4b,umask=3
  unc_m2m_tgr_ad_credits   event=0x2e   number of AD ingress credits
  unc_m2m_tgr_bl_credits   event=0x2f   number of BL ingress credits
  unc_m2m_tracker_inserts (event=0x32): .ch0 umask=0x104, .ch1 umask=0x204
  unc_m2m_tracker_occupancy (event=0x33): .ch0 umask=1, .ch1 umask=2
  unc_m2m_wpq_flush (event=0x42): .ch0 umask=1, .ch1 umask=2
  unc_m2m_wpq_no_reg_crd (event=0x37) -- M2M-to-iMC WPQ cycles with regular credits: .chn0 umask=1, .chn1 umask=2
  unc_m2m_wpq_no_spec_crd (event=0x38) -- M2M-to-iMC WPQ cycles with special credits: .chn0 umask=1, .chn1 umask=2
  unc_m2m_wr_tracker_inserts (event=0x40): .ch0 umask=1, .ch1 umask=2
  unc_m2m_wr_tracker_ne (event=0x35) -- write tracker cycles not empty: .ch0 umask=1, .ch1 umask=2, .mirr umask=4 (mirror), .mirr_nontgr umask=8, .mirr_pwr umask=0x10
  unc_m2m_wr_tracker_nonposted_inserts (event=0x4d): .ch0 umask=1, .ch1 umask=2
  unc_m2m_wr_tracker_nonposted_occupancy (event=0x4c): .ch0 umask=1, .ch1 umask=2
  unc_m2m_wr_tracker_posted_inserts (event=0x48): .ch0 umask=1, .ch1 umask=2
  unc_m2m_wr_tracker_posted_occupancy (event=0x47): .ch0 umask=1, .ch1 umask=2

uncore interconnect -- M3UPI:

  unc_m3upi_cha_ad_credits_empty (event=0x22) -- no credits available to send to the CBox on the AD ring (covers higher CBoxes): .vna umask=1 (VNA messages), .wb umask=2 (writebacks), .req umask=4 (requests), .snp umask=8 (snoops)
  unc_m3upi_clockticks       event=1      number of M3UPI clock cycles while the event is enabled
  unc_m3upi_cms_clockticks   event=0xc0   M3UPI CMS clockticks
  unc_m3upi_d2c_sent         event=0x2b   cases where BL sends direct to core
  unc_m3upi_d2u_sent         event=0x2a   cases where SMI3 sends a D2U command
  unc_m3upi_egress_ordering (event=0xba) -- cycles IV was blocked in the TGR egress due to SNP/GO ordering requirements: .iv_snoopgo_up umask=1, .iv_snoopgo_dn umask=4
unc_m3upi_multi_slot_rcvd (uncore interconnect; event=0x3e) -- Multi Slot Flit Received: multi-slot flit received with S0, S1 and/or S2 populated (can use AK S0/S1 masks for AK allocations)
  .ad_slot0  umask=0x1   AD - Slot 0
  .ad_slot1  umask=0x2   AD - Slot 1
  .ad_slot2  umask=0x4   AD - Slot 2
  .ak_slot0  umask=0x10  AK - Slot 0
  .ak_slot2  umask=0x20  AK - Slot 2
  .bl_slot0  umask=0x8   BL - Slot 0

unc_m3upi_rxc_arb_lost_vn0 (uncore interconnect; event=0x4b) -- Lost Arb for VN0: VN0 message requested arbitration but lost.
  Message classes (shared by the VN0/VN1 arbitration, credit, ingress and packing-miss groups below):
    REQ on AD -- Home (REQ) messages; generally used to send requests, request responses and snoop responses
    SNP on AD -- Snoop (SNP) messages; used for outgoing snoops
    RSP on AD or BL -- Response (RSP) messages; used to transmit a variety of protocol flits, including grants and completions (CMP)
    WB on BL -- Data Response (WB) messages; generally used to transmit data with coherency (remote reads and writes, or cache-to-cache transfers, send their data as WB)
    NCB on BL -- Non-Coherent Broadcast messages; generally used to transmit data without coherency (e.g. non-coherent read data returns)
    NCS on BL -- Non-Coherent Standard messages
  Subevents (same umasks in every group of this family): .ad_req umask=0x1, .ad_snp umask=0x2, .ad_rsp umask=0x4, .bl_rsp umask=0x8, .bl_wb umask=0x10, .bl_ncb umask=0x20, .bl_ncs umask=0x40

unc_m3upi_rxc_arb_lost_vn1 (uncore interconnect; event=0x4c) -- Lost Arb for VN1: VN1 message requested arbitration but lost. Subevents and message classes as for unc_m3upi_rxc_arb_lost_vn0.

unc_m3upi_rxc_arb_misc (uncore interconnect; event=0x4d) -- Arb Miscellaneous
  .no_prog_ad_vn0         umask=0x1   No Progress on Pending AD VN0: arbitration stage made no progress on pending AD VN0 messages because the slotting stage cannot accept a new message
  .no_prog_ad_vn1         umask=0x2   No Progress on Pending AD VN1: likewise for AD VN1
  .no_prog_bl_vn0         umask=0x4   No Progress on Pending BL VN0: likewise for BL VN0
  .no_prog_bl_vn1         umask=0x8   No Progress on Pending BL VN1: likewise for BL VN1
  .adbl_parallel_win_vn0  umask=0x10  AD, BL Parallel Win VN0: AD and BL messages won arbitration concurrently / in parallel
  .adbl_parallel_win_vn1  umask=0x20  AD, BL Parallel Win VN1: likewise for VN1
  .vn01_parallel_win      umask=0x40  VN0, VN1 Parallel Win: VN0 and VN1 arbitration sub-pipelines had parallel winners (at least one AD or BL on each side)
  .all_parallel_win       umask=0x80  Max Parallel Win: VN0 and VN1 arbitration sub-pipelines both produced AD and BL winners (maximum possible parallel winners)

unc_m3upi_rxc_arb_nocrd_vn0 (uncore interconnect; event=0x47) -- No Credits to Arb for VN0: VN0 message blocked from requesting arbitration due to lack of remote UPI credits. Subevents and message classes as for unc_m3upi_rxc_arb_lost_vn0.
unc_m3upi_rxc_arb_nocrd_vn1 (uncore interconnect; event=0x48) -- No Credits to Arb for VN1: VN1 message blocked from requesting arbitration due to lack of remote UPI credits. Subevents and message classes as above.

unc_m3upi_rxc_arb_noreq_vn0 (uncore interconnect; event=0x49) -- Can't Arb for VN0: VN0 message was not able to request arbitration while some other message won arbitration. Subevents and message classes as above.

unc_m3upi_rxc_arb_noreq_vn1 (uncore interconnect; event=0x4a) -- Can't Arb for VN1: likewise for VN1. Subevents and message classes as above.

unc_m3upi_rxc_bypassed (uncore interconnect; event=0x40) -- Ingress Queue Bypasses: number of times a message is bypassed around the Ingress Queue
  .ad_s0_bl_arb   umask=0x2  AD to Slot 0 on BL Arb: AD takes the bypass to slot 0 of an independent flit while a BL message is in arbitration
  .ad_s0_idle     umask=0x1  AD to Slot 0 on Idle: AD takes the bypass to slot 0 of an independent flit while the pipeline is idle
  .ad_s1_bl_slot  umask=0x4  AD + BL to Slot 1: AD takes the bypass to flit slot 1 while merging with a BL message in the same flit
  .ad_s2_bl_slot  umask=0x8  AD + BL to Slot 2: AD takes the bypass to flit slot 2 while merging with a BL message in the same flit

unc_m3upi_rxc_crd_misc (uncore interconnect; event=0x5f) -- Miscellaneous Credit Events
  .any_bgf_fifo        umask=0x1   Any In BGF FIFO: at least one packet (flit) is in the BGF (FIFO only)
  .any_bgf_path        umask=0x2   Any in BGF Path: at least one packet (flit) is in the BGF path (i.e. pipe to FIFO)
  .lt1_for_d2k         umask=0x10  D2K credit count is less than 1
  .lt2_for_d2k         umask=0x20  D2K credit count is less than 2
  .vn0_no_d2k_for_arb  umask=0x4   No D2K For Arb: VN0 BL RSP message was blocked from requesting arbitration due to lack of D2K CMP credit
  .vn1_no_d2k_for_arb  umask=0x8   VN1 BL RSP message was blocked from requesting arbitration due to lack of D2K CMP credits

unc_m3upi_rxc_crd_occ (uncore interconnect; event=0x60) -- Credit Occupancy
  .consumed       umask=0x80  Credits Consumed: number of remote VNA credits consumed per cycle
  .d2k_crd        umask=0x10  D2K Credits: D2K completion FIFO credit occupancy (credits in use), accumulated across all cycles
  .flits_in_fifo  umask=0x2   Packets in BGF FIFO: occupancy of M3UPI ingress -> UPI link layer BGF; packets (flits) in FIFO
  .flits_in_path  umask=0x4   Packets in BGF Path: occupancy of M3UPI ingress -> UPI link layer BGF; packets (flits) in path (i.e. pipe to FIFO or FIFO)
  .p1p_fifo       umask=0x40  count of BL messages in pump-1-pending state, in completion FIFO only
  .p1p_total      umask=0x20  count of BL messages in pump-1-pending state, in marker table and in FIFO
  .txq_crd        umask=0x8   Transmit Credits: link layer transmit queue credit occupancy (credits in use), accumulated across all cycles
  .vna_in_use     umask=0x1   VNA In Use: remote UPI VNA credit occupancy (number of credits in use), accumulated across all cycles

unc_m3upi_rxc_cycles_ne_vn0 (uncore interconnect; event=0x43) -- VN0 Ingress (from CMS) Queue - Cycles Not Empty: counts cycles when the UPI Ingress is not empty. Tracks one of the three rings used by the UPI agent; can be used with the UPI Ingress Occupancy Accumulator event to calculate average queue occupancy. Multiple ingress buffers can be tracked at a given time using multiple counters. Subevents and message classes as for unc_m3upi_rxc_arb_lost_vn0.
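As the description above notes, the cycles-not-empty events pair with the corresponding occupancy accumulator: dividing accumulated occupancy by not-empty cycles gives average queue depth while busy, and dividing by clockticks gives average depth overall. A sketch with made-up counter values (the variable names are mine, not perf's):

# Average queue occupancy from an occupancy accumulator, as the event
# descriptions above suggest. All counter values are invented examples.
occupancy_accum  = 1_200_000   # sum over cycles of entries in the queue
not_empty_cycles = 300_000     # e.g. an unc_m3upi_rxc_cycles_ne_vn0.* count
clockticks       = 2_000_000   # unc_m3upi_clockticks over the same interval

avg_depth_while_busy = occupancy_accum / not_empty_cycles   # 4.0 entries
avg_depth_overall    = occupancy_accum / clockticks         # 0.6 entries
print(avg_depth_while_busy, avg_depth_overall)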
unc_m3upi_rxc_data_flits_not_sent (uncore interconnect; event=0x55) -- Data Flit Not Sent: data flit is ready for transmission but could not be sent
  .all             umask=0x1   All: not sent for any reason (e.g. low credits, low TSV, stall injection)
  .no_bgf          umask=0x8   No BGF Credits
  .no_txq          umask=0x10  No TxQ Credits
  .tsv_hi          umask=0x2   TSV High: not sent while TSV high
  .valid_for_flit  umask=0x4   Cycle valid for Flit: not sent while the cycle was valid for flit transmission

unc_m3upi_rxc_flits_gen_bl (uncore interconnect; event=0x57) -- Generating BL Data Flit Sequence
  .p0_wait        umask=0x1   Wait on Pump 0: waiting for data pump 0
  .p1p_at_limit   umask=0x10  pump-1-pending logic is at capacity (pending table plus completion FIFO at limit)
  .p1p_busy       umask=0x8   pump-1-pending logic is tracking at least one message
  .p1p_fifo_full  umask=0x40  pump-1-pending completion FIFO is full
  .p1p_hold_p0    umask=0x20  pump-1-pending logic is at or near capacity, such that pump-0-only BL messages are getting stalled in the slotting stage
  .p1p_to_limbo   umask=0x4   a BL message finished but is in limbo and moved to pump-1-pending logic
  .p1_wait        umask=0x2   Wait on Pump 1: waiting for data pump 1

unc_m3upi_rxc_flits_misc (uncore interconnect; event=0x58) -- UNC_M3UPI_RxC_FLITS_MISC
  .s2req_in_holdoff  umask=0x4  slot 2 request naturally serviced during hold-off period
  .s2req_in_service  umask=0x8  slot 2 request forcibly serviced during service window
  .s2req_received    umask=0x1  slot 2 request received from link layer while idle (with no slot 2 request active immediately prior)
  .s2req_withdrawn   umask=0x2  slot 2 request withdrawn during hold-off period or service window

unc_m3upi_rxc_flits_slot_bl (uncore interconnect; event=0x56) -- Slotting BL Message Into Header Flit
  .all                    umask=0x1   All
  .need_data              umask=0x2   Needs Data Flit: BL message requires a data flit sequence
  .p0_wait                umask=0x4   Wait on Pump 0: waiting for header pump 0
  .p1_not_req             umask=0x10  Don't Need Pump 1: header pump 1 is not required for flit
  .p1_not_req_but_bubble  umask=0x20  Don't Need Pump 1 - Bubble: header pump 1 not required for flit, but flit transmission delayed
  .p1_not_req_not_avail   umask=0x40  Don't Need Pump 1 - Not Avail: header pump 1 not required for flit and not available
  .p1_wait                umask=0x8   Wait on Pump 1: waiting for header pump 1

unc_m3upi_rxc_flit_gen_hdr1 (uncore interconnect; event=0x51) -- Flit Gen - Header 1: events related to header flit generation, set 1
  .accum              umask=0x1   Accumulate: header flit slotting control state machine is in any accumulate state; a multi-message flit may be assembled over multiple cycles
  .accum_read         umask=0x2   Accumulate Ready: state machine is in accum_ready state; flit is ready to send but transmission is blocked; more messages may be slotted into the flit
  .accum_wasted       umask=0x4   Accumulate Wasted: flit is being assembled over multiple cycles, but no additional message is slotted in the current cycle; the accumulate cycle is wasted
  .ahead_blocked      umask=0x8   Run-Ahead - Blocked: header flit slotting entered run-ahead state; a new header flit is started while transmission of the prior, fully assembled flit is blocked
  .ahead_msg1_after   umask=0x80  run-ahead mode: message was slotted only after run-ahead was over; run-ahead mode definitely wasted
  .ahead_msg1_during  umask=0x10  Run-Ahead - Message: one message slotted during run-ahead
  .ahead_msg2_after   umask=0x20  run-ahead mode: second message slotted immediately after run-ahead; potential run-ahead success
  .ahead_msg2_sent    umask=0x40  run-ahead mode: two (or three) message flit sent immediately after run-ahead; complete run-ahead success

unc_m3upi_rxc_flit_gen_hdr2 (uncore interconnect; event=0x52) -- Flit Gen - Header 2: events related to header flit generation, set 2
  .par            umask=0x4   Parallel Ok: new header flit construction may proceed in parallel with a data flit sequence
  .par_flit       umask=0x10  Parallel Flit Finished: header flit finished assembly in parallel with data flit sequence
  .par_msg        umask=0x8   Parallel Message: message is slotted into header flit in parallel with data flit sequence
  .rmstall        umask=0x1   Rate-matching Stall: rate-matching stall injected
  .rmstall_nomsg  umask=0x2   Rate-matching Stall - No Message: rate-matching stall injected, but no additional message slotted during the stall cycle

unc_m3upi_rxc_hdr_flits_sent (uncore interconnect; event=0x54) -- Sent Header Flit
  .1_msg      umask=0x1   One Message: one message in flit; VNA or non-VNA flit
  .1_msg_vnx  umask=0x8   One Message in non-VNA: one message in flit; non-VNA flit
  .2_msgs     umask=0x2   Two Messages: two messages in flit; VNA flit
  .3_msgs     umask=0x4   Three Messages: three messages in flit; VNA flit
  .slots_1    umask=0x10  One Slot Taken
  .slots_2    umask=0x20  Two Slots Taken
  .slots_3    umask=0x40  All Slots Taken
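Because unc_m3upi_rxc_hdr_flits_sent splits header flits by how many messages they carried, an average packing figure falls out directly; a sketch with invented counter values:

# Header-flit packing efficiency from the message-count breakdown above.
# All counter values are illustrative only.
one_msg, two_msgs, three_msgs = 500_000, 300_000, 200_000  # .1_msg/.2_msgs/.3_msgs

flits    = one_msg + two_msgs + three_msgs
messages = 1 * one_msg + 2 * two_msgs + 3 * three_msgs
print(f"avg messages per header flit: {messages / flits:.2f}")  # 1.70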
unc_m3upi_rxc_hdr_flit_not_sent (uncore interconnect; event=0x53) -- Header Not Sent: header flit is ready for transmission but could not be sent
  .all             umask=0x1   All: not sent for any reason (e.g. no credits, low TSV, stall injection)
  .no_bgf_crd      umask=0x8   No BGF Credits: no BGF credits available
  .no_bgf_no_msg   umask=0x20  No BGF Credits + No Extra Message Slotted: no BGF credits available and no additional message slotted into flit
  .no_txq_crd      umask=0x10  No TxQ Credits: no TxQ credits available
  .no_txq_no_msg   umask=0x40  No TxQ Credits + No Extra Message Slotted: no TxQ credits available and no additional message slotted into flit
  .tsv_hi          umask=0x2   TSV High: not sent while TSV high
  .valid_for_flit  umask=0x4   Cycle valid for Flit: not sent while the cycle was valid for flit transmission

unc_m3upi_rxc_held (uncore interconnect; event=0x50) -- Message Held
  .cant_slot_ad      umask=0x10  Can't Slot AD: some AD message could not be slotted (logical OR of all AD events under INGR_SLOT_CANT_MC_VN{0,1})
  .cant_slot_bl      umask=0x20  Can't Slot BL: some BL message could not be slotted (logical OR of all BL events under INGR_SLOT_CANT_MC_VN{0,1})
  .parallel_attempt  umask=0x4   Parallel Attempt: AD and BL messages attempted to slot into the same flit in parallel
  .parallel_success  umask=0x8   Parallel Success: AD and BL messages were actually slotted into the same flit in parallel
  .vn0               umask=0x1   VN0: VN0 message(s) that couldn't be slotted into the last VN0 flit are held in the slotting stage while a VN1 flit is processed
  .vn1               umask=0x2   VN1: VN1 message(s) that couldn't be slotted into the last VN1 flit are held in the slotting stage while a VN0 flit is processed

unc_m3upi_rxc_packing_miss_vn0 (uncore interconnect; event=0x4e) -- VN0 message can't slot into flit: counts cases where the Ingress has packets to send but did not have time to pack them into the flit before sending to the Agent, so a slot that could have been used was left NULL. Subevents and message classes as for unc_m3upi_rxc_arb_lost_vn0.

unc_m3upi_rxc_packing_miss_vn1 (uncore interconnect; event=0x4f) -- VN1 message can't slot into flit: likewise for VN1. Subevents and message classes as above.

unc_m3upi_rxc_vna_crd (uncore interconnect; event=0x5a) -- Remote VNA Credits
  .any_in_use  umask=0x20  Any In Use: at least one remote VNA credit is in use
  .corrected   umask=0x1   Corrected: number of remote VNA credits corrected (local return) per cycle
  .lt1         umask=0x2   Level < 1: remote VNA credit level is less than 1 (i.e. no VNA credits available)
  .lt10        umask=0x10  Level < 10: remote VNA credit level is less than 10; parallel VN0/VN1 arb not possible
  .lt4         umask=0x4   Level < 4: remote VNA credit level is less than 4; BL (or AD requiring 4 VNA) cannot arb on VNA
  .lt5         umask=0x8   Level < 5: remote VNA credit level is less than 5; parallel AD/BL arb on VNA not possible
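The .lt* umasks of unc_m3upi_rxc_vna_crd count cycles spent below fixed credit thresholds, so normalizing them by clockticks shows how often each arbitration mode was unavailable; a sketch with illustrative numbers:

# Fraction of cycles below each remote-VNA credit threshold (see
# unc_m3upi_rxc_vna_crd above). All counter values are made up.
clockticks = 2_000_000
below = {"lt1": 10_000, "lt4": 80_000, "lt5": 120_000, "lt10": 600_000}

for level, cycles in below.items():
    print(f"{level:>4}: {cycles / clockticks:6.1%} of cycles")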
unc_m3upi_rxc_vna_crd_misc (uncore interconnect; event=0x59) -- UNC_M3UPI_RxC_VNA_CRD_MISC
  .req_adbl_alloc_l5    umask=0x2   remote VNA credit count was less than 5 and allocation to AD or BL messages was required
  .req_vn01_alloc_lt10  umask=0x1   remote VNA credit count was less than 10 and allocation to VN0 or VN1 was required
  .vn0_just_ad          umask=0x10  on VN0, remote VNA credits were allocated only to AD messages, not to BL
  .vn0_just_bl          umask=0x20  on VN0, remote VNA credits were allocated only to BL messages, not to AD
  .vn0_only             umask=0x4   remote VNA credits were allocated only to VN0, not to VN1
  .vn1_just_ad          umask=0x40  on VN1, remote VNA credits were allocated only to AD messages, not to BL
  .vn1_just_bl          umask=0x80  on VN1, remote VNA credits were allocated only to BL messages, not to AD
  .vn1_only             umask=0x8   remote VNA credits were allocated only to VN1, not to VN0

unc_m3upi_txc_ad_arb_fail (uncore interconnect; event=0x30) -- Failed ARB for AD: AD arb but no win; arb request asserted but not won. Subevents: .vn0_req umask=0x1, .vn0_snp umask=0x2, .vn0_rsp umask=0x4, .vn0_wb umask=0x8, .vn1_req umask=0x10, .vn1_snp umask=0x20, .vn1_rsp umask=0x40, .vn1_wb umask=0x80 (VN0/VN1 REQ, SNP, RSP and WB messages).

unc_m3upi_txc_ad_flq_bypass (uncore interconnect; event=0x2c) -- AD FlowQ Bypass: counts cases when the AD FlowQ is bypassed (S0, S1 and S2 indicate which slot was bypassed, with S0 having the highest priority and S2 the least)
  (no umask)     all bypass cases
  .ad_slot0      umask=0x1
  .ad_slot1      umask=0x2
  .ad_slot2      umask=0x4
  .bl_early_rsp  umask=0x8

unc_m3upi_txc_ad_flq_cycles_ne (uncore interconnect; event=0x27) -- AD Flow Q Not Empty: number of cycles the AD Egress queue is not empty. Subevents: .vn0_req umask=0x1, .vn0_snp umask=0x2, .vn0_rsp umask=0x4, .vn0_wb umask=0x8, .vn1_req umask=0x10, .vn1_snp umask=0x20, .vn1_rsp umask=0x40, .vn1_wb umask=0x80 (VN0/VN1 REQ, SNP, RSP and WB messages).

unc_m3upi_txc_ad_flq_inserts (uncore interconnect; event=0x2d) -- AD Flow Q Inserts: counts the number of allocations into the QPI FlowQ. Can be used with the QPI FlowQ Occupancy Accumulator event to calculate average queue latency. Only a single FlowQ queue can be tracked at any given time; it is not possible to filter based on direction or polarity. Subevents: .vn0_req umask=0x1, .vn0_snp umask=0x2, .vn0_rsp umask=0x4, .vn0_wb umask=0x8, .vn1_req umask=0x10, .vn1_snp umask=0x20, .vn1_rsp umask=0x40 (VN0 REQ/SNP/RSP/WB and VN1 REQ/SNP/RSP messages).
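As the FlowQ insert descriptions say, inserts combine with the occupancy accumulator to give average queue latency: by Little's law, mean time in queue is accumulated occupancy divided by inserts. A sketch with invented counts:

# Average AD FlowQ latency in cycles: occupancy accumulated per cycle
# divided by the number of inserts (Little's law). Example numbers only.
flq_occupancy_accum = 900_000   # unc_m3upi_txc_ad_flq_occupancy.* sum over cycles
flq_inserts         = 150_000   # unc_m3upi_txc_ad_flq_inserts.* count

print(f"avg latency: {flq_occupancy_accum / flq_inserts:.1f} cycles")  # 6.0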
It is not possible to filter based on direction or polarityunc_m3upi_txc_ad_flq_occupancy.vn0_requncore interconnectAD Flow Q Occupancy : VN0 REQ Messagesevent=0x1c,umask=101unc_m3upi_txc_ad_flq_occupancy.vn0_rspuncore interconnectAD Flow Q Occupancy : VN0 RSP Messagesevent=0x1c,umask=401unc_m3upi_txc_ad_flq_occupancy.vn0_snpuncore interconnectAD Flow Q Occupancy : VN0 SNP Messagesevent=0x1c,umask=201unc_m3upi_txc_ad_flq_occupancy.vn0_wbuncore interconnectAD Flow Q Occupancy : VN0 WB Messagesevent=0x1c,umask=801unc_m3upi_txc_ad_flq_occupancy.vn1_requncore interconnectAD Flow Q Occupancy : VN1 REQ Messagesevent=0x1c,umask=0x1001unc_m3upi_txc_ad_flq_occupancy.vn1_rspuncore interconnectAD Flow Q Occupancy : VN1 RSP Messagesevent=0x1c,umask=0x4001unc_m3upi_txc_ad_flq_occupancy.vn1_snpuncore interconnectAD Flow Q Occupancy : VN1 SNP Messagesevent=0x1c,umask=0x2001unc_m3upi_txc_bl_arb_fail.vn0_ncbuncore interconnectFailed ARB for BL : VN0 NCB Messagesevent=0x35,umask=401Failed ARB for BL : VN0 NCB Messages : BL arb but no win; arb request asserted but not wonunc_m3upi_txc_bl_arb_fail.vn0_ncsuncore interconnectFailed ARB for BL : VN0 NCS Messagesevent=0x35,umask=801Failed ARB for BL : VN0 NCS Messages : BL arb but no win; arb request asserted but not wonunc_m3upi_txc_bl_arb_fail.vn0_rspuncore interconnectFailed ARB for BL : VN0 RSP Messagesevent=0x35,umask=101Failed ARB for BL : VN0 RSP Messages : BL arb but no win; arb request asserted but not wonunc_m3upi_txc_bl_arb_fail.vn0_wbuncore interconnectFailed ARB for BL : VN0 WB Messagesevent=0x35,umask=201Failed ARB for BL : VN0 WB Messages : BL arb but no win; arb request asserted but not wonunc_m3upi_txc_bl_arb_fail.vn1_ncbuncore interconnectFailed ARB for BL : VN1 NCS Messagesevent=0x35,umask=0x4001Failed ARB for BL : VN1 NCS Messages : BL arb but no win; arb request asserted but not wonunc_m3upi_txc_bl_arb_fail.vn1_ncsuncore interconnectFailed ARB for BL : VN1 NCB Messagesevent=0x35,umask=0x8001Failed ARB for BL : VN1 NCB Messages : BL arb but no win; arb request asserted but not wonunc_m3upi_txc_bl_arb_fail.vn1_rspuncore interconnectFailed ARB for BL : VN1 RSP Messagesevent=0x35,umask=0x1001Failed ARB for BL : VN1 RSP Messages : BL arb but no win; arb request asserted but not wonunc_m3upi_txc_bl_arb_fail.vn1_wbuncore interconnectFailed ARB for BL : VN1 WB Messagesevent=0x35,umask=0x2001Failed ARB for BL : VN1 WB Messages : BL arb but no win; arb request asserted but not wonunc_m3upi_txc_bl_flq_cycles_ne.vn0_requncore interconnectBL Flow Q Not Empty : VN0 REQ Messagesevent=0x28,umask=101BL Flow Q Not Empty : VN0 REQ Messages : Number of cycles the BL Egress queue is Not Emptyunc_m3upi_txc_bl_flq_cycles_ne.vn0_rspuncore interconnectBL Flow Q Not Empty : VN0 RSP Messagesevent=0x28,umask=401BL Flow Q Not Empty : VN0 RSP Messages : Number of cycles the BL Egress queue is Not Emptyunc_m3upi_txc_bl_flq_cycles_ne.vn0_snpuncore interconnectBL Flow Q Not Empty : VN0 SNP Messagesevent=0x28,umask=201BL Flow Q Not Empty : VN0 SNP Messages : Number of cycles the BL Egress queue is Not Emptyunc_m3upi_txc_bl_flq_cycles_ne.vn0_wbuncore interconnectBL Flow Q Not Empty : VN0 WB Messagesevent=0x28,umask=801BL Flow Q Not Empty : VN0 WB Messages : Number of cycles the BL Egress queue is Not Emptyunc_m3upi_txc_bl_flq_cycles_ne.vn1_requncore interconnectBL Flow Q Not Empty : VN1 REQ Messagesevent=0x28,umask=0x1001BL Flow Q Not Empty : VN1 REQ Messages : Number of cycles the BL Egress queue is Not Emptyunc_m3upi_txc_bl_flq_cycles_ne.vn1_rspuncore interconnectBL Flow Q 
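The inserts and occupancy-accumulator pairing described above is a Little's-law setup: the occupancy event sums queue depth per cycle, so dividing its total by the number of inserts gives the average queue latency in cycles. A minimal sketch of that arithmetic, assuming the raw counter totals were already collected elsewhere (the helper name and example numbers are illustrative, not part of any API); the same pairing applies to the BL flow-queue events that follow.

    # Little's-law post-processing for an inserts/occupancy counter pair
    # (illustrative helper; inputs are raw uncore counter totals collected
    # elsewhere, e.g. unc_m3upi_txc_ad_flq_inserts.vn0_req and
    # unc_m3upi_txc_ad_flq_occupancy.vn0_req over the same window).
    def flowq_stats(occupancy_sum: int, inserts: int, cycles: int) -> dict:
        return {
            # average queue depth across the whole measurement window
            "avg_depth": occupancy_sum / cycles if cycles else 0.0,
            # Little's law: latency = time-integral of depth / arrivals
            "avg_latency_cycles": occupancy_sum / inserts if inserts else 0.0,
        }

    # 1.2M depth-cycles from 300k inserts over 10M cycles
    # -> average depth 0.12, average latency 4 cycles per message.
    print(flowq_stats(1_200_000, 300_000, 10_000_000))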
unc_m3upi_txc_bl_flq_inserts  (uncore interconnect, event=0x2e)
  BL Flow Q Inserts: counts the number of allocations into the QPI FlowQ. This
  can be used in conjunction with the QPI FlowQ Occupancy Accumulator event to
  calculate average queue latency. Only a single FlowQ queue can be tracked at
  any given time; it is not possible to filter based on direction or polarity.
    .vn0_ncb  umask=0x01  (VN0 RSP Messages)
    .vn0_ncs  umask=0x02  (VN0 WB Messages)
    .vn0_wb   umask=0x04  (VN0 NCB Messages)
    .vn0_rsp  umask=0x08  (VN0 NCS Messages)
    .vn1_ncb  umask=0x10  (VN1 RSP Messages)
    .vn1_ncs  umask=0x20  (VN1 WB Messages)
    .vn1_wb   umask=0x40  (VN1_NCS Messages)
    .vn1_rsp  umask=0x80  (VN1_NCB Messages)

unc_m3upi_txc_bl_flq_occupancy  (uncore interconnect, event=0x1d)
  BL Flow Q Occupancy.
    .vn0_rsp  umask=0x01  (VN0 RSP Messages)
    .vn0_wb   umask=0x02  (VN0 WB Messages)
    .vn0_ncb  umask=0x04  (VN0 NCB Messages)
    .vn0_ncs  umask=0x08  (VN0 NCS Messages)
    .vn1_rsp  umask=0x10  (VN1 RSP Messages)
    .vn1_wb   umask=0x20  (VN1 WB Messages)
    .vn1_ncb  umask=0x40  (VN1_NCS Messages)
    .vn1_ncs  umask=0x80  (VN1_NCB Messages)

unc_m3upi_txc_bl_wb_flq_occupancy  (uncore interconnect, event=0x1f)
  BL Flow Q Occupancy (WB).
    .vn0_local    umask=0x01  (VN0 RSP Messages)
    .vn0_through  umask=0x02  (VN0 WB Messages)
    .vn0_wrpull   umask=0x04  (VN0 NCB Messages)
    .vn1_local    umask=0x10  (VN1 RSP Messages)
    .vn1_through  umask=0x20  (VN1 WB Messages)
    .vn1_wrpull   umask=0x40  (VN1_NCS Messages)

unc_m3upi_upi_peer_ad_credits_empty  (uncore interconnect, event=0x20)
  UPI0 AD Credits Empty: no credits available to send to UPIs on the AD Ring.
    .vna      umask=0x01  (VNA)
    .vn0_req  umask=0x02  (VN0 REQ Messages)
    .vn0_snp  umask=0x04  (VN0 SNP Messages)
    .vn0_rsp  umask=0x08  (VN0 RSP Messages)
    .vn1_req  umask=0x10  (VN1 REQ Messages)
    .vn1_snp  umask=0x20  (VN1 SNP Messages)
    .vn1_rsp  umask=0x40  (VN1 RSP Messages)
unc_m3upi_upi_peer_bl_credits_empty  (uncore interconnect, event=0x21)
  UPI0 BL Credits Empty: no credits available to send to UPI on the BL Ring
  (diff between non-SMI and SMI mode).
    .vna          umask=0x01  (VNA)
    .vn0_rsp      umask=0x02  (VN0 REQ Messages)
    .vn0_ncs_ncb  umask=0x04  (VN0 RSP Messages)
    .vn0_wb       umask=0x08  (VN0 SNP Messages)
    .vn1_rsp      umask=0x10  (VN1 REQ Messages)
    .vn1_ncs_ncb  umask=0x20  (VN1 RSP Messages)
    .vn1_wb       umask=0x40  (VN1 SNP Messages)

unc_m3upi_upi_prefetch_spawn  (uncore interconnect, event=0x29)
  FlowQ Generated Prefetch: count cases where FlowQ causes spawn of Prefetch to
  iMC/SMI3 target.

unc_m3upi_vn0_credits_used  (uncore interconnect, event=0x5b)
  VN0 Credit Used: number of times a VN0 credit was used on the DRS message
  channel. In order for a request to be transferred across UPI, it must be
  guaranteed to have a flit buffer on the remote socket to sink into. There are
  two credit pools, VNA and VN0. VNA is a shared pool used to achieve high
  performance. The VN0 pool has reserved entries for each message class and is
  used to prevent deadlock. Requests first attempt to acquire a VNA credit, and
  then fall back to VN0 if they fail. This counts the number of times a VN0
  credit was used. Note that a single VN0 credit holds access to potentially
  multiple flit buffers: a transfer that uses VNA could use 9 flit buffers and
  in that case uses 9 credits, while a transfer on VN0 will only count a single
  credit even though it may use multiple buffers.
    .req  umask=0x01  (REQ on AD: Home messages, generally used to send
                       requests, request responses, and snoop responses)
    .snp  umask=0x02  (SNP on AD: outgoing snoops)
    .rsp  umask=0x04  (RSP on AD: RSP packets transmit a variety of protocol
                       flits, including grants and completions (CMP))
    .wb   umask=0x08  (RSP on BL)
    .ncb  umask=0x10  (WB on BL: Data Response messages; WB is generally used
                       to transmit data with coherency, e.g. remote reads and
                       writes, or cache-to-cache transfers)
    .ncs  umask=0x20  (NCB on BL: Non-Coherent Broadcast messages, generally
                       used to transmit data without coherency, e.g.
                       non-coherent read data returns)

unc_m3upi_vn0_no_credits  (uncore interconnect, event=0x5d)
  VN0 No Credits: number of cycles there were no VN0 credits.
    .req  umask=0x01  (REQ on AD)
    .snp  umask=0x02  (SNP on AD)
    .rsp  umask=0x04  (RSP on AD)
    .wb   umask=0x08  (RSP on BL)
    .ncb  umask=0x10  (WB on BL)
    .ncs  umask=0x20  (NCB on BL)

unc_m3upi_vn1_credits_used  (uncore interconnect, event=0x5c)
  VN1 Credit Used: number of times a VN1 credit was used on the WB message
  channel. The same VNA/VN0 credit-pool scheme described above applies, with
  VN1 in place of VN0: requests first attempt to acquire a VNA credit and fall
  back to VN1 if they fail, and a single VN1 credit may cover multiple flit
  buffers.
    .req  umask=0x01  (REQ on AD)
    .snp  umask=0x02  (SNP on AD)
    .rsp  umask=0x04  (RSP on AD)
    .wb   umask=0x08  (RSP on BL)
    .ncb  umask=0x10  (WB on BL)
    .ncs  umask=0x20  (NCB on BL)
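Two derived numbers fall straight out of the credit descriptions above. A small sketch, assuming the counter totals (plus a companion cycle count for the same box and window) are supplied by the caller; the 9-flit figure is taken from the example in the event description, and the helper itself is illustrative rather than any established metric:

    # Credit-pool arithmetic based on the VN0/VNA description above
    # (illustrative; all inputs are raw counter totals gathered elsewhere).
    def vn0_credit_summary(vn0_used: int, no_credit_cycles: int,
                           total_cycles: int,
                           flits_per_transfer: int = 9) -> dict:
        return {
            # upper bound on flit buffers covered by those single VN0 credits;
            # per the description, a 9-flit transfer on VNA would cost 9
            # credits but only 1 credit on VN0
            "max_flit_buffers_via_vn0": vn0_used * flits_per_transfer,
            # fraction of cycles the reserved VN0 pool was empty
            # (e.g. unc_m3upi_vn0_no_credits.req vs. a cycle counter)
            "vn0_starved_fraction": (no_credit_cycles / total_cycles
                                     if total_cycles else 0.0),
        }

    print(vn0_credit_summary(50_000, 1_000_000, 100_000_000))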
unc_m3upi_vn1_no_credits  (uncore interconnect, event=0x5e)
  VN1 No Credits: number of cycles there were no VN1 credits.
    .req  umask=0x01  (REQ on AD)
    .snp  umask=0x02  (SNP on AD)
    .rsp  umask=0x04  (RSP on AD)
    .wb   umask=0x08  (RSP on BL)
    .ncb  umask=0x10  (WB on BL)
    .ncs  umask=0x20  (NCB on BL)

unc_m3upi_wb_occ_compare  (uncore interconnect, event=0x7e)
  UNC_M3UPI_WB_OCC_COMPARE.*
    .rt_gt_localdest_vn0              umask=0x01
    .rt_eq_localdest_vn0              umask=0x02
    .rt_lt_localdest_vn0              umask=0x04
    .rt_gt_localdest_vn1              umask=0x10
    .rt_eq_localdest_vn1              umask=0x20
    .rt_lt_localdest_vn1              umask=0x40
    .bothnonzero_rt_gt_localdest_vn0  umask=0x81
    .bothnonzero_rt_eq_localdest_vn0  umask=0x82
    .bothnonzero_rt_lt_localdest_vn0  umask=0x84
    .bothnonzero_rt_gt_localdest_vn1  umask=0x90
    .bothnonzero_rt_eq_localdest_vn1  umask=0xa0
    .bothnonzero_rt_lt_localdest_vn1  umask=0xc0

unc_m3upi_wb_pending  (uncore interconnect, event=0x7d)
  UNC_M3UPI_WB_PENDING.*
    .localdest_vn0     umask=0x01
    .routethru_vn0     umask=0x02
    .local_and_rt_vn0  umask=0x04
    .waiting4pull_vn0  umask=0x08
    .localdest_vn1     umask=0x10
    .routethru_vn1     umask=0x20
    .local_and_rt_vn1  umask=0x40
    .waiting4pull_vn1  umask=0x80
unc_m3upi_xpt_pftch  (uncore interconnect, event=0x61)
  UNC_M3UPI_XPT_PFTCH.*
    .arrived     umask=0x01  (xpt prefetch message arrived in ingress pipeline)
    .bypass      umask=0x02  (xpt prefetch message took bypass path)
    .arb         umask=0x04  (xpt prefetch message is making arbitration
                              request)
    .lost_arb    umask=0x08  (xpt prefetch message lost arbitration)
    .flitted     umask=0x10  (xpt prefetch message was slotted into flit,
                              non-bypass)
    .lost_old    umask=0x20  (xpt prefetch message was dropped because it
                              became too old)
    .lost_qfull  umask=0x40  (xpt prefetch message was dropped because it was
                              overwritten by a new message while the prefetch
                              queue was full)

uncore_mdf

unc_mdf_crs_txr_inserts  (uncore interconnect, event=0x47)
  Number of allocations into the CRS Egress used to queue up requests destined
  to the mesh.
    .ad_bnc  umask=0x01  (AD Bounceable)
    .ad_crd  umask=0x02  (AD credited)
    .bl_bnc  umask=0x04  (BL Bounceable)
    .bl_crd  umask=0x08  (BL credited)
    .ak      umask=0x10  (AK)
    .iv      umask=0x20  (IV)
    .akc     umask=0x40  (AKC)

unc_mdf_crs_txr_v_bounces  (uncore interconnect, event=0x4b)
  Number of cycles incoming messages from the vertical ring are bounced at the
  SBO Ingress (V-EMIB).
    .ad   umask=0x01
    .bl   umask=0x02
    .ak   umask=0x04
    .iv   umask=0x08
    .akc  umask=0x10

unc_mdf_fast_asserted  (uncore interconnect, event=0x15)
  Counts the number of cycles when the distress signals are asserted based on
  the SBO Ingress threshold.
    .ad_bnc  umask=0x01  (AD bnc)
    .bl_crd  umask=0x02  (BL bnc)

unc_upi_clockticks  (uncore interconnect, event=0x1)
  UPI Clockticks: number of UPI LL clock cycles while the event is enabled.

unc_upi_direct_attempts  (uncore interconnect, event=0x12)
  Direct packet attempts: counts the number of DRS packets on which
  direct2core/direct2UPI was attempted. There are 4 mutually exclusive filters:
  filter [0] can be used to get successful spawns, while [1:3] provide the
  different failure cases. Note that this does not count packets that are not
  candidates for Direct2Core; the only candidates are DRS packets destined for
  Cbos.
    .d2c  umask=0x01  (D2C)
    .d2k  umask=0x02  (D2K)

unc_upi_l1_power_cycles  (uncore interconnect, event=0x21)
  Cycles in L1: number of UPI qfclk cycles spent in L1 power mode. L1 is a mode
  that totally shuts down a UPI link. Use edge detect to count the number of
  instances the UPI link entered L1. Link power states are per link and per
  direction, so for example the Tx direction could be in one state while Rx was
  in another. Because L1 totally shuts down the link, it takes a good amount of
  time to exit this mode.

unc_upi_power_l1_nack  (uncore interconnect, event=0x23)
  L1 Req Nack: counts the number of times a link sends/receives a LinkReqNAck.
  When a UPI link would like to change power state, the Tx side initiates a
  request to the Rx side requesting the change. This request can either be
  accepted or denied: if the Rx side replies with an Ack, the power mode will
  change; if it replies with NAck, no change takes place. This can be filtered
  based on Rx and Tx. An Rx LinkReqNAck refers to receiving an NAck (meaning
  this agent's Tx originally requested the power change); a Tx LinkReqNAck
  refers to sending this command (meaning the peer agent's Tx originally
  requested the power change and this agent accepted it).

unc_upi_power_l1_req  (uncore interconnect, event=0x22)
  L1 Req (same as L1 Ack): counts the number of times a link sends/receives a
  LinkReqAck. The same request/Ack/NAck handshake and Rx/Tx filtering described
  for L1 Req Nack above applies: an Rx LinkReqAck means this agent's Tx
  originally requested the power change, and a Tx LinkReqAck means the peer
  agent's Tx requested it and this agent accepted.
unc_upi_rxl0p_power_cycles  (uncore interconnect, event=0x25)
  Cycles in L0p: number of UPI qfclk cycles spent in L0p power mode. L0p is a
  mode where half of the UPI lanes are disabled, decreasing bandwidth in order
  to save power. It increases snoop and data transfer latencies and decreases
  overall bandwidth. This mode can be very useful in NUMA-optimized workloads
  that largely only utilize UPI for snoops and their responses. Use edge detect
  to count the number of instances the UPI link entered L0p. Link power states
  are per link and per direction, so for example the Tx direction could be in
  one state while Rx was in another.

unc_upi_rxl0_power_cycles  (uncore interconnect, event=0x24)
  Cycles in L0: number of UPI qfclk cycles spent in L0 power mode in the Link
  Layer. L0 is the default mode, which provides the highest performance with
  the most power. Use edge detect to count the number of instances the link
  entered L0. Link power states are per link and per direction. The phy layer
  sometimes leaves L0 for training, which will not be captured by this event.

unc_upi_rxl_any_flits  (uncore interconnect, event=0x4b)
  UNC_UPI_RxL_ANY_FLITS.*
    .slot0    umask=0x01
    .slot1    umask=0x02
    .slot2    umask=0x04
    .data     umask=0x08
    .llcrd    umask=0x10
    .null     umask=0x20
    .llctrl   umask=0x40
    .prothdr  umask=0x80

unc_upi_rxl_basic_hdr_match  (uncore interconnect, event=0x5)
  Matches on Receive path of a UPI port. Match based on UMask-specific bits:
  Z: Message Class (3-bit), Y: Message Class Enable, W: Opcode (4-bit),
  V: Opcode Enable, U: Local Enable, T: Remote Enable, S: Data Hdr Enable,
  R: Non-Data Hdr Enable, Q: Dual Slot Hdr Enable, P: Single Slot Hdr Enable.
  Link Layer control types (LL CTRL, slot NULL, LLCRD) are excluded even under
  specific opcode match_en cases. Note: if Message Class is disabled, opcode is
  expected to be disabled as well.
    .ncb      umask=0xe    (Non-Coherent Bypass)
    .ncs      umask=0xf    (Non-Coherent Standard)
    .ncb_opc  umask=0x10e  (Non-Coherent Bypass, Match Opcode)
    .ncs_opc  umask=0x10f  (Non-Coherent Standard, Match Opcode)

unc_upi_rxl_bypassed  (uncore interconnect, event=0x31)
  RxQ Flit Buffer Bypassed: counts the number of times an incoming flit was
  able to bypass the flit buffer and pass directly across the BGF and into the
  Egress. This is a latency optimization and should generally be the common
  case. If this value is less than the number of flits transferred, it implies
  there was queueing getting onto the ring, and thus the transactions saw
  higher latency.
    .slot0  umask=0x01
    .slot1  umask=0x02
    .slot2  umask=0x04

unc_upi_rxl_crc_errors  (uncore interconnect, event=0xb)
  CRC Errors Detected: number of CRC errors detected in the UPI Agent. Each UPI
  flit incorporates 8 bits of CRC for error detection. This counts the number
  of flits where the CRC was able to detect an error. After an error has been
  detected, the UPI agent will send a request to the transmitting socket to
  resend the flit (as well as any flits that came after it).
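The bypass description above says bypassing should be the common case; comparing it against the flit-buffer allocations (unc_upi_rxl_inserts, defined below) gives a rough back-pressure indicator. A sketch, under the simplifying assumption that every arriving flit either bypasses the buffer or allocates into it:

    # Rx flit-buffer bypass rate (illustrative; assumes bypassed + inserted
    # approximates all arriving flits for the selected slot).
    def rx_bypass_rate(bypassed: int, inserted: int) -> float:
        # bypassed = unc_upi_rxl_bypassed.slot0 total,
        # inserted = unc_upi_rxl_inserts.slot0 total, same window
        total = bypassed + inserted
        return bypassed / total if total else 1.0  # idle link: trivially 100%

    # A rate well below 1.0 implies queueing onto the ring and higher latency,
    # matching the event description.
    print(f"{rx_bypass_rate(9_500_000, 500_000):.1%}")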
unc_upi_rxl_crc_llr_req_transmit  (uncore interconnect, event=0x8)
  LLR Requests Sent: number of LLR requests transmitted. This should generally
  be <= the number of CRC errors detected: if multiple errors are detected
  before the Rx side receives an LLC_REQ_ACK from the Tx side, there is no need
  to send more LLR_REQ_NACKs.

unc_upi_rxl_credits_consumed_vn0  (uncore interconnect, event=0x39)
  VN0 Credit Consumed: counts the number of times an RxQ VN0 credit was
  consumed (i.e. a message uses a VN0 credit for the Rx Buffer). This includes
  packets that went through the RxQ and those that were bypassed.

unc_upi_rxl_credits_consumed_vn1  (uncore interconnect, event=0x3a)
  VN1 Credit Consumed: as above, for RxQ VN1 credits.

unc_upi_rxl_flits  (uncore interconnect, event=0x3)
  Valid Flits Received: shows legal flit time (hides impact of L0p and L0c).
    .slot0     umask=0x01  (count Slot 0; other mask bits determine the types
                            of headers to count)
    .slot1     umask=0x02  (count Slot 1)
    .slot2     umask=0x04  (count Slot 2)
    .data      umask=0x08  (Data Flits consume all slots, but how much is
                            counted is based on the Slot0-2 mask, so the count
                            can be 0-3 depending on which slots are enabled)
    .llcrd     umask=0x10  (LLCRD Not Empty: LLCRD with non-zero payload;
                            applies to slot 2 only, since LLCRD is only allowed
                            in slot 2)
    .null      umask=0x20  (Slot NULL or LLCRD Empty: LLCRD with all zeros is
                            treated as NULL; slot 1 is not treated as NULL if
                            slot 0 is a dual slot; applies to slot 0, 1, or 2)
    .llctrl    umask=0x40  (LLCTRL, equivalent to an idle packet; slot 0 LLCTRL
                            messages)
    .prothdr   umask=0x80  (Protocol Header in slot 0/1/2, depending on slot
                            uMask bits)
    .all_data  umask=0x0f  (All Data)
    .all_null  umask=0x27  (Null FLITs received from any slot)
    .idle      umask=0x47  (Idle)
    .non_data  umask=0x97  (All Non Data)

unc_upi_rxl_inserts  (uncore interconnect, event=0x30)
  RxQ Flit Buffer Allocations: number of allocations into the UPI Rx Flit
  Buffer. Generally, when data is transmitted across UPI, it will bypass the
  RxQ and pass directly to the ring interface. If things back up getting
  transmitted onto the ring, however, it may need to allocate into this buffer,
  increasing the latency. This event can be used in conjunction with the Flit
  Buffer Occupancy event to calculate the average flit buffer lifetime.
    .slot0  umask=0x01
    .slot1  umask=0x02
    .slot2  umask=0x04

unc_upi_rxl_occupancy  (uncore interconnect, event=0x32)
  RxQ Occupancy - All Packets: accumulates the number of elements in the UPI
  RxQ in each cycle. This event can be used in conjunction with the Flit Buffer
  Not Empty event to calculate average occupancy, or with the Flit Buffer
  Allocations event to track average lifetime.
    .slot0  umask=0x01
    .slot1  umask=0x02
    .slot2  umask=0x04

unc_upi_txl0p_power_cycles  (uncore interconnect, event=0x27)
  Cycles in L0p: Tx-side counterpart of unc_upi_rxl0p_power_cycles above;
  number of UPI qfclk cycles spent in L0p power mode.

unc_upi_txl0_power_cycles  (uncore interconnect, event=0x26)
  Cycles in L0: Tx-side counterpart of unc_upi_rxl0_power_cycles above; number
  of UPI qfclk cycles spent in L0 power mode in the Link Layer.

unc_upi_txl_any_flits  (uncore interconnect, event=0x4a)
  UNC_UPI_TxL_ANY_FLITS.*
    .slot0    umask=0x01
    .slot1    umask=0x02
    .slot2    umask=0x04
    .data     umask=0x08
    .llcrd    umask=0x10
    .null     umask=0x20
    .llctrl   umask=0x40
    .prothdr  umask=0x80
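The ALL_DATA flit counts (unc_upi_rxl_flits.all_data above, and the Tx-side unc_upi_txl_flits.all_data below) are what UPI bandwidth figures are usually built from. The 64/9 bytes-per-count constant below is the rule of thumb used by perf's stock UPI bandwidth metrics (a 64-byte cache line crosses the link as a 9-flit group); treat it as an assumption to confirm against the metric files for the platform at hand:

    # UPI data bandwidth estimate from an ALL_DATA flit count (illustrative).
    BYTES_PER_ALL_DATA_COUNT = 64 / 9   # assumed: 64B line per 9-flit group

    def upi_data_bandwidth_gbs(all_data: int, seconds: float) -> float:
        return all_data * BYTES_PER_ALL_DATA_COUNT / seconds / 1e9

    # 9e9 ALL_DATA counts over 10 s -> ~6.4 GB/s of data payload on the link.
    print(f"{upi_data_bandwidth_gbs(9e9, 10.0):.2f} GB/s")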
Note: If Message Class is disabled, we expect opcode to also be disabledunc_upi_txl_basic_hdr_match.ncb_opcuncore interconnectMatches on Transmit path of a UPI Port : Non-Coherent Bypass, Match Opcodeevent=4,umask=0x10e01Matches on Transmit path of a UPI Port : Non-Coherent Bypass, Match Opcode : Matches on Transmit path of a UPI port. Match based on UMask specific bits: Z: Message Class (3-bit) Y: Message Class Enable W: Opcode (4-bit) V: Opcode Enable U: Local Enable T: Remote Enable S: Data Hdr Enable R: Non-Data Hdr Enable Q: Dual Slot Hdr Enable P: Single Slot Hdr Enable Link Layer control types are excluded (LL CTRL, slot NULL, LLCRD) even under specific opcode match_en cases. Note: If Message Class is disabled, we expect opcode to also be disabledunc_upi_txl_basic_hdr_match.ncsuncore interconnectMatches on Transmit path of a UPI Port : Non-Coherent Standardevent=4,umask=0xf01Matches on Transmit path of a UPI Port : Non-Coherent Standard : Matches on Transmit path of a UPI port. Match based on UMask specific bits: Z: Message Class (3-bit) Y: Message Class Enable W: Opcode (4-bit) V: Opcode Enable U: Local Enable T: Remote Enable S: Data Hdr Enable R: Non-Data Hdr Enable Q: Dual Slot Hdr Enable P: Single Slot Hdr Enable Link Layer control types are excluded (LL CTRL, slot NULL, LLCRD) even under specific opcode match_en cases. Note: If Message Class is disabled, we expect opcode to also be disabledunc_upi_txl_basic_hdr_match.ncs_opcuncore interconnectMatches on Transmit path of a UPI Port : Non-Coherent Standard, Match Opcodeevent=4,umask=0x10f01Matches on Transmit path of a UPI Port : Non-Coherent Standard, Match Opcode : Matches on Transmit path of a UPI port. Match based on UMask specific bits: Z: Message Class (3-bit) Y: Message Class Enable W: Opcode (4-bit) V: Opcode Enable U: Local Enable T: Remote Enable S: Data Hdr Enable R: Non-Data Hdr Enable Q: Dual Slot Hdr Enable P: Single Slot Hdr Enable Link Layer control types are excluded (LL CTRL, slot NULL, LLCRD) even under specific opcode match_en cases. Note: If Message Class is disabled, we expect opcode to also be disabledunc_upi_txl_bypasseduncore interconnectTx Flit Buffer Bypassedevent=0x4101Tx Flit Buffer Bypassed : Counts the number of times that an incoming flit was able to bypass the Tx flit buffer and pass directly out the UPI Link. Generally, when data is transmitted across UPI, it will bypass the TxQ and pass directly to the link.  However, the TxQ will be used with L0p and when LLR occurs, increasing latency to transfer out to the linkunc_upi_txl_flits.all_datauncore interconnectValid Flits Sent : All Dataevent=2,umask=0xf01Valid Flits Sent : All Data : Counts number of data flits across this UPI linkunc_upi_txl_flits.all_llcrduncore interconnectValid Flits Sent : All LLCRD Not Emptyevent=2,umask=0x1701Valid Flits Sent : All Data : Shows legal flit time (hides impact of L0p and L0c)unc_upi_txl_flits.all_llctrluncore interconnectValid Flits Sent : All LLCTRLevent=2,umask=0x4701Valid Flits Sent : All LLCTRL : Shows legal flit time (hides impact of L0p and L0c)unc_upi_txl_flits.all_nulluncore interconnectAll Null Flitsevent=2,umask=0x2701unc_upi_txl_flits.all_prothdruncore interconnectValid Flits Sent : All Protocol Headerevent=2,umask=0x8701Valid Flits Sent : All ProtDDR : Shows legal flit time (hides impact of L0p and L0c)unc_upi_txl_flits.datauncore interconnectValid Flits Sent : Dataevent=2,umask=801Valid Flits Sent : Data : Shows legal flit time (hides impact of L0p and L0c). 
unc_upi_txl_flits.idle  [uncore interconnect]  event=0x2,umask=0x47
    Valid Flits Sent : Idle. Shows legal flit time (hides impact of L0p and L0c).

unc_upi_txl_flits.llcrd  [uncore interconnect]  event=0x2,umask=0x10
    Valid Flits Sent : LLCRD Not Empty. Shows legal flit time (hides impact of L0p and L0c). Enables counting of LLCRD (with non-zero payload); this only applies to slot 2, since LLCRD is only allowed in slot 2.

unc_upi_txl_flits.llctrl  [uncore interconnect]  event=0x2,umask=0x40
    Valid Flits Sent : LLCTRL. Shows legal flit time (hides impact of L0p and L0c). Equivalent to an idle packet; enables counting of slot 0 LLCTRL messages.

unc_upi_txl_flits.non_data  [uncore interconnect]  event=0x2,umask=0x97
    Valid Flits Sent : All Non Data. Shows legal flit time (hides impact of L0p and L0c).

unc_upi_txl_flits.null  [uncore interconnect]  event=0x2,umask=0x20
    Valid Flits Sent : Slot NULL or LLCRD Empty. Shows legal flit time (hides impact of L0p and L0c). LLCRD with all zeros is treated as NULL. Slot 1 is not treated as NULL if slot 0 is a dual slot. This can apply to slot 0, 1, or 2.

unc_upi_txl_flits.prothdr  [uncore interconnect]  event=0x2,umask=0x80
    Valid Flits Sent : Protocol Header. Shows legal flit time (hides impact of L0p and L0c). Enables counting of protocol headers in slots 0, 1, 2 (depending on slot uMask bits).

unc_upi_txl_flits.slot0  [uncore interconnect]  event=0x2,umask=0x1
    Valid Flits Sent : Slot 0. Shows legal flit time (hides impact of L0p and L0c). Count slot 0; other mask bits determine the types of headers to count.

unc_upi_txl_flits.slot1  [uncore interconnect]  event=0x2,umask=0x2
    Valid Flits Sent : Slot 1. Shows legal flit time (hides impact of L0p and L0c). Count slot 1; other mask bits determine the types of headers to count.

unc_upi_txl_flits.slot2  [uncore interconnect]  event=0x2,umask=0x4
    Valid Flits Sent : Slot 2. Shows legal flit time (hides impact of L0p and L0c). Count slot 2; other mask bits determine the types of headers to count.

unc_upi_txl_inserts  [uncore interconnect]  event=0x40
    Tx Flit Buffer Allocations. Number of allocations into the UPI Tx Flit Buffer. Generally, when data is transmitted across UPI, it will bypass the TxQ and pass directly to the link. However, the TxQ will be used with L0p and when LLR occurs, increasing latency to transfer out to the link. This event can be used in conjunction with the Flit Buffer Occupancy event in order to calculate the average flit buffer lifetime.

unc_upi_txl_occupancy  [uncore interconnect]  event=0x42
    Tx Flit Buffer Occupancy. Accumulates the number of flits in the TxQ. Generally, when data is transmitted across UPI, it will bypass the TxQ and pass directly to the link. However, the TxQ will be used with L0p and when LLR occurs, increasing latency to transfer out to the link. This can be used with the cycles-not-empty event to track average occupancy, or with the allocations event to track average lifetime in the TxQ.
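As the descriptions of unc_upi_txl_inserts and unc_upi_txl_occupancy note, the allocation count and the accumulated occupancy combine via Little's law. A minimal sketch, assuming all three counts come from the same interval (the UPI clock-cycle count is an assumed companion event, and all values are placeholders):

```python
# Hypothetical counts for one UPI link over one interval.
occupancy_sum = 4_000_000     # unc_upi_txl_occupancy (event=0x42), flits summed each cycle
inserts = 500_000             # unc_upi_txl_inserts   (event=0x40)
qfclk_cycles = 2_000_000_000  # assumed UPI clock count for the same window

# Little's law: average lifetime = accumulated occupancy / allocations.
print(f"avg Tx flit buffer lifetime: {occupancy_sum / inserts:.1f} cycles")
print(f"avg Tx flit buffer depth:    {occupancy_sum / qfclk_cycles:.4f} flits")
```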
unc_upi_vna_credit_return_occupancy  [uncore interconnect]  event=0x44
    VNA Credits Pending Return - Occupancy. Number of VNA credits in the Rx side that are waiting to be returned back across the link.

unc_u_event_msg.*  [uncore interconnect]  event=0x42
    Message Received; umask: vlw_rcvd=0x1 (VLW: Virtual Logical Wire legacy messages received from Uncore), msi_rcvd=0x2 (MSI: Message Signaled Interrupts, sent by devices including PCIe via IOxAPIC; Socket Mode only), ipi_rcvd=0x4 (IPI: Inter-Processor Interrupts), doorbell_rcvd=0x8 (Doorbell), int_prio=0x10 (Interrupts).

unc_u_m2u_misc1.*  [uncore interconnect]  event=0x4d
    umask: rxc_cycles_ne_cbo_ncb=0x1, rxc_cycles_ne_cbo_ncs=0x2, rxc_cycles_ne_upi_ncb=0x4, rxc_cycles_ne_upi_ncs=0x8, txc_cycles_crd_ovf_cbo_ncb=0x10, txc_cycles_crd_ovf_cbo_ncs=0x20, txc_cycles_crd_ovf_upi_ncb=0x40, txc_cycles_crd_ovf_upi_ncs=0x80.

unc_u_m2u_misc2.*  [uncore interconnect]  event=0x4e
    umask: rxc_cycles_full_bl=0x1, rxc_cycles_empty_bl=0x2, txc_cycles_crd_ovf_vn0_ncb=0x4, txc_cycles_crd_ovf_vn0_ncs=0x8, txc_cycles_empty_bl=0x10, txc_cycles_empty_ak=0x20, txc_cycles_empty_akc=0x40, txc_cycles_full_bl=0x80.

unc_u_m2u_misc3.*  [uncore interconnect]  event=0x4f
    umask: txc_cycles_full_ak=0x1, txc_cycles_full_akc=0x2.

unc_u_phold_cycles.assert_to_ack  [uncore interconnect]  event=0x45,umask=0x1
    Cycles PHOLD Assert to Ack : Assert to ACK. PHOLD cycles.

unc_u_racu_requests  [uncore interconnect]  event=0x46
    RACU Request. Number of outstanding register requests within the message channel tracker.
uncore_iio_free_running

unc_iio_bandwidth_in.part{0-7}_freerun  [uncore io]  event=0xff, umask=0x20+N (part0=0x20 ... part7=0x27)
    Free-running counter that increments for every 32 bytes of data sent from the IO agent to the SOC.

unc_iio_bandwidth_out.part{0-7}_freerun  [uncore io]  event=0xff, umask=0x30+N (part0=0x30 ... part7=0x37)
    Free-running counter that increments for every 32 bytes of data sent from the SOC to the IO agent.

unc_iio_clockticks  [uncore io]  event=0x1
    IIO Clockticks. Number of IIO clock cycles while the event is enabled.

unc_iio_clockticks_freerun  [uncore io]  event=0xff,umask=0x10
    Free-running counter that increments for IIO clockticks.

Card-to-part mapping for the per-part IIO events below (ch_mask bit N selects part N): part0 = x16 card on lanes 0-3, or x8 card on lanes 0-1, or x4 card in slot 0; part1 = x4 card in slot 1; part2 = x8 card on lanes 2-3, or x4 card in slot 2; part3 = x4 card in slot 3; part4 = x16 card on lanes 4-7, or x8 card on lanes 4-5, or x4 card in slot 4; part5 = x4 card in slot 5; part6 = x8 card on lanes 6-7, or x4 card in slot 6; part7 = x4 card in slot 7.

unc_iio_comp_buf_inserts.cmpd.all_parts  [uncore io]  event=0xc2,ch_mask=0xff,fc_mask=7,umask=0x4
    PCIe Completion Buffer Inserts of completions with data : Part 0-7.

unc_iio_comp_buf_inserts.cmpd.part{0-7}  [uncore io]  event=0xc2,fc_mask=7, ch_mask=1<<N, umask=0x7001004 ... 0x7080004 (0x7000004 | ch_mask<<12)
    PCIe Completion Buffer Inserts of completions with data, per part; see the card-to-part mapping above.
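Because these free-running counters tick once per 32 bytes, bandwidth falls out of two reads. A sketch with placeholder samples (the exact uncore_iio_free_running PMU name under perf is an assumption to verify on the target system):

```python
# Hypothetical begin/end samples of unc_iio_bandwidth_in.part0_freerun
# (event=0xff,umask=0x20) taken one second apart.
count_begin = 10_000_000
count_end = 73_500_000
interval_sec = 1.0

# Each increment represents 32 bytes of inbound (IO agent to SoC) data.
inbound_bytes = (count_end - count_begin) * 32
print(f"inbound bandwidth, part 0: {inbound_bytes / interval_sec / 1e6:.1f} MB/s")
```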
unc_iio_comp_buf_occupancy.cmpd.all_parts  [uncore io]  event=0xd5,fc_mask=7,umask=0xff
    PCIe Completion Buffer Occupancy, all parts.

unc_iio_comp_buf_occupancy.cmpd.part{0-7}  [uncore io]  event=0xd5,fc_mask=7, umask=0x7000001 ... 0x7000080 (0x7000000 | 1<<N)
    PCIe Completion Buffer Occupancy, per part; see the card-to-part mapping above.
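The completion-buffer inserts and occupancy events pair the same way as the UPI flit-buffer events: accumulated occupancy over allocations gives average residency. A sketch with placeholder counts from one interval:

```python
# Hypothetical counts for one IIO stack over one interval.
occupancy_sum = 12_000_000   # unc_iio_comp_buf_occupancy.cmpd.all_parts (event=0xd5)
inserts = 800_000            # unc_iio_comp_buf_inserts.cmpd.all_parts   (event=0xc2)
clockticks = 1_000_000_000   # unc_iio_clockticks (event=0x1)

# Little's law: how long a completion with data sits in the buffer on average.
print(f"avg completion residency: {occupancy_sum / inserts:.1f} IIO clocks")
print(f"avg buffer occupancy:     {occupancy_sum / clockticks:.3f} entries")
```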
unc_iio_data_req_by_cpu.mem_read.all_parts  [uncore io]  event=0xc0,ch_mask=0xff,fc_mask=7,umask=0x4
    Read request for 4 bytes made by the CPU to IIO Part0-7.

unc_iio_data_req_by_cpu.mem_read.part{0-7}  [uncore io]  event=0xc0,fc_mask=7, ch_mask=1<<N, umask=0x7001004 ... 0x7080004 (0x7000004 | ch_mask<<12)
    Data requested by the CPU : Core reading from the card's MMIO space. Number of DWs (4 bytes) requested by the main die; includes all requests initiated by the main die, including reads and writes. See the card-to-part mapping above.
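Since each count is a 4-byte DW, CPU MMIO read traffic converts directly to bytes. A sketch with a placeholder count:

```python
# Hypothetical DW count over one 1-second interval.
mmio_read_dws = 2_500_000   # unc_iio_data_req_by_cpu.mem_read.all_parts (event=0xc0,umask=0x4)
interval_sec = 1.0

# One count per 4-byte DW read by cores from device MMIO space.
print(f"CPU MMIO read bandwidth: {mmio_read_dws * 4 / interval_sec / 1e6:.2f} MB/s")
```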
unc_iio_data_req_by_cpu.mem_write.all_parts  [uncore io]  event=0xc0,ch_mask=0xff,fc_mask=7,umask=0x1
    Write request of 4 bytes made to IIO Part0-7 by the CPU.

unc_iio_data_req_by_cpu.mem_write.iommu0  [uncore io]  event=0xc0,ch_mask=0x100,fc_mask=7,umask=0x1
    Data requested by the CPU : Core writing to the card's MMIO space. Number of DWs (4 bytes) requested by the main die; includes all requests initiated by the main die, including reads and writes. IOMMU - Type 0.

unc_iio_data_req_by_cpu.mem_write.iommu1  [uncore io]  event=0xc0,ch_mask=0x200,fc_mask=7,umask=0x1
    As above, for IOMMU - Type 1.

unc_iio_data_req_by_cpu.mem_write.part{0-7}  [uncore io]  event=0xc0,fc_mask=7,umask=0x1, ch_mask=1<<N
    Data requested by the CPU : Core writing to the card's MMIO space. Number of DWs (4 bytes) requested by the main die; includes all requests initiated by the main die, including reads and writes. See the card-to-part mapping above.
unc_iio_data_req_by_cpu.peer_read.part{0-7}  [uncore io]  event=0xc0,fc_mask=7, ch_mask=1<<N, umask=0x7001008 ... 0x7080008 (0x7000008 | ch_mask<<12)
    Data requested by the CPU : Another card (in a different IIO stack) reading from this card. Number of DWs (4 bytes) requested by the main die; includes all requests initiated by the main die, including reads and writes. See the card-to-part mapping above.
unc_iio_data_req_by_cpu.peer_write.part{0-7}  [uncore io]  event=0xc0,fc_mask=7, ch_mask=1<<N, umask=0x7001002 ... 0x7080002 (0x7000002 | ch_mask<<12)
    Data requested by the CPU : Another card (in a different IIO stack) writing to this card. Number of DWs (4 bytes) requested by the main die; includes all requests initiated by the main die, including reads and writes. See the card-to-part mapping above.
unc_iio_data_req_of_cpu.cmpd.all_parts  [uncore io]  event=0x83,ch_mask=0xff,fc_mask=7,umask=0x80
    Data requested of the CPU : CmpD - device sending completion to CPU request. Number of DWs (4 bytes) the card requests of the main die; includes all requests initiated by the card, including reads and writes.

unc_iio_data_req_of_cpu.cmpd.part{0-7}  [uncore io]  event=0x83,fc_mask=7,umask=0x80, ch_mask=1<<N
    As above, per part; see the card-to-part mapping above.
unc_iio_data_req_of_cpu.mem_read.all_parts  [uncore io]  event=0x83,ch_mask=0xff,fc_mask=7,umask=0x4
    Read request for 4 bytes made by IIO Part0-7 to Memory.

unc_iio_data_req_of_cpu.mem_read.part{0-7}  [uncore io]  event=0x83,fc_mask=7,umask=0x4, ch_mask=1<<N
    Data requested of the CPU : Card reading from DRAM. Number of DWs (4 bytes) the card requests of the main die; includes all requests initiated by the card, including reads and writes. See the card-to-part mapping above.
unc_iio_data_req_of_cpu.mem_write.all_parts  [uncore io]  event=0x83,ch_mask=0xff,fc_mask=7,umask=0x1
    Write request of 4 bytes made by IIO Part0-7 to Memory.

unc_iio_data_req_of_cpu.mem_write.part{0-7}  [uncore io]  event=0x83,fc_mask=7,umask=0x1, ch_mask=1<<N
    Data requested of the CPU : Card writing to DRAM. Number of DWs (4 bytes) the card requests of the main die; includes all requests initiated by the card, including reads and writes. See the card-to-part mapping above.
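The device-initiated DW counts above give inbound DMA bandwidth the same way: 4 bytes per count. A sketch with placeholder counts from one interval:

```python
# Hypothetical DW counts over one 1-second interval, all parts.
dma_read_dws = 40_000_000    # unc_iio_data_req_of_cpu.mem_read.all_parts  (event=0x83,umask=0x4)
dma_write_dws = 90_000_000   # unc_iio_data_req_of_cpu.mem_write.all_parts (event=0x83,umask=0x1)
interval_sec = 1.0

# Each count is one 4-byte DW the device moved to or from DRAM.
print(f"device reads from DRAM: {dma_read_dws * 4 / interval_sec / 1e6:.1f} MB/s")
print(f"device writes to DRAM:  {dma_write_dws * 4 / interval_sec / 1e6:.1f} MB/s")
```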
unc_iio_data_req_of_cpu.peer_write.part{0-7}  [uncore io]  event=0x83,fc_mask=7,umask=0x2, ch_mask=1<<N
    Data requested of the CPU : Card writing to another card (same or different stack). Number of DWs (4 bytes) the card requests of the main die; includes all requests initiated by the card, including reads and writes. See the card-to-part mapping above.
unc_iio_inbound_arb_req.*  [uncore io]  event=0x86,ch_mask=0xff,fc_mask=7
    Incoming arbitration requests: how often different queues (e.g. channel / fc) ask to send a request into the pipeline.
    umask: iommu_req=0x1 (Issuing to IOMMU), iommu_hit=0x2 (Processing response from IOMMU), req_own=0x70ff004 (Request Ownership; only for posted requests), final_rd_wr=0x8 (Issuing final read or write of line), wr=0x70ff010 (Writing line; only for posted requests), data=0x70ff020 (Passing data to be written; only for posted requests).
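Pairing each arbitration-request umask with the matching grant event below gives a grant ratio. A sketch with placeholder counts, assuming both events were sampled over the same interval:

```python
# Hypothetical counts over one interval for one IIO stack.
arb_requests = 5_000_000   # unc_iio_inbound_arb_req.req_own (event=0x86)
arb_grants = 4_600_000     # unc_iio_inbound_arb_won.req_own (event=0x87)

# Fraction of ownership-request arbitration attempts that were granted;
# a low ratio suggests inbound queues are contending for the pipeline.
print(f"arbitration grant ratio: {arb_grants / arb_requests:.1%}")
```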
unc_iio_inbound_arb_won.*  [uncore io]  event=0x87,ch_mask=0xff,fc_mask=7
    Incoming arbitration requests granted: how often different queues (e.g. channel / fc) are allowed to send a request into the pipeline.
    umask: iommu_req=0x70ff001 (Issuing to IOMMU), iommu_hit=0x70ff002 (Processing response from IOMMU), req_own=0x70ff004 (Request Ownership; only for posted requests), final_rd_wr=0x70ff008 (Issuing final read or write of line), wr=0x70ff010 (Writing line; only for posted requests), data=0x70ff020 (Passing data to be written; only for posted requests).

unc_iio_iommu0.first_lookups  [uncore io]  event=0x40,umask=0x1
    IOTLB lookups, first. Some transactions have to look up the IOTLB multiple times; counts the first time a request looks up the IOTLB.

unc_iio_iommu0.4k_hits  [uncore io]  event=0x40,umask=0x4
    IOTLB Hits to a 4K Page. Counts if a transaction to a 4K page, on its first lookup, hits the IOTLB.

unc_iio_iommu0.2m_hits  [uncore io]  event=0x40,umask=0x8
    IOTLB Hits to a 2M Page. Counts if a transaction to a 2M page, on its first lookup, hits the IOTLB.

unc_iio_iommu0.1g_hits  [uncore io]  event=0x40,umask=0x10
    IOTLB Hits to a 1G Page. Counts if a transaction to a 1G page, on its first lookup, hits the IOTLB.

unc_iio_iommu0.misses  [uncore io]  event=0x40,umask=0x20
    IOTLB Fills (same as IOTLB miss). When a transaction misses the IOTLB, it does a page walk to look up memory and bring in the relevant page translation; counts when this page translation is written to the IOTLB.

unc_iio_iommu0.ctxt_cache_lookups  [uncore io]  event=0x40,umask=0x40
    Context cache lookups. Counts each time a transaction looks up the root context cache.

unc_iio_iommu0.ctxt_cache_hits  [uncore io]  event=0x40,umask=0x80
    Context cache hits. Counts each time a first lookup of the transaction hits the RCC.
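Because fills are counted when a walked translation is written back into the IOTLB, fills over first lookups approximates the first-lookup miss rate. A sketch with placeholder counts:

```python
# Hypothetical counts over one interval.
first_lookups = 1_000_000   # unc_iio_iommu0.first_lookups (event=0x40,umask=0x1)
iotlb_fills = 150_000       # unc_iio_iommu0.misses        (event=0x40,umask=0x20)

# First-lookup miss and hit rates for device address translation.
miss_rate = iotlb_fills / first_lookups
print(f"IOTLB first-lookup miss rate: {miss_rate:.1%}")
print(f"IOTLB first-lookup hit rate:  {1 - miss_rate:.1%}")
```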
unc_iio_iommu1.pwt_cache_lookups  [uncore io]  event=0x41,umask=0x1
    PageWalk cache lookup. Counts each time a transaction looks up the second-level page walk cache.

unc_iio_iommu1.pwc_2m_hits  [uncore io]  event=0x41,umask=0x2
    PWC Hit to a 2M page. Counts each time a transaction's first lookup hits the SLPWC at the 2M level.

unc_iio_iommu1.pwc_1g_hits / slpwc_1g_hits  [uncore io]  event=0x41,umask=0x4
    PWC Hit to a 1G page. Counts each time a transaction's first lookup hits the SLPWC at the 1G level.

unc_iio_iommu1.pwc_512g_hits / slpwc_512g_hits  [uncore io]  event=0x41,umask=0x8
    PWC Hit to a 512G page. Counts each time a transaction's first lookup hits the SLPWC at the 512G level.

unc_iio_iommu1.pwc_256t_hits / slpwc_256t_hits  [uncore io]  event=0x41,umask=0x10
    PWC Hit to a 256T page. Counts each time a transaction's first lookup hits the SLPWC at the 256T level.

unc_iio_iommu1.pwc_cache_fills  [uncore io]  event=0x41,umask=0x20
    PageWalk cache fill. When a transaction misses the SLPWC, it does a page walk to look up memory and bring in the relevant page translation. When this page translation is written to the SLPWC, ObsPwcFillValid_nnnH is asserted.

unc_iio_iommu1.num_mem_accesses  [uncore io]  event=0x41,umask=0xc0
    IOMMU memory access. The IOMMU sends out memory fetches when it misses the cache lookup, which is indicated by this signal. M2IOSF only uses the low-priority channel.

unc_iio_iommu3.pwt_occupancy_msb  [uncore io]  event=0x43,umask=0x1
    Global IOTLB invalidation cycles. Indicates that the IOMMU is doing a global invalidation.

unc_iio_mask_match_and.*  [uncore io]  event=0x2
    AND Mask/match for debug bus; asserted if all bits specified by the mask match.
    umask: bus0=0x1 (Non-PCIE bus), bus1=0x2 (PCIE bus), bus0_not_bus1=0x4 (Non-PCIE bus and !(PCIE bus)), bus0_bus1=0x8 (Non-PCIE bus and PCIE bus), not_bus0_bus1=0x10 (!(Non-PCIE bus) and PCIE bus), not_bus0_not_bus1=0x20 (!(Non-PCIE bus) and !(PCIE bus)).
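The per-level SLPWC hit events divide naturally by the lookup count to show where walks are being cut short. A sketch with placeholder counts from one interval:

```python
# Hypothetical counts over one interval.
pwt_lookups = 200_000     # unc_iio_iommu1.pwt_cache_lookups (event=0x41,umask=0x1)
hits = {                  # per-level SLPWC first-lookup hits
    "2M": 120_000,        # unc_iio_iommu1.pwc_2m_hits   (umask=0x2)
    "1G": 40_000,         # unc_iio_iommu1.pwc_1g_hits   (umask=0x4)
    "512G": 20_000,       # unc_iio_iommu1.pwc_512g_hits (umask=0x8)
    "256T": 5_000,        # unc_iio_iommu1.pwc_256t_hits (umask=0x10)
}

# Share of page-walk-cache lookups that hit at each level; the remainder
# continue walking and may show up as pwc_cache_fills.
for level, count in hits.items():
    print(f"SLPWC hit at {level:>4} level: {count / pwt_lookups:.1%}")
```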
unc_iio_mask_match_or.*  [uncore io]  event=0x3
    OR Mask/match for debug bus; asserted if any bits specified by the mask match.
    umask: bus0=0x1 (Non-PCIE bus), bus1=0x2 (PCIE bus), bus0_not_bus1=0x4 (Non-PCIE bus and !(PCIE bus)), bus0_bus1=0x8 (Non-PCIE bus and PCIE bus), not_bus0_bus1=0x10 (!(Non-PCIE bus) and PCIE bus), not_bus0_not_bus1=0x20 (!(Non-PCIE bus) and !(PCIE bus)).

unc_iio_num_req_of_cpu.commit.all  [uncore io]  event=0x85,ch_mask=0xfff,fc_mask=7,umask=0x1
    Number of requests PCIe makes of the main die : All. Counts full PCIe requests before they are broken into a series of cacheline-sized requests as measured by DATA_REQ_OF_CPU and TXN_REQ_OF_CPU.

unc_iio_num_req_of_cpu_by_tgt.*  [uncore io]  event=0x8e,ch_mask=0xff,fc_mask=7
    Number of requests sent by PCIe, by target; umask: msgb=0x1 (MsgB), mcast=0x2 (Multi-cast), ubox=0x4 (Ubox), mem=0x8 (Memory), rem_p2p=0x10 (Remote P2P), loc_p2p=0x20 (Local P2P), confined_p2p=0x40 (Confined P2P), abort=0x80 (Abort).

unc_iio_num_tgt_matched_req_of_cpu  [uncore io]  event=0x8f
    ITC address map 1. UNC_IIO_NUM_TGT_MATCHED_REQ_OF_CPU.

unc_iio_outbound_cl_reqs_issued.to_io  [uncore io]  event=0xd0,ch_mask=0xff,fc_mask=7,umask=0x8
    Outbound cacheline requests issued : 64B requests issued to device. Each outbound cacheline-granular request may need to make multiple passes through the pipeline; each time a cacheline completes all of its passes, it advances the line.

unc_iio_outbound_tlp_reqs_issued.to_io  [uncore io]  event=0xd1,ch_mask=0xff,fc_mask=7,umask=0x8
    Outbound TLP (transaction layer packet) requests issued : To device. Each time an outbound request completes all of its passes, it advances the pointer.

unc_iio_pwt_occupancy  [uncore io]  event=0x42,umask=0xff
    PWT occupancy: indicates how many page walks are outstanding at any point in time. Does not include the 9th bit of occupancy (will undercount if PWT occupancy is greater than 255 in a cycle).
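Dividing accumulated PWT occupancy by IIO clockticks gives the average number of page walks in flight, assuming the occupancy event accumulates once per IIO clock. A sketch with placeholder counts:

```python
# Hypothetical counts over one interval.
pwt_occupancy_sum = 5_000_000   # unc_iio_pwt_occupancy (event=0x42,umask=0xff)
clockticks = 1_000_000_000      # unc_iio_clockticks (event=0x1)

# Average page walks in flight; note the counter drops the 9th occupancy
# bit, so this underestimates when more than 255 walks are outstanding.
print(f"avg outstanding page walks: {pwt_occupancy_sum / clockticks:.4f}")
```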
unc_iio_req_from_pcie_cl_cmpl.*  [uncore io]  event=0x91,ch_mask=0xff,fc_mask=7
    PCIe Request complete (cacheline granular). Each PCIe request is broken down into a series of cacheline-granular requests, and each cacheline-sized request may need to make multiple passes through the pipeline (e.g. for posted interrupts or multi-cast). Each time a single PCIe request completes all of its cacheline-granular requests, it advances the pointer. Only for posted requests.
    umask: req_own=0x70ff004 (Request Ownership), final_rd_wr=0x70ff008 (Issuing final read or write of line), wr=0x70ff010 (Writing line), data=0x70ff020 (Passing data to be written).

unc_iio_req_from_pcie_cmpl.*  [uncore io]  event=0x92,ch_mask=0xff,fc_mask=7
    Processing response from IOMMU. Only for posted requests.
    umask: iommu_req=0x70ff001 (Issuing to IOMMU), iommu_hit=0x70ff002 (Processing response from IOMMU), req_own=0x70ff004 (Request Ownership), final_rd_wr=0x70ff008 (Issuing final read or write of line).

unc_iio_req_from_pcie_pass_cmpl.*  [uncore io]  event=0x90,ch_mask=0xff,fc_mask=7
    PCIe Request - pass complete. Each PCIe request is broken down into a series of cacheline-granular requests, and each cacheline-sized request may need to make multiple passes through the pipeline (e.g. for posted interrupts or multi-cast). Each time a cacheline completes a single pass (e.g. posts a write to a single multi-cast target), it advances state. req_own, wr, and data are only for posted requests.
    umask: req_own=0x70ff004 (Request Ownership), final_rd_wr=0x8 (Issuing final read or write of line), wr=0x70ff010 (Writing line), data=0x70ff020 (Passing data to be written).
unc_iio_txn_req_by_cpu.mem_read.part0-7: Number Transactions requested by the CPU : Core reading from Card's MMIO space (a read request for up to a 64 byte transaction made by the CPU to IIO PartN). Also known as Outbound: requests initiated by the main die, including reads and writes. event=0xc1,fc_mask=7,umask=4; ch_mask selects the part:
  part0: ch_mask=1    (x16 card plugged in to stack, or x8 card in Lane 0/1, or x4 card in slot 0)
  part1: ch_mask=2    (x4 card in slot 1)
  part2: ch_mask=4    (x8 card in Lane 2/3, or x4 card in slot 1)
  part3: ch_mask=8    (x4 card in slot 3)
  part4: ch_mask=0x10 (x16 card plugged in to stack, or x8 card in Lane 0/1, or x4 card in slot 0)
  part5: ch_mask=0x20 (x4 card in slot 1)
  part6: ch_mask=0x40 (x8 card in Lane 2/3, or x4 card in slot 1)
  part7: ch_mask=0x80 (x4 card in slot 3)

unc_iio_txn_req_by_cpu.mem_write.part0-7: Number Transactions requested by the CPU : Core writing to Card's MMIO space (a write request of up to a 64 byte transaction made to IIO PartN by the CPU). Also known as Outbound. event=0xc1,fc_mask=7,umask=1; same part-to-ch_mask mapping and slot notes as mem_read above.

unc_iio_txn_req_by_cpu.peer_write.part0-7: Number Transactions requested by the CPU : Another card (different IIO stack) writing to this card. Also known as Outbound. event=0xc1,fc_mask=7; per part:
  part0: ch_mask=1,    umask=0x7001002 (x16 card in Lane 0/1/2/3, or x8 card in Lane 0/1, or x4 card in slot 0)
  part1: ch_mask=2,    umask=0x7002002 (x4 card in slot 1)
  part2: ch_mask=4,    umask=0x7004002 (x8 card in Lane 2/3, or x4 card in slot 2)
  part3: ch_mask=8,    umask=0x7008002 (x4 card in slot 3)
  part4: ch_mask=0x10, umask=0x7010002 (x16 card in Lane 4/5/6/7, or x8 card in Lane 4/5, or x4 card in slot 4)
  part5: ch_mask=0x20, umask=0x7020002 (x4 card in slot 5)
  part6: ch_mask=0x40, umask=0x7040002 (x8 card in Lane 6/7, or x4 card in slot 6)
  part7: ch_mask=0x80, umask=0x7080002 (x4 card in slot 7)

unc_iio_txn_req_of_cpu.cmpd.part0-7: Number Transactions requested of the CPU : CmpD - device sending completion to CPU request. Also known as Inbound: 64B cache line requests initiated by the Card, including reads and writes. event=0x84,fc_mask=7,umask=0x80; same part-to-ch_mask mapping and slot notes as peer_write above.
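Since each counted transaction above is "up to a 64 byte transaction", summing the eight mem_write parts and multiplying by 64 gives an upper bound on CPU-to-card MMIO write bandwidth. A sketch reusing the count_uncore_event helper from the first example:

```python
# Upper-bound estimate of CPU -> PCIe-card MMIO write bandwidth for one stack.
# Each transaction carries *up to* 64 bytes, so 64 B/transaction is a ceiling,
# not an exact size.
SECONDS = 1.0

def mmio_write_bw_upper_bound() -> float:
    total = sum(
        count_uncore_event(f"unc_iio_txn_req_by_cpu.mem_write.part{p}", SECONDS)
        for p in range(8)
    )
    return total * 64 / SECONDS  # bytes/second, upper bound

print(f"{mmio_write_bw_upper_bound() / 1e6:.1f} MB/s (upper bound)")
```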
unc_iio_txn_req_of_cpu.mem_read.part0-7: Number Transactions requested of the CPU : Card reading from DRAM (a read request for up to a 64 byte transaction made by IIO PartN to Memory). Also known as Inbound: 64B cache line requests initiated by the Card, including reads and writes. event=0x84,fc_mask=7,umask=4; same part-to-ch_mask mapping and slot notes as unc_iio_txn_req_by_cpu.mem_read above.

unc_iio_txn_req_of_cpu.mem_write.part0-7: Number Transactions requested of the CPU : Card writing to DRAM (a write request of up to a 64 byte transaction made by IIO PartN to Memory). Also known as Inbound. event=0x84,fc_mask=7,umask=1; same mapping and slot notes as mem_read.

unc_iio_txn_req_of_cpu.peer_write.part0-7: Number Transactions requested of the CPU : Card writing to another Card (same or different stack). Also known as Inbound. event=0x84,fc_mask=7,umask=2; same part-to-ch_mask mapping and slot notes as unc_iio_txn_req_by_cpu.peer_write above.
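The inbound events are defined as counts of 64B cache line requests initiated by the card, so device-initiated DRAM traffic in bytes is simply count x 64. A sketch, again reusing count_uncore_event:

```python
# Device-initiated (inbound) DRAM bandwidth: the table above defines these as
# counts of 64B cache line requests initiated by the card, so bytes = count * 64.
SECONDS = 1.0

def inbound_dram_bw() -> float:
    reads = sum(count_uncore_event(f"unc_iio_txn_req_of_cpu.mem_read.part{p}", SECONDS)
                for p in range(8))
    writes = sum(count_uncore_event(f"unc_iio_txn_req_of_cpu.mem_write.part{p}", SECONDS)
                 for p in range(8))
    return (reads + writes) * 64 / SECONDS  # bytes/second

print(f"{inbound_dram_bw() / 1e9:.2f} GB/s inbound")
```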
Unit: uncore_m2pcie (topic: uncore io)

unc_m2p_clockticks: M2P Clockticks. event=1. Number of M2P clock cycles while the event is enabled.

unc_m2p_cms_clockticks: CMS Clockticks. event=0xc0.

unc_m2p_egress_ordering.*: Egress Blocking due to Ordering requirements; counts the number of cycles IV was blocked in the TGR Egress due to SNP/GO ordering requirements. event=0xba: .iv_snoopgo_up (Up) umask=1, .iv_snoopgo_dn (Down) umask=4.

Background for the three IIO-credit event groups below: transactions from the BL ring going into the IIO agent must first acquire a credit in either the NCB or NCS message class. NCB (non-coherent bypass) messages transmit data without coherency and are common; NCS is used for reads to PCIe and should be used sparingly.

unc_m2p_iio_credits_acquired.*: M2PCIe IIO Credit Acquired; counts credits acquired in the M2PCIe agent for sending transactions into the IIO, per message class and CMS port. event=0x33:
  .drs_0 umask=1, .drs_1 umask=2 (DRS)
  .ncb_0 umask=4, .ncb_1 umask=8 (NCB)
  .ncs_0 umask=0x10, .ncs_1 umask=0x20 (NCS)

unc_m2p_iio_credits_reject.*: M2PCIe IIO Failed to Acquire a Credit; counts the number of times a request pending in the BL Ingress attempted to acquire an NCB or NCS credit to transmit into the IIO but was rejected because no credits were available. event=0x34: .drs umask=8, .ncb umask=0x10, .ncs umask=0x20.

unc_m2p_iio_credits_used.*: M2PCIe IIO Credits in Use; counts cycles when one or more of these credits are in use. event=0x32:
  .drs_0 umask=1 (DRS to CMS Port 0), .drs_1 umask=2 (DRS to CMS Port 1)
  .ncb_0 umask=4, .ncb_1 umask=8 (NCB to CMS Port 0/1)
  .ncs_0 umask=0x10, .ncs_1 umask=0x20 (NCS to CMS Port 0/1)

unc_m2p_local_ded_p2p_crd_taken_0.*: Local Dedicated P2P Credit Taken - 0. event=0x46; umask selects M2IOSF x message class: .m2iosf0_ncb=1, .m2iosf0_ncs=2, .m2iosf1_ncb=4, .m2iosf1_ncs=8, .m2iosf2_ncb=0x10, .m2iosf2_ncs=0x20, .m2iosf3_ncb=0x40, .m2iosf3_ncs=0x80.

unc_m2p_local_ded_p2p_crd_taken_1.*: Local Dedicated P2P Credit Taken - 1. event=0x47: .m2iosf4_ncb=1, .m2iosf4_ncs=2, .m2iosf5_ncb=4, .m2iosf5_ncs=8.

unc_m2p_local_p2p_ded_returned_0.*: Local P2P Dedicated Credits Returned - 0. event=0x19; same M2IOSF0-3 x NCB/NCS umask layout as crd_taken_0 (suffixes spelled .ms2iosf0_ncb ... .ms2iosf3_ncs).

unc_m2p_local_p2p_ded_returned_1.*: Local P2P Dedicated Credits Returned - 1. event=0x1a: .ms2iosf4_ncb=1, .ms2iosf4_ncs=2, .ms2iosf5_ncb=4, .ms2iosf5_ncs=8.
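A quick way to spot IIO credit starvation from the acquired/reject pairs above is to compare rejects against acquired credits per message class. A sketch (event names as listed; reuses count_uncore_event):

```python
# Rough IIO-credit starvation check: rejects per acquired credit, per message
# class. The _0/_1 suffixes are the two CMS ports listed in the table above.
PAIRS = {
    "DRS": (["unc_m2p_iio_credits_acquired.drs_0", "unc_m2p_iio_credits_acquired.drs_1"],
            "unc_m2p_iio_credits_reject.drs"),
    "NCB": (["unc_m2p_iio_credits_acquired.ncb_0", "unc_m2p_iio_credits_acquired.ncb_1"],
            "unc_m2p_iio_credits_reject.ncb"),
    "NCS": (["unc_m2p_iio_credits_acquired.ncs_0", "unc_m2p_iio_credits_acquired.ncs_1"],
            "unc_m2p_iio_credits_reject.ncs"),
}

for mc, (acq_events, rej_event) in PAIRS.items():
    acquired = sum(count_uncore_event(e) for e in acq_events)
    rejected = count_uncore_event(rej_event)
    ratio = rejected / acquired if acquired else 0.0
    print(f"{mc}: {rejected} rejects / {acquired} acquired ({ratio:.2%})")
```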
unc_m2p_local_p2p_shar_returned.*: Local P2P Shared Credits Returned. event=0x17: .agent_0 umask=1, .agent_1 umask=2, .agent_2 umask=4.

unc_m2p_local_shar_p2p_crd_returned.*: Local Shared P2P Credit Returned to credit ring. event=0x44: .agent_0=1, .agent_1=2, .agent_2=4, .agent_3=8, .agent_4=0x10, .agent_5=0x20.

unc_m2p_local_shar_p2p_crd_taken_0.* / _1.*: Local Shared P2P Credit Taken. event=0x40 / 0x41; same M2IOSF x NCB/NCS umask layouts as unc_m2p_local_ded_p2p_crd_taken_0/_1 above.

unc_m2p_local_shar_p2p_crd_wait_0.* / _1.*: Waiting on Local Shared P2P Credit. event=0x4a / 0x4b; same M2IOSF x NCB/NCS umask layouts.

unc_m2p_p2p_crd_occupancy.*: P2P Credit Occupancy. event=0x14: .local_ncb umask=1, .local_ncs=2, .remote_ncb=4, .remote_ncs=8, .all=0x10.

unc_m2p_p2p_ded_received.*: Dedicated Credits Received. event=0x16; same umask layout as p2p_crd_occupancy.

unc_m2p_p2p_shar_received.*: Shared Credits Received. event=0x15; same umask layout.

unc_m2p_remote_ded_p2p_crd_taken_0.*: Remote Dedicated P2P Credit Taken - 0. event=0x48; umask selects UPI link x message class: .upi0_drs=1, .upi0_ncb=2, .upi0_ncs=4, .upi1_drs=8, .upi1_ncb=0x10, .upi1_ncs=0x20.

unc_m2p_remote_ded_p2p_crd_taken_1.*: Remote Dedicated P2P Credit Taken - 1. event=0x49: .upi2_drs=1, .upi2_ncb=2, .upi2_ncs=4.

unc_m2p_remote_p2p_ded_returned.*: Remote P2P Dedicated Credits Returned. event=0x1b: .upi0_ncb=1, .upi0_ncs=2, .upi1_ncb=4, .upi1_ncs=8, .upi2_ncb=0x10, .upi2_ncs=0x20.

unc_m2p_remote_p2p_shar_returned.*: Remote P2P Shared Credits Returned. event=0x18: .agent_0=1, .agent_1=2, .agent_2=4.

unc_m2p_remote_shar_p2p_crd_returned.*: Remote Shared P2P Credit Returned to credit ring. event=0x45: .agent_0=1, .agent_1=2, .agent_2=4.

unc_m2p_remote_shar_p2p_crd_taken_0.* / _1.*: Remote Shared P2P Credit Taken. event=0x42 / 0x43; same UPI x DRS/NCB/NCS umask layouts as remote_ded_p2p_crd_taken_0/_1 above.

unc_m2p_remote_shar_p2p_crd_wait_0.* / _1.*: Waiting on Remote Shared P2P Credit. event=0x4c / 0x4d; same UPI x DRS/NCB/NCS umask layouts.
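Assuming, as the names suggest, that the *_crd_wait_* events count cycles spent waiting for a credit (the table does not state this explicitly), the waiting fraction for one M2IOSF/message-class pair can be normalized against unc_m2p_clockticks. A sketch reusing count_uncore_event:

```python
# Fraction of M2P cycles spent waiting on a local shared P2P credit for one
# M2IOSF/message-class pair. The cycles interpretation of *_crd_wait_* is an
# assumption based on the event names.
wait = count_uncore_event("unc_m2p_local_shar_p2p_crd_wait_0.m2iosf0_ncb")
clk = count_uncore_event("unc_m2p_clockticks")
print(f"waiting {wait / clk:.2%} of M2P cycles" if clk else "no clockticks sampled")
```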
unc_m2p_rxc_cycles_ne.*: Ingress (from CMS) Queue Cycles Not Empty; counts cycles when the M2PCIe Ingress is not empty. event=0x10; umask selects the source/message class: .cha_idi=1, .cha_ncb=2, .cha_ncs=4, .upi_ncb=8, .upi_ncs=0x10, .iio_ncb=0x20, .iio_ncs=0x40, .all=0x80.

unc_m2p_rxc_inserts.*: Ingress (from CMS) Queue Inserts; counts entries inserted into the M2PCIe Ingress Queue. Can be used in conjunction with the M2PCIe Ingress Occupancy Accumulator event to calculate average queue latency. event=0x11; same umask layout as rxc_cycles_ne.

unc_m2p_txc_credits.*: UNC_M2P_TxC_CREDITS. event=0x2d: .prq umask=1, .pmm umask=2.

unc_m2p_txc_cycles_full.*: Egress (to CMS) Cycles Full; counts cycles when the M2PCIe Egress is full. Tracks messages for one of the two CMS ports used by the M2PCIe agent. event=0x25: .pmm_block_0 umask=0x80, .pmm_block_1 umask=8.

unc_m2p_txc_cycles_ne.*: Egress (to CMS) Cycles Not Empty; counts cycles when the M2PCIe Egress is not empty. Tracks messages for one of the two CMS ports used by the M2PCIe agent; can be used with the Ingress Occupancy Accumulator event to calculate average queue occupancy, and multiple egress buffers can be tracked at a given time using multiple counters. event=0x23: .pmm_distress_0 umask=0x80, .pmm_distress_1 umask=8.
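The inserts description above is a Little's-Law recipe: average queue latency = accumulated occupancy / inserts, and average occupancy = accumulated occupancy / clockticks. The occupancy accumulator event itself is not listed in this excerpt, so the name used below is an assumption. A sketch reusing count_uncore_event:

```python
# Little's Law on the M2PCIe ingress queue:
#   avg_occupancy = occupancy_accumulator / clockticks   (entries)
#   avg_latency   = occupancy_accumulator / inserts      (cycles per entry)
# "unc_m2p_rxc_occupancy.all" is an assumed name -- the occupancy accumulator
# event is referenced by the table but not listed in this excerpt.
occ = count_uncore_event("unc_m2p_rxc_occupancy.all")   # assumed event name
ins = count_uncore_event("unc_m2p_rxc_inserts.all")
clk = count_uncore_event("unc_m2p_clockticks")
if ins and clk:
    print(f"avg occupancy: {occ / clk:.2f} entries")
    print(f"avg latency:   {occ / ins:.1f} M2P cycles per entry")
```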
uncore_m2hbm (uncore memory)

  unc_m2hbm_clockticks                  event=0x1      Cycles - at UCLK
  unc_m2hbm_cms_clockticks              event=0xc0     CMS Clockticks
  unc_m2hbm_direct2core_not_taken_dirstate               event=0x17,umask=0x7
      Cycles when direct-to-core mode (which bypasses the CHA) was disabled
  unc_m2hbm_direct2core_not_taken_dirstate.non_cisgress  event=0x17,umask=0x2
      Non-cisgress D2C not honored by egress due to directory state constraints
  unc_m2hbm_direct2core_not_taken_notforked              event=0x4a
      Time when FM did not do D2C for fill reads (cross-tile case)
  unc_m2hbm_direct2core_txn_override                     event=0x18,umask=0x3
      Reads in which a direct-to-core transaction was overridden
  unc_m2hbm_direct2core_txn_override.cisgress            event=0x18,umask=0x2
      Reads in which a direct-to-core transaction was overridden: cisgress
  unc_m2hbm_direct2upi_not_taken_credits                 event=0x1b,umask=0x7
      Reads in which direct-to-Intel-UPI transactions were overridden
  unc_m2hbm_direct2upi_not_taken_dirstate                event=0x1a,umask=0x7
      Cycles when direct to Intel UPI was disabled. Variants:
      .egress       umask=0x1  egress ignored D2U (D2K not honored by egress due to directory state constraints)
      .non_cisgress umask=0x2  non-cisgress D2U ignored (non-cisgress D2K not honored due to directory constraints)
      .cisgress     umask=0x4  cisgress D2U ignored (cisgress D2K not honored due to directory constraints)
  unc_m2hbm_direct2upi_txn_override                      event=0x1c,umask=0x3
      Reads where a message sent direct to Intel UPI was overridden
  unc_m2hbm_direct2upi_txn_override.cisgress             event=0x1c,umask=0x2
  unc_m2hbm_directory_hit.{dirty_i,dirty_s,dirty_p,dirty_a,clean_i,clean_s,clean_p,clean_a}
      event=0x1d, umask=0x1/0x2/0x4/0x8/0x10/0x20/0x40/0x80
      Directory hit on a dirty or non-dirty line in the I/S/L/A state, respectively
  unc_m2hbm_directory_miss.{dirty_i,dirty_s,dirty_p,dirty_a,clean_i,clean_s,clean_p,clean_a}
      event=0x1e, same umask and state mapping as directory_hit
  unc_m2hbm_directory_lookup.{any,state_i,state_s,state_a}   event=0x20, umask=0x1/0x2/0x4/0x8
      Multi-socket cacheline directory lookups: hit data returns to egress with
      any / I / S / A directory state, to non-persistent memory
  unc_m2hbm_directory_update.*   event=0x21
      Multi-socket cacheline directory updates. "hit" variants count 1lm or 2lm hit data
      returns, "miss" variants count 2lm miss data returns, all to non-persistent
      memory; the per-transition hit/miss names carry a _non_pmm suffix. Umasks:
        any 0x301, hit_non_pmm 0x101, miss_non_pmm 0x201
        i2s 0x302, i2a 0x304, s2i 0x308, s2a 0x310, a2i 0x320, a2s 0x340
        i_to_s hit 0x102 / miss 0x202, i_to_a hit 0x104 / miss 0x204
        s_to_i hit 0x108 / miss 0x208, s_to_a hit 0x110 / miss 0x210
        a_to_i hit 0x120 / miss 0x220, a_to_s hit 0x140 / miss 0x240
  unc_m2hbm_distress.{all,d2core,d2upi,d2cha,crosstile_nmwr,ad,bl_cmp}
      event=0x67, umask=0x1/0x2/0x4/0x8/0x10/0x20/0x40
      Distress signalled on: any packet type / D2C message / D2K message / D2Cha
      message / NM fill write message / AkAd cmp message / BL cmp message
  unc_m2hbm_egress_ordering.iv_snoopgo_up   event=0xba,umask=0x80000001
  unc_m2hbm_egress_ordering.iv_snoopgo_dn   event=0xba,umask=0x80000004
      Cycles IV was blocked in the TGR Egress due to SNP/GO ordering requirements (up/down)
  unc_m2hbm_igr_starve_winner.mask7         event=0x44,umask=0x80
      Count when the starve global counter is at 7
  unc_m2hbm_imc_reads.*   event=0x24   Reads to iMC issued. Umasks:
      all 0x304, normal 0x301, isoch (critical priority) 0x302, from_tgr 0x340
      ch0_all 0x104, ch0_normal 0x101, ch0_isoch 0x102, ch0_from_tgr 0x140
      ch1_all 0x204, ch1_normal 0x201, ch1_isoch 0x202, ch1_from_tgr 0x240
      (.ch0.all, .ch0.normal, .ch1.all, .ch1.normal are aliases of the underscore names)
  unc_m2hbm_imc_writes.*   event=0x25   Writes to iMC issued. Umasks:
      all 0x1810, full 0x1801, partial 0x1802, full_isoch 0x1804, partial_isoch 0x1808
      ch0_all 0x810, ch0_full 0x801, ch0_partial 0x802, ch0_full_isoch 0x804, ch0_partial_isoch 0x808
      ch1_all 0x1010, ch1_full 0x1001, ch1_partial 0x1002, ch1_full_isoch 0x1004, ch1_partial_isoch 0x1008
      (full/partial variants are non-ISOCH; .ch0.all, .ch0.full, .ch0.partial and the
      ch1 equivalents are aliases; from_tgr, ch0_from_tgr, ch1_from_tgr, ni, ni_miss,
      ch0_ni, ch0_ni_miss, ch1_ni, ch1_ni_miss carry event=0x25 with no umask in this table)
  unc_m2hbm_prefcam_cis_drops               event=0x5c
  unc_m2hbm_prefcam_demand_drops.{ch0_xpt,ch0_upi,ch1_xpt,ch1_upi,xpt_allch,upi_allch}
      event=0x58, umask=0x1/0x2/0x4/0x8/0x5/0xa       Data prefetches dropped
  unc_m2hbm_prefcam_demand_merge.{xpt_allch,upi_allch}        event=0x5d, umask=0x5/0xa
  unc_m2hbm_prefcam_demand_no_merge.{wr_squashed,wr_merged,rd_merged}
      event=0x5e, umask=0x10/0x20/0x40                Demands not merged with CAMed prefetches
  unc_m2hbm_prefcam_inserts.{ch0_xpt,ch0_upi,ch1_xpt,ch1_upi,xpt_allch,upi_allch}
      event=0x56, umask=0x1/0x2/0x4/0x8/0x5/0xa       Prefetch CAM inserts, XPT/UPI per channel and combined
  unc_m2hbm_prefcam_occupancy.{ch0,ch1,allch}         event=0x54, umask=0x1/0x2/0x3
  unc_m2hbm_prefcam_resp_miss.{ch0,ch1,allch}         event=0x5f, umask=0x1/0x2/0x3
  unc_m2hbm_prefcam_rxc_deallocs.{squashed,1lm_posted,cis}    event=0x62, umask=0x1/0x2/0x8
  unc_m2hbm_prefcam_rxc_occupancy           event=0x60
  unc_m2hbm_rxc_ad_inserts (alias .inserts) event=0x2,umask=0x1   AD Ingress (from CMS) allocations
  unc_m2hbm_rxc_ad_occupancy                event=0x3              AD Ingress (from CMS) occupancy
  unc_m2hbm_rxc_bl_inserts (alias .inserts) event=0x4,umask=0x1   BL Ingress (from CMS) allocations
      (counts any time a BL packet is added to the Ingress)
  unc_m2hbm_rxc_bl_occupancy                event=0x5              BL Ingress (from CMS) occupancy
  unc_m2hbm_tgr_ad_credits                  event=0x2e             Number of AD Ingress credits
  unc_m2hbm_tgr_bl_credits                  event=0x2f             Number of BL Ingress credits
  unc_m2hbm_tracker_inserts.{ch0,ch1}       event=0x32, umask=0x104/0x204
  unc_m2hbm_tracker_occupancy.{ch0,ch1}     event=0x33, umask=0x1/0x2
  unc_m2hbm_txc_ad_inserts (alias .inserts) event=0x6,umask=0x1   AD Egress (to CMS) allocations
      (counts any time an AD packet is added to the Egress)
  unc_m2hbm_txc_ad_occupancy                event=0x7              AD Egress (to CMS) occupancy
  unc_m2hbm_txc_bl.inserts_cms0             event=0xe,umask=0x101  BL transactions to CMS add port 0 (near side)
  unc_m2hbm_txc_bl.inserts_cms1             event=0xe,umask=0x201  BL transactions to CMS add port 1 (far side)
  unc_m2hbm_txc_bl_occupancy.{cms0,cms1,all}          event=0xf, umask=0x1/0x2/0x3
      BL Egress (to CMS) occupancy: near-side mesh stop / far-side mesh stop / all
  unc_m2hbm_wpq_flush.{ch0,ch1}             event=0x42, umask=0x1/0x2   WPQ flush per channel
  unc_m2hbm_wpq_no_reg_crd.{chn0,chn1}      event=0x37, umask=0x1/0x2   M2M and iMC WPQ cycles w/ regular credits
  unc_m2hbm_wpq_no_spec_crd.{chn0,chn1}     event=0x38, umask=0x1/0x2   M2M and iMC WPQ cycles w/ special credits
  unc_m2hbm_wr_tracker_inserts.{ch0,ch1}              event=0x40, umask=0x1/0x2
  unc_m2hbm_wr_tracker_nonposted_inserts.{ch0,ch1}    event=0x4d, umask=0x1/0x2
  unc_m2hbm_wr_tracker_nonposted_occupancy.{ch0,ch1}  event=0x4c, umask=0x1/0x2
  unc_m2hbm_wr_tracker_posted_inserts.{ch0,ch1}       event=0x48, umask=0x1/0x2
  unc_m2hbm_wr_tracker_posted_occupancy.{ch0,ch1}     event=0x47, umask=0x1/0x2
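The directory_lookup events above split multi-socket lookups by the state the line was found in, which makes a simple state-share breakdown possible. A hypothetical post-processing helper, with placeholder counter values standing in for the per-state counts:

    # Given per-state directory_lookup counts (I/S/A plus the 'any' total), report the
    # share of multi-socket directory lookups that found the line in each state.
    # All counter values here are placeholders.

    def directory_state_shares(any_cnt: int, state_i: int,
                               state_s: int, state_a: int) -> dict:
        """Fraction of directory lookups (hit data returns to egress) per found state."""
        if any_cnt == 0:
            return {"I": 0.0, "S": 0.0, "A": 0.0}
        return {"I": state_i / any_cnt, "S": state_s / any_cnt, "A": state_a / any_cnt}

    print(directory_state_shares(1_000_000, 700_000, 250_000, 50_000))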
uncore_mchbm (uncore memory)

  unc_mchbm_act_count.{rd_pch0,wr_pch0,ufill_pch0,rd_pch1,wr_pch1,ufill_pch1,rd,wr,ufill,all}
      event=0x2, umask=0x1/0x2/0x4/0x10/0x20/0x40/0x11/0x22/0x44/0xff
      HBM Activate count: activates due to read, write, or underfill read (on page empty
      or page miss), per pseudo channel and combined; .all also includes bypass. Counts
      HBM Activate commands sent on this channel. Activate commands open a page on the
      HBM devices so that it can be read or written with a CAS. The number of page
      misses can be calculated by subtracting the number of page-miss precharges from
      the number of Activates.
  unc_mchbm_cas_count.all              event=0x5,umask=0xff   All CAS commands issued
  unc_mchbm_cas_count.pch0             event=0x5,umask=0x40   Pseudo channel 0
  unc_mchbm_cas_count.pch1             event=0x5,umask=0x80   Pseudo channel 1
  unc_mchbm_cas_count.rd               event=0x5,umask=0xcf   Read CAS commands issued (regular and underfill)
  unc_mchbm_cas_count.rd_reg           event=0x5,umask=0xc1   Regular read CAS commands (does not include underfills)
  unc_mchbm_cas_count.rd_underfill     event=0x5,umask=0xc4   Underfill read CAS commands issued
  unc_mchbm_cas_count.rd_pre_reg       event=0x5,umask=0xc2   Regular read CAS commands with precharge
  unc_mchbm_cas_count.rd_pre_underfill event=0x5,umask=0xc8   Underfill read CAS commands with precharge
  unc_mchbm_cas_count.wr               event=0x5,umask=0xf0   Write CAS commands issued
  unc_mchbm_cas_count.wr_nonpre        event=0x5,umask=0xd0   HBM WR_CAS commands w/o auto-precharge
  unc_mchbm_cas_count.wr_pre           event=0x5,umask=0xe0   Write CAS commands with precharge
  unc_mchbm_cas_issued_req_len.{pch0,pch1}    event=0x6, umask=0x40/0x80   Pseudo channel 0/1
  unc_mchbm_cas_issued_req_len.rd_64b         event=0x6,umask=0xc1   Read CAS in regular mode (64B), pseudochannel 0
  unc_mchbm_cas_issued_req_len.rd_ufill_64b   event=0x6,umask=0xc2   Underfill read CAS in regular mode (64B), pseudochannel 1
  unc_mchbm_cas_issued_req_len.rd_32b         event=0x6,umask=0xc8   Read CAS in interleaved mode (32B)
  unc_mchbm_cas_issued_req_len.rd_ufill_32b   event=0x6,umask=0xd0   Underfill read CAS in interleaved mode (32B)
  unc_mchbm_cas_issued_req_len.wr_64b         event=0x6,umask=0xc4   Write CAS in regular mode (64B), pseudochannel 0
  unc_mchbm_cas_issued_req_len.wr_32b         event=0x6,umask=0xe0   Write CAS in interleaved mode (32B)
  unc_mchbm_clockticks                 event=0x1,umask=0x1    IMC clockticks at DCLK frequency
  unc_mchbm_hclockticks                event=0x1              IMC clockticks at HCLK frequency
  unc_mchbm_hbm_preall.{pch0,pch1}     event=0x44, umask=0x1/0x2
      Precharge-all commands sent, per pseudo channel
  unc_mchbm_hbm_pre_all                event=0x44,umask=0x3   All precharge-all commands sent
  unc_mchbm_pre_count.{rd_pch0,wr_pch0,ufill_pch0,pgt_pch0,rd_pch1,wr_pch1,ufill_pch1,pgt_pch1,rd,wr,ufill,pgt,all}
      event=0x3, umask=0x1/0x2/0x4/0x8/0x10/0x20/0x40/0x80/0x11/0x22/0x44/0x88/0xff
      HBM Precharge commands sent on this channel: precharge due to read on page miss
      (from the read bank scheduler), due to write on page miss (from the write bank
      scheduler), due to underfill, or from the MC page table (PGT, equivalent to
      PAGE_EMPTY), per pseudo channel and combined.
  unc_mchbm_rdb_full                   event=0x19
      Cycles where the read buffer holds more than UMASK elements. NOTE: the umask must
      be set to the maximum number of elements in the queue (24 entries for SPR).
  unc_mchbm_rdb_inserts                event=0x17,umask=0x3   Inserts into the read buffer
  unc_mchbm_rdb_inserts.{pch0,pch1}    event=0x17, umask=0x1/0x2
  unc_mchbm_rdb_occupancy              event=0x1a             Elements in the read buffer per cycle
  unc_mchbm_rpq_inserts.{pch0,pch1}    event=0x10, umask=0x1/0x2
      Read Pending Queue allocations: the RPQ schedules reads out to the memory
      controller and tracks the requests. Requests allocate into the RPQ soon after
      entering the memory controller (credits for an RPQ entry are needed before the HA
      can send the request to the iMC) and deallocate once the CAS command has been
      issued to memory. Includes both ISOCH and non-ISOCH requests.
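The ACT_COUNT and PRE_COUNT descriptions above carry a recipe: PRE_COUNT.RD/WR are precharges issued on page misses, and the Activate description says page misses can be derived by subtracting the page-miss precharges from the Activates. A sketch that implements that wording literally, with placeholder counter values:

    # Page-statistics sketch following the ACT_COUNT/PRE_COUNT guidance above.
    # All counts are placeholders for values read over one measurement interval.

    def page_stats(act_all: int, pre_rd: int, pre_wr: int) -> dict:
        # PRE_COUNT.RD and PRE_COUNT.WR count precharges due to read/write page misses.
        page_miss_precharges = pre_rd + pre_wr
        return {
            "activates": act_all,
            "page_miss_precharges": page_miss_precharges,
            # The Activate description calls this difference the number of page misses.
            "page_misses_per_table": act_all - page_miss_precharges,
        }

    print(page_stats(act_all=5_000_000, pre_rd=1_800_000, pre_wr=900_000))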
  unc_mchbm_rpq_occupancy_pch0         event=0x80
  unc_mchbm_rpq_occupancy_pch1         event=0x81
      Read Pending Queue occupancy: accumulates the RPQ occupancy each cycle. Combined
      with the cycles-not-empty count this gives average occupancy; combined with the
      allocation count it gives average latency. RPQ allocation and deallocation behave
      as described for rpq_inserts above.
  unc_mchbm_wpq_inserts.{pch0,pch1}    event=0x20, umask=0x1/0x2
      Write Pending Queue allocations: combined with the WPQ occupancy count this gives
      the average queuing latency. The WPQ schedules writes out to the memory controller
      and tracks them. Writes allocate into the WPQ soon after entering the memory
      controller (credits for a WPQ entry are needed before the CHA can send the request
      to the iMC) and deallocate after being issued. From the perspective of the rest of
      the system, a write completes as soon as it has posted to the iMC.
  unc_mchbm_wpq_occupancy_pch0         event=0x82
  unc_mchbm_wpq_occupancy_pch1         event=0x83
      Write Pending Queue occupancy: accumulates the WPQ occupancy each cycle, giving
      average occupancy (with cycles-not-empty) and average latency (with allocations).
      Because writes post to the iMC long before the data is actually written, the raw
      average latency of this queue is not useful for deconstructing intermediate write
      latencies, so filtering is provided on whether the request has posted. The
      not-posted filter tracks how long writes sit in the iMC before completions are
      sent to the HA; the posted filter shows how much queueing actually happens in the
      iMC before writes are issued to memory. High average occupancies will generally
      coincide with high write-major-mode counts.
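For reads, the RPQ occupancy/inserts pair gives a meaningful average latency; the matching WPQ calculation is less meaningful because writes post early, as the occupancy description just noted. A sketch converting the RPQ ratio into seconds via the DCLK clockticks event, all values placeholders:

    # RPQ latency sketch: average read residency is occupancy / inserts in DCLK cycles,
    # converted to seconds using the DCLK tick count over the same interval.

    def rpq_read_latency_seconds(rpq_occ: int, rpq_inserts: int,
                                 dclk_ticks: int, interval_s: float) -> float:
        """Average RPQ residency in seconds.

        rpq_occ:     RPQ occupancy accumulator (e.g. unc_mchbm_rpq_occupancy_pch0)
        rpq_inserts: RPQ allocations (e.g. unc_mchbm_rpq_inserts.pch0)
        dclk_ticks:  DCLK clockticks in the interval (unc_mchbm_clockticks)
        interval_s:  wall-clock length of the interval
        """
        if rpq_inserts == 0 or dclk_ticks == 0:
            return 0.0
        latency_cycles = rpq_occ / rpq_inserts
        dclk_hz = dclk_ticks / interval_s
        return latency_cycles / dclk_hz

    # 60-cycle average residency at 1.6 GHz DCLK -> 37.5 ns
    print(rpq_read_latency_seconds(2_400_000_000, 40_000_000, 1_600_000_000, 1.0))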
  unc_mchbm_wpq_read_hit               event=0x23
  unc_mchbm_wpq_read_hit.{pch0,pch1}   event=0x23, umask=0x1/0x2
  unc_mchbm_wpq_write_hit              event=0x24
  unc_mchbm_wpq_write_hit.{pch0,pch1}  event=0x24, umask=0x1/0x2
      Write Pending Queue CAM match: counts requests that hit in the WPQ. The iMC allows
      reads and writes to pass older writes to different addresses, so before a read or
      write is issued it first CAMs the WPQ for a pending write to the same address.
      Reads that hit pull their data directly from the WPQ instead of going to memory;
      writes that hit overwrite the existing data; partial writes that hit do not need
      an underfill read and simply update their relevant sections.

uncore_m / iMC (uncore memory)

  unc_m_act_count.all                  event=0x2,umask=0xff
      DRAM Activate count: activates due to read, write, underfill, or bypass. Activate
      commands open a page on the DRAM devices so that it can be read or written with a
      CAS; page misses can be calculated by subtracting the page-miss precharges from
      the number of Activates.
  unc_m_cas_count.all                  event=0x5,umask=0xff
      All DRAM CAS commands (reads and writes) issued on this channel
  unc_m_cas_count.pch0                 event=0x5,umask=0x40   Pseudo channel 0
  unc_m_cas_count.pch1                 event=0x5,umask=0x80   Pseudo channel 1
  unc_m_cas_count.rd                   event=0x5,umask=0xcf
      All DRAM read CAS commands issued on this channel, including underfills
  unc_m_cas_count.rd_reg               event=0x5,umask=0xc1
      Regular RD_CAS commands w/out auto-precharge (does not include underfills). This
      includes regular RD_CAS commands as well as those with implicit precharge; it is
      not filtered by major mode, as RD_CAS is not issued during WMM except for underfills.
  unc_m_cas_count.rd_underfill         event=0x5,umask=0xc4   DRAM underfill read CAS commands issued
  unc_m_cas_count.rd_pre_reg           event=0x5,umask=0xc2   Regular read CAS commands with precharge
  unc_m_cas_count.rd_pre_underfill     event=0x5,umask=0xc8   Underfill read CAS commands with precharge
  unc_m_cas_count.wr                   event=0x5,umask=0xf0
      All DRAM write CAS commands issued on this channel
  unc_m_cas_count.wr_nonpre            event=0x5,umask=0xd0   DRAM WR_CAS commands w/o auto-precharge
  unc_m_cas_count.wr_pre               event=0x5,umask=0xe0   DRAM WR_CAS commands with precharge
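Since partial writes that miss the WPQ CAM trigger underfill reads (per the WPQ CAM match description above), the underfill share of read CAS traffic hints at partial-write pressure. A small sketch with placeholder counts from unc_m_cas_count.rd and .rd_underfill:

    # Underfill share of DRAM read CAS traffic, as a rough partial-write indicator.
    # Placeholder counter values.

    def underfill_read_fraction(rd_all: int, rd_underfill: int) -> float:
        """Share of DRAM read CAS commands that were underfill reads."""
        return rd_underfill / rd_all if rd_all else 0.0

    print(f"{underfill_read_fraction(10_000_000, 400_000):.1%}")  # -> 4.0%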
  unc_m_cas_issued_req_len.{pch0,pch1}     event=0x6, umask=0x40/0x80   Pseudo channel 0/1
  unc_m_cas_issued_req_len.rd_64b          event=0x6,umask=0xc1   Read CAS in regular mode (64B), pseudochannel 0
  unc_m_cas_issued_req_len.rd_ufill_64b    event=0x6,umask=0xc2   Underfill read CAS in regular mode (64B), pseudochannel 1
  unc_m_cas_issued_req_len.rd_32b          event=0x6,umask=0xc8   Read CAS in interleaved mode (32B)
  unc_m_cas_issued_req_len.rd_ufill_32b    event=0x6,umask=0xd0   Underfill read CAS in interleaved mode (32B)
  unc_m_cas_issued_req_len.wr_64b          event=0x6,umask=0xc4   Write CAS in regular mode (64B), pseudochannel 0
  unc_m_cas_issued_req_len.wr_32b          event=0x6,umask=0xe0   Write CAS in interleaved mode (32B)
  unc_m_clockticks                     event=0x1,umask=0x1
      IMC clockticks at DCLK frequency: DRAM DCLK cycles while the event is enabled
  unc_m_hclockticks                    event=0x1
      IMC clockticks at HCLK frequency: DRAM HCLK cycles while the event is enabled
  unc_m_dram_pre_all                   event=0x44,umask=0x3
      DRAM precharge-all commands: counts how many times the precharge-all command was sent
  unc_m_pcls.{rd,wr,total}             event=0xa0, umask=0x5/0xa/0xf
  unc_m_pmm_rpq_inserts                event=0xe3
      Read requests allocated in the PMM Read Pending Queue
  unc_m_pmm_rpq_occupancy.{all_sch0,all_sch1,no_gnt_sch0,no_gnt_sch1,gnt_wait_sch0,gnt_wait_sch1}
      event=0xe0, umask=0x1/0x2/0x4/0x8/0x10/0x20
      Per-cycle occupancy of the PMM Read Pending Queue, accumulated
  unc_m_pmm_wpq_cycles_ne              event=0xe5   PMM (for IXP) write queue cycles not empty
  unc_m_pmm_wpq_inserts                event=0xe7
      Write requests allocated in the PMM Write Pending Queue
  unc_m_pmm_wpq_occupancy.{all_sch0,all_sch1,all}   event=0xe4, umask=0x1/0x2/0x3
      Per-cycle occupancy of the Write Pending Queue to the PMM DIMM, accumulated
  unc_m_pmm_wpq_occupancy.{cas,pwr}    event=0xe4, umask=0xc/0x30
      Per-cycle occupancy of the Write Pending Queue to the IXP DIMM, accumulated
  unc_m_power_channel_ppd              event=0x85
      Cycles when all ranks in the channel are in PPD mode. With IBT=off enabled this
      counts those cycles directly; otherwise it counts the cycles when PPD could have
      been taken advantage of.
  unc_m_power_cke_cycles.{low_0,low_1,low_2,low_3}   event=0x47, umask=0x1/0x2/0x4/0x8
      CKE_ON_CYCLES by rank (DIMM ID): cycles spent in CKE ON mode for the selected
      rank. If multiple selected ranks are in CKE ON mode at once, the counter ONLY
      increments by one rather than accumulating, so multiple counters are needed to
      track multiple ranks simultaneously. The CKE modes (APD, PPDS, PPDF) are not
      distinguished; that can be determined from the system programming. Commonly used
      with Invert to get the number of cycles in power-saving mode; Edge Detect is also
      useful here. Do NOT combine Invert with Edge Detect (this just confuses the system
      and is not necessary).
  unc_m_power_crit_throttle_cycles.{slot0,slot1}     event=0x86, umask=0x1/0x2
      Cycles while the iMC is throttled by thermal constraints or by PCU throttling (the
      two cannot be distinguished), filterable by rank; if multiple selected ranks are
      throttled at the same time, the counter only increments by 1. Thermal throttling
      is performed per DIMM, with 3 DIMMs per channel; this ID filters by DIMM.
  unc_m_power_self_refresh             event=0x43
      Clock-enabled self-refresh: cycles when the iMC is in self-refresh while it still
      has a clock, as happens in some package C-states (for example, the PCU may ask the
      iMC to enter self-refresh while some cores are still processing; one use is Monroe
      technology). Self-refresh during package C3 and C6 cannot be counted because the
      iMC has no clock at that time.
  unc_m_pre_count.{rd_pch0,wr_pch0,ufill_pch0,pgt_pch0,rd_pch1,wr_pch1,ufill_pch1,pgt_pch1,rd,wr,ufill,pgt,all}
      event=0x3, umask=0x1/0x2/0x4/0x8/0x10/0x20/0x40/0x80/0x11/0x22/0x44/0x88/0xff
      DRAM Precharge commands sent on this channel: precharge due to read on page miss
      (from the read bank scheduler), due to write on page miss (from the write bank
      scheduler), due to underfill, or from the page table (PGT, equivalent to
      PAGE_EMPTY), per pseudo channel and combined.
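The CKE description suggests using Invert to count power-saving cycles. A sketch under the assumption that the uncore_imc PMU on the target exposes event/umask/inv format fields (check /sys/bus/event_source/devices/uncore_imc_*/format/ to confirm); the perf command in the comment and the counter values are illustrative, not measured:

    # With inv=1 the CKE_ON counter counts cycles the rank is NOT in CKE ON mode,
    # i.e. cycles in a power-saving state; residency is that count over DCLK cycles.
    # Hypothetical collection command:
    #   perf stat -a -e 'uncore_imc_0/event=0x47,umask=0x1,inv=1/' \
    #                -e 'uncore_imc_0/event=0x1,umask=0x1/' sleep 1

    def cke_powersave_residency(cke_off_cycles: int, dclk_cycles: int) -> float:
        """Fraction of DCLK cycles the selected rank spent outside CKE ON mode."""
        return cke_off_cycles / dclk_cycles if dclk_cycles else 0.0

    print(f"{cke_powersave_residency(320_000_000, 1_600_000_000):.1%}")  # -> 20.0%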
  unc_m_rdb_full                       event=0x19
      Cycles where the read buffer holds more than UMASK elements; includes reads to
      both DDR and PMEM. NOTE: the umask must be set to the maximum number of elements
      in the queue (24 entries for SPR).
  unc_m_rdb_inserts                    event=0x17,umask=0x3
      Inserts into the read buffer destined for DDR (does not count reads destined for PMEM)
  unc_m_rdb_inserts.{pch0,pch1}        event=0x17, umask=0x1/0x2
  unc_m_rdb_ne / unc_m_rdb_not_empty   event=0x18,umask=0x3
      Cycles with at least one element in the read buffer; includes reads to both DDR and PMEM
  unc_m_rdb_ne.{pch0,pch1}             event=0x18, umask=0x1/0x2
  unc_m_rdb_occupancy                  event=0x1a
      Elements in the read buffer, including reads to both DDR and PMEM
  unc_m_rpq_inserts.{pch0,pch1}        event=0x10, umask=0x1/0x2
      Read Pending Queue allocations: reads allocate into the RPQ soon after entering
      the memory controller (credits are needed before the HA can send the request to
      the iMC) and deallocate once the CAS command has been issued to memory. Includes
      both ISOCH and non-ISOCH requests.
  unc_m_rpq_occupancy_pch0             event=0x80
  unc_m_rpq_occupancy_pch1             event=0x81
      Read Pending Queue occupancy, accumulated each cycle: gives average occupancy
      (with cycles-not-empty) and average latency (with allocations).
  unc_m_sb_accesses.{rd_accepts,rd_rejects,wr_accepts,wr_rejects,nm_rd_cmps,nm_wr_cmps,fm_rd_cmps,fm_wr_cmps}
      event=0xd2, umask=0x1/0x2/0x4/0x8/0x10/0x20/0x40/0x80
      Scoreboard accesses: read/write accepts and rejects, and near-memory / far-memory
      read and write completions. (The briefs for these entries are shifted relative to
      the event names in the source table; the names are taken as authoritative here.)
  unc_m_sb_accesses.accepts            event=0xd2,umask=0x5   Scoreboard accepts
  unc_m_sb_accesses.rejects            event=0xd2,umask=0xa   Scoreboard rejects
  unc_m_sb_canary.{alloc,dealloc,vld,nm_rd_starved,nm_wr_starved,fm_rd_starved,fm_wr_starved,fm_tgr_wr_starved}
      event=0xd9, umask=0x1/0x2/0x4/0x8/0x10/0x20/0x40/0x80
      Scoreboard canary: alloc, dealloc, valid, and near/far-memory read/write starved.
      (Briefs likewise shifted in the source; names taken as authoritative.)
  unc_m_sb_inserts.{rds,wrs,pmm_rds,pmm_wrs,block_rds,block_wrs}
      event=0xd6, umask=0x1/0x2/0x4/0x8/0x10/0x20
      Scoreboard inserts: reads, writes, persistent-memory reads and writes,
      block-region reads and writes
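The accepts/rejects pair above makes a simple health metric for the memory-controller scoreboard. A sketch with placeholder counts from unc_m_sb_accesses.accepts and .rejects over one interval:

    # Reject rate of scoreboard accesses; high values suggest set/address conflicts
    # (see the unc_m_sb_reject breakdown below). Placeholder values.

    def sb_reject_rate(accepts: int, rejects: int) -> float:
        """Fraction of scoreboard accesses that were rejected."""
        total = accepts + rejects
        return rejects / total if total else 0.0

    print(f"{sb_reject_rate(9_500_000, 500_000):.1%}")  # -> 5.0%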
unc_m_sb_occupancy.block_wrs | uncore memory | event=0xd5,umask=0x40 | Scoreboard Occupancy: Block region writes.
unc_m_sb_occupancy.pmm_rds | uncore memory | event=0xd5,umask=0x4 | Scoreboard Occupancy: Persistent Mem reads.
unc_m_sb_occupancy.pmm_wrs | uncore memory | event=0xd5,umask=0x8 | Scoreboard Occupancy: Persistent Mem writes.
unc_m_sb_occupancy.rds | uncore memory | event=0xd5,umask=0x1 | Scoreboard Occupancy: Reads.
unc_m_sb_pref_inserts.all | uncore memory | event=0xda,umask=0x1 | Scoreboard Prefetch Inserts: All.
unc_m_sb_pref_inserts.ddr | uncore memory | event=0xda,umask=0x2 | Scoreboard Prefetch Inserts: DDR4.
unc_m_sb_pref_inserts.pmm | uncore memory | event=0xda,umask=0x4 | Scoreboard Prefetch Inserts: PMM.
unc_m_sb_pref_occupancy.all | uncore memory | event=0xdb,umask=0x1 | Scoreboard Prefetch Occupancy: All.
unc_m_sb_pref_occupancy.ddr | uncore memory | event=0xdb,umask=0x2 | Scoreboard Prefetch Occupancy: DDR4.
unc_m_sb_pref_occupancy.pmm | uncore memory | event=0xdb,umask=0x4 | Scoreboard Prefetch Occupancy: Persistent Mem.
unc_m_sb_reject.canary | uncore memory | event=0xd4,umask=0x8 | Number of Scoreboard Requests Rejected.
unc_m_sb_reject.ddr_early_cmp | uncore memory | event=0xd4,umask=0x20 | Number of Scoreboard Requests Rejected.
unc_m_sb_reject.fm_addr_cnflt | uncore memory | event=0xd4,umask=0x2 | Scoreboard Requests Rejected: FM requests rejected due to full address conflict.
unc_m_sb_reject.nm_set_cnflt | uncore memory | event=0xd4,umask=0x1 | Scoreboard Requests Rejected: NM requests rejected due to set conflict.
unc_m_sb_reject.patrol_set_cnflt | uncore memory | event=0xd4,umask=0x4 | Scoreboard Requests Rejected: Patrol requests rejected due to set conflict.
unc_m_sb_strv_alloc.fm_rd | uncore memory | event=0xd7,umask=0x2 | Far Mem Read - Set.
unc_m_sb_strv_alloc.fm_tgr | uncore memory | event=0xd7,umask=0x10 | Near Mem Read - Clear.
unc_m_sb_strv_alloc.fm_wr | uncore memory | event=0xd7,umask=0x8 | Far Mem Write - Set.
unc_m_sb_strv_alloc.nm_rd | uncore memory | event=0xd7,umask=0x1 | Near Mem Read - Set.
unc_m_sb_strv_alloc.nm_wr | uncore memory | event=0xd7,umask=0x4 | Near Mem Write - Set.
unc_m_sb_strv_dealloc.fm_rd | uncore memory | event=0xde,umask=0x2 | Far Mem Read - Set.
unc_m_sb_strv_dealloc.fm_tgr | uncore memory | event=0xde,umask=0x10 | Near Mem Read - Clear.
unc_m_sb_strv_dealloc.fm_wr | uncore memory | event=0xde,umask=0x8 | Far Mem Write - Set.
unc_m_sb_strv_dealloc.nm_rd | uncore memory | event=0xde,umask=0x1 | Near Mem Read - Set.
unc_m_sb_strv_dealloc.nm_wr | uncore memory | event=0xde,umask=0x4 | Near Mem Write - Set.
unc_m_sb_strv_occ.fm_rd | uncore memory | event=0xd8,umask=0x2 | Far Mem Read.
unc_m_sb_strv_occ.fm_tgr | uncore memory | event=0xd8,umask=0x10 | Near Mem Read - Clear.
unc_m_sb_strv_occ.fm_wr | uncore memory | event=0xd8,umask=0x8 | Far Mem Write.
unc_m_sb_strv_occ.nm_rd | uncore memory | event=0xd8,umask=0x1 | Near Mem Read.
unc_m_sb_strv_occ.nm_wr | uncore memory | event=0xd8,umask=0x4 | Near Mem Write.
unc_m_tagchk.hit | uncore memory | event=0xd3,umask=0x1 | 2LM Tag check hit in near memory cache (DDR4).
unc_m_tagchk.miss_clean | uncore memory | event=0xd3,umask=0x2 | 2LM Tag check miss, no data at this line.
unc_m_tagchk.miss_dirty | uncore memory | event=0xd3,umask=0x4 | 2LM Tag check miss, existing data may be evicted to PMM.
unc_m_tagchk.nm_rd_hit | uncore memory | event=0xd3,umask=0x8 | 2LM Tag check hit due to memory read.
unc_m_tagchk.nm_wr_hit | uncore memory | event=0xd3,umask=0x10 | 2LM Tag check hit due to memory write.
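The unc_m_tagchk events split 2LM tag checks into hits and clean/dirty misses, so a near-memory cache hit rate falls out as a simple ratio. A sketch with hypothetical counter values (real values would come from reading the three events above):

    hit        = 900_000_000   # unc_m_tagchk.hit
    miss_clean =  60_000_000   # unc_m_tagchk.miss_clean
    miss_dirty =  40_000_000   # unc_m_tagchk.miss_dirty

    total = hit + miss_clean + miss_dirty
    print(f"NM cache hit rate: {hit / total:.1%}")
    print(f"dirty-miss share (may evict to PMM): {miss_dirty / total:.1%}")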
unc_m_wpq_inserts.pch0 | uncore memory | event=0x20,umask=0x1 | Write Pending Queue Allocations: Counts the number of allocations into the Write Pending Queue (WPQ). This can then be used to calculate the average queuing latency (in conjunction with the WPQ occupancy count). The WPQ is used to schedule writes out to the memory controller and to track the writes. Requests allocate into the WPQ soon after they enter the memory controller and need credits for an entry in this buffer before being sent from the CHA to the iMC; they deallocate after being issued to DRAM. Write requests themselves are able to complete (from the perspective of the rest of the system) as soon as they have posted to the iMC.
unc_m_wpq_inserts.pch1 | uncore memory | event=0x20,umask=0x2 | Write Pending Queue Allocations (as above, for PCH1).
unc_m_wpq_occupancy_pch0 | uncore memory | event=0x82 | Write Pending Queue Occupancy: Accumulates the occupancies of the Write Pending Queue each cycle. This can then be used to calculate both the average queue occupancy (in conjunction with the number of cycles not empty) and the average latency (in conjunction with the number of allocations). Requests allocate into the WPQ soon after they enter the memory controller, need credits for an entry in this buffer before being sent from the HA to the iMC, and deallocate after being issued to DRAM. Write requests are able to complete, from the perspective of the rest of the system, as soon as they have posted to the iMC; this is not to be confused with actually performing the write to DRAM. The average latency for this queue is therefore not useful for deconstructing intermediate write latencies, so filtering is provided based on whether the request has posted or not: with the not-posted filter one can track how long writes spent in the iMC before completions were sent to the HA, while the posted filter shows how much queueing actually happens in the iMC before writes are issued to memory. High average occupancies will generally coincide with high write major mode counts.
unc_m_wpq_occupancy_pch1 | uncore memory | event=0x83 | Write Pending Queue Occupancy (as above, for PCH1).
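A sketch of how these raw iMC encodings translate into a perf invocation. This assumes the iMC PMUs are exposed under an uncore_imc name (as on typical Intel servers, where perf expands the prefix to all instances) and that ./workload is a placeholder; uncore events are counted system-wide, hence -a:

    import subprocess

    events = [
        "uncore_imc/event=0x20,umask=0x1/",  # unc_m_wpq_inserts.pch0
        "uncore_imc/event=0x82/",            # unc_m_wpq_occupancy_pch0
    ]
    subprocess.run(["perf", "stat", "-a", "-e", ",".join(events),
                    "--", "./workload"])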
unc_p_clockticks | uncore power | event=0x1 | PCU PCLK Clockticks: Number of PCU PCLK clock cycles while the event is enabled.
unc_p_fivr_ps_ps0_cycles | uncore power | event=0x75 | Phase Shed 0 Cycles: Cycles spent in phase-shedding power state 0.
unc_p_fivr_ps_ps1_cycles | uncore power | event=0x76 | Phase Shed 1 Cycles: Cycles spent in phase-shedding power state 1.
unc_p_fivr_ps_ps2_cycles | uncore power | event=0x77 | Phase Shed 2 Cycles: Cycles spent in phase-shedding power state 2.
unc_p_fivr_ps_ps3_cycles | uncore power | event=0x78 | Phase Shed 3 Cycles: Cycles spent in phase-shedding power state 3.
unc_p_freq_clip_avx256 | uncore power | event=0x49 | AVX256 Frequency Clipping.
unc_p_freq_clip_avx512 | uncore power | event=0x4a | AVX512 Frequency Clipping.
unc_p_freq_max_limit_thermal_cycles | uncore power | event=0x4 | Thermal Strongest Upper Limit Cycles: Number of cycles any frequency is reduced due to a thermal limit. Counts only while throttling is occurring.
unc_p_freq_max_power_cycles | uncore power | event=0x5 | Power Strongest Upper Limit Cycles: Counts the number of cycles when power is the upper limit on frequency.
unc_p_freq_min_io_p_cycles | uncore power | event=0x73 | IO P Limit Strongest Lower Limit Cycles: Counts the number of cycles when the IO P Limit is preventing the frequency from dropping lower. This algorithm monitors the needs of the IO subsystem on both local and remote sockets and maintains a frequency high enough for good IO bandwidth, which matters when all IA cores on a socket are idle but high IO bandwidth is still desired.
unc_p_freq_trans_cycles | uncore power | event=0x74 | Cycles spent changing Frequency: Counts the number of cycles when the system is changing frequency. Cannot be filtered by thread ID. Can be combined with the occupancy counter that monitors the number of threads in C0 to estimate the performance impact of frequency transitions.
unc_p_memory_phase_shedding_cycles | uncore power | event=0x2f | Memory Phase Shedding Cycles: Counts the number of cycles that the PCU has triggered memory phase shedding, a mode that can be run in the iMC physicals that saves power at the expense of additional latency.
unc_p_pkg_residency_c0_cycles | uncore power | event=0x2a | Package C State Residency - C0: Counts the number of cycles the package was in C0. Can be used with edge detect to count C0 entrances (or exits, using invert). Residency events do not include transition times.
unc_p_pkg_residency_c2e_cycles | uncore power | event=0x2b | Package C State Residency - C2E: Counts the number of cycles the package was in C2E. Can be used with edge detect to count C2E entrances (or exits, using invert). Residency events do not include transition times.
unc_p_pkg_residency_c6_cycles | uncore power | event=0x2d | Package C State Residency - C6: Counts the number of cycles the package was in C6. Can be used with edge detect to count C6 entrances (or exits, using invert). Residency events do not include transition times.
unc_p_pmax_throttled_cycles | uncore power | event=0x6 | UNC_P_PMAX_THROTTLED_CYCLES.
unc_p_power_state_occupancy_cores_c0 | uncore power | event=0x35 | Number of cores in C0: An occupancy event that tracks the number of cores in the chosen C-state. Can be used by itself to get the average number of cores in that C-state, with thresholding to generate histograms, or with other PCU events and occupancy triggering to capture other details.
unc_p_power_state_occupancy_cores_c3 | uncore power | event=0x36 | Number of cores in C3 (occupancy event, as above).
unc_p_power_state_occupancy_cores_c6 | uncore power | event=0x37 | Number of cores in C6 (occupancy event, as above).
unc_p_prochot_external_cycles | uncore power | event=0xa | External Prochot: Counts the number of cycles spent in external PROCHOT mode, triggered when an off-die sensor determines that something off-die (like DRAM) is too hot and must throttle to avoid damaging the chip.
unc_p_prochot_internal_cycles | uncore power | event=0x9 | Internal Prochot: Counts the number of cycles spent in internal PROCHOT mode, triggered when an on-die sensor determines the die is too hot and must throttle to avoid damaging the chip.
unc_p_total_transition_cycles | uncore power | event=0x72 | Total Core C State Transition Cycles: Number of cycles spent performing core C-state transitions across all cores.
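Because the residency events count PCLK cycles spent in a given package C-state while unc_p_clockticks counts all PCLK cycles in the same window, residency percentages are a simple ratio. A sketch with hypothetical counts:

    clockticks = 2_000_000_000   # unc_p_clockticks over the measurement window
    residency = {
        "C0": 1_400_000_000,     # unc_p_pkg_residency_c0_cycles
        "C6":   450_000_000,     # unc_p_pkg_residency_c6_cycles
    }
    for state, cycles in residency.items():
        print(f"package {state} residency: {cycles / clockticks:.1%}")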
core_reject_l2q.all | cache | event=0x31,period=200003 | Requests rejected by the L2Q: Counts the number of demand and L1 prefetcher requests rejected by the L2Q due to a full or nearly full condition, which likely indicates back pressure from the L2Q. Also counts requests that would have gone directly to the XQ but were rejected due to a full or nearly full condition, indicating back pressure from the IDI link. The L2Q may also reject transactions from a core to ensure fairness between cores, or to delay a core's dirty eviction when the address conflicts with incoming external snoops.
dl1.dirty_eviction | cache | event=0x51,period=200003,umask=0x1 | L1 cache evictions for dirty data: Counts when a modified (dirty) cache line is evicted from the data L1 cache and needs to be written back to memory. No count occurs if the evicted line is clean and hence does not require a writeback.
fetch_stall.icache_fill_pending_cycles | cache | event=0x86,period=200003,umask=0x2 | Cycles code fetch is stalled due to an outstanding ICache miss: the decoder queue is able to accept bytes, but the fetch unit cannot provide them because of an ICache miss. Note: this is not the same as the total number of cycles spent retrieving instruction cache lines from the memory hierarchy.
l2_reject_xq.all | cache | event=0x30,period=200003 | Requests rejected by the XQ: Counts the number of demand and prefetch transactions that the L2 XQ rejects due to a full or near-full condition, which likely indicates back pressure from the intra-die interconnect (IDI) fabric. The XQ may reject transactions from the L2Q (non-cacheable requests), L2 misses, and L2 write-back victims.
longest_lat_cache.miss | cache | event=0x2e,period=200003,umask=0x41 | L2 cache request misses: Counts memory requests originating from the core that miss in the L2 cache.
longest_lat_cache.reference | cache | event=0x2e,period=200003,umask=0x4f | L2 cache requests: Counts memory requests originating from the core that reference a cache line in the L2 cache.
mem_load_uops_retired.dram_hit | cache | event=0xd1,period=200003,umask=0x80 | Loads retired that came from DRAM (precise event; supports address when precise): Counts memory load uops retired where the data is retrieved from DRAM. Counted at retirement, so speculative loads are ignored. A memory load can hit (or miss) the L1 cache, hit (or miss) the L2 cache, hit DRAM, hit in the WCB, or receive a HITM response.
mem_load_uops_retired.hitm | cache | event=0xd1,period=200003,umask=0x20 | Memory uops retired where a cross-core or cross-module HITM occurred (precise event; supports address when precise): Counts load uops retired where the cache line containing the data was in the modified state of another core's or module's cache (HITM); that is, when the load address was checked by other caching agents (typically another processor) in the system, one of them indicated that it had a dirty copy of the data. Loads that obtain a HITM response incur greater latency than is typical for a load, and since HITM means some other processor had the data in its cache, it implies the data was shared between processors, or was potentially a lock or semaphore value. This event is useful for locating sharing, false sharing, and contended locks.
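The hitm event is the natural starting point for hunting false sharing. A sketch of driving perf from Python, assuming the event alias above is exposed by the perf tool on this machine and that ./workload is a placeholder:

    import subprocess

    # Sample retired loads that received a HITM response; :pp requests precise
    # sampling so samples attribute to the right loads, and -d records the
    # data addresses so samples can be grouped by cache line afterwards.
    subprocess.run(["perf", "record", "-e", "mem_load_uops_retired.hitm:pp",
                    "-d", "--", "./workload"])
    subprocess.run(["perf", "report", "--sort=symbol"])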
mem_load_uops_retired.l1_hit | cache | event=0xd1,period=200003,umask=0x1 | Load uops retired that hit the L1 data cache (precise event; supports address when precise).
mem_load_uops_retired.l1_miss | cache | event=0xd1,period=200003,umask=0x8 | Load uops retired that missed the L1 data cache (precise event; supports address when precise).
mem_load_uops_retired.l2_hit | cache | event=0xd1,period=200003,umask=0x2 | Load uops retired that hit the L2 cache (precise event; supports address when precise).
mem_load_uops_retired.l2_miss | cache | event=0xd1,period=200003,umask=0x10 | Load uops retired that missed the L2 cache (precise event; supports address when precise).
mem_load_uops_retired.wcb_hit | cache | event=0xd1,period=200003,umask=0x40 | Loads retired that hit the WCB (precise event; supports address when precise): Counts memory load uops retired where the data is retrieved from the WCB (or fill buffer), meaning the load found its data while that data was in the process of being brought into the L1 cache. Typically a load receives this indication when some other load or prefetch missed the L1 and was in the process of retrieving the cache line containing the data, but that process had not yet finished (and written the data back into the cache). For example, consider loads X and Y, both referencing the same cache line that is not in the L1 cache. If load X misses first, it obtains a WCB (or fill buffer) and begins requesting the data; when load Y requests the data, it will hit either the WCB or the L1 cache, depending on exactly when Y's request occurs.
mem_uops_retired.all | cache | event=0xd0,period=200003,umask=0x83 | Memory uops retired (precise event; supports address when precise): Counts the number of memory uops retired that are a load, a store, or both.
mem_uops_retired.all_loads | cache | event=0xd0,period=200003,umask=0x81 | Load uops retired (precise event; supports address when precise).
mem_uops_retired.all_stores | cache | event=0xd0,period=200003,umask=0x82 | Store uops retired (precise event; supports address when precise).
mem_uops_retired.lock_loads | cache | event=0xd0,period=200003,umask=0x21 | Locked load uops retired (precise event; supports address when precise): Counts locked memory uops retired, including regular locks and bus locks (to count bus locks only, see the offcore response event). A locked access is one with a lock prefix, or an exchange to memory. See the SDM for a complete description of which memory load accesses are locks.
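Since the mem_load_uops_retired umasks partition retired loads by data source, the counts turn directly into a where-did-my-loads-come-from breakdown. A sketch with hypothetical counts; real numbers would come from counting the events above together over one run:

    loads = {                    # hypothetical counts of mem_load_uops_retired.*
        "l1_hit":   9_000_000_000,
        "l2_hit":     600_000_000,
        "wcb_hit":    150_000_000,
        "dram_hit":    80_000_000,
        "hitm":         5_000_000,
    }
    total = sum(loads.values())
    for source, count in loads.items():
        print(f"{source:>8}: {count / total:6.2%} of retired loads")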
mem_uops_retired.split | cache | event=0xd0,period=200003,umask=0x43 | Memory uops retired that split a cache line (precise event; supports address when precise): Counts memory uops retired where the requested data spans a 64-byte cache line boundary.
mem_uops_retired.split_loads | cache | event=0xd0,period=200003,umask=0x41 | Load uops retired that split a cache line (precise event; supports address when precise).
mem_uops_retired.split_stores | cache | event=0xd0,period=200003,umask=0x42 | Store uops retired that split a cache line (precise event; supports address when precise).
offcore_response | cache | event=0xb7,period=100007,umask=0x1 | Requires MSR_OFFCORE_RESP[0,1] to specify request type and response (duplicated for both MSRs).

All offcore_response.<request>.<response> events share the encoding event=0xb7,period=100007,umask=0x1 plus an offcore_rsp value, require MSR_OFFCORE_RESP[0,1] to specify request type and response, and are duplicated for both MSRs. Each event counts the named requests that received the named response, and its offcore_rsp value is the OR of one request mask and one response mask from the tables below (e.g. offcore_response.any_data_rd.l2_hit, offcore_rsp=0x0000043091, counts data reads, demand & prefetch, that hit the L2 cache).

Request types (low offcore_rsp bits):
  demand_data_rd           0x0001  demand cacheable data reads of full cache lines
  demand_rfo               0x0002  demand reads for ownership (RFO) generated by a write to a full data cache line
  demand_code_rd           0x0004  demand instruction cacheline and I-side prefetch requests that miss the instruction cache
  corewb                   0x0008  writeback transactions caused by L1 or L2 cache evictions
  pf_l2_data_rd            0x0010  data cacheline reads generated by the hardware L2 cache prefetcher
  pf_l2_rfo                0x0020  reads for ownership (RFO) requests generated by the L2 prefetcher
  any_rfo                  0x0022  reads for ownership (RFO) requests (demand & prefetch)
  partial_reads            0x0080  demand data partial reads, including data in uncacheable (UC) or uncacheable write combining (USWC) memory types
  partial_writes           0x0100  demand write requests (RFO) generated by a write to a partial data cache line, including writes to uncacheable (UC), write through (WT), and write protected (WP) memory types
  bus_locks                0x0400  bus lock and split lock requests
  full_streaming_stores    0x0800  full cache line data writes to uncacheable write combining (USWC) memory regions and full cache-line non-temporal writes
  sw_prefetch              0x1000  data cache line requests by software prefetch instructions
  pf_l1_data_rd            0x2000  data cache line reads generated by the hardware L1 data cache prefetcher
  any_pf_data_rd           0x3010  data reads generated by L1 or L2 prefetchers
  any_data_rd              0x3091  data reads (demand & prefetch)
  any_read                 0x32b7  data read, code read, and read for ownership (RFO) requests (demand & prefetch)
  partial_streaming_stores 0x4000  partial cache line data writes to uncacheable write combining (USWC) memory regions
  streaming_stores         0x4800  any data writes to uncacheable write combining (USWC) memory regions
  any_request              0x8000  requests to the uncore subsystem

Response types (high offcore_rsp bits):
  any_response                          0x0000010000  any transaction response from the uncore subsystem
  l2_hit                                0x0000040000  hit the L2 cache
  l2_miss.snoop_miss_or_no_snoop_needed 0x0200000000  true miss for the L2 cache with a snoop miss in the other processor module
  l2_miss.hit_other_core_no_fwd         0x0400000000  miss the L2 cache with a snoop hit in the other processor module, no data forwarding required
  l2_miss.hitm_other_core               0x1000000000  miss the L2 cache with a snoop hit in the other processor module, data forwarding required
  l2_miss.any                           0x3600000000  miss the L2 cache
  outstanding                           0x4000000000  outstanding, per cycle, from the time of the L2 miss to when any response is received

Combinations defined in this table:
  any_data_rd, any_pf_data_rd, any_read, any_rfo, corewb, full_streaming_stores, partial_streaming_stores, pf_l1_data_rd, pf_l2_data_rd, pf_l2_rfo, sw_prefetch: l2_hit and all four l2_miss responses.
  demand_data_rd, demand_rfo: l2_hit, all four l2_miss responses, and outstanding.
  demand_code_rd: l2_hit, l2_miss.any, l2_miss.hit_other_core_no_fwd, l2_miss.snoop_miss_or_no_snoop_needed, and outstanding.
  any_request: any_response, l2_hit, l2_miss.hitm_other_core, l2_miss.hit_other_core_no_fwd, and l2_miss.snoop_miss_or_no_snoop_needed.
  bus_locks: any_response only.
  partial_reads, partial_writes: l2_miss.any only.
  streaming_stores: l2_hit and l2_miss.any only.
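To use one of these combinations from the command line, perf can take the raw encoding directly and programs MSR_OFFCORE_RESP behind the scenes via the offcore_rsp format term. A sketch, assuming the core PMU is exposed as cpu with an offcore_rsp format term (as on typical Intel systems) and that ./workload is a placeholder:

    import subprocess

    # Demand data reads that missed L2:
    # offcore_rsp = l2_miss.any (0x3600000000) | demand_data_rd (0x0001).
    event = "cpu/event=0xb7,umask=0x1,offcore_rsp=0x3600000001/"
    subprocess.run(["perf", "stat", "-e", event, "--", "./workload"])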
cycles_div_busy.fpdiv  floating point  event=0xcd,period=200003,umask=0x2
    Counts core cycles the floating point divide unit is busy.

machine_clears.fp_assist  floating point  event=0xc3,period=200003,umask=0x4
    Counts machine clears due to floating point (FP) operations needing assists. For instance, if the result was a floating point denormal, the hardware clears the pipeline and reissues uops to produce the correct IEEE-compliant denormal result.

uops_retired.fpdiv  floating point  event=0xc2,period=2000003,umask=0x8
    Counts the number of floating point divide uops retired (Must be precise).

baclears.all  frontend  event=0xe6,period=200003,umask=0x1
    Counts the number of times a BACLEAR is signaled for any reason, including, but not limited to, indirect branch/call, Jcc (Jump on Conditional Code/Jump if Condition is Met) branch, unconditional branch/call, and returns.

baclears.cond  frontend  event=0xe6,period=200003,umask=0x10
    Counts BACLEARS on Jcc branches.

baclears.return  frontend  event=0xe6,period=200003,umask=0x8
    Counts BACLEARS on return instructions.

decode_restriction.predecode_wrong  frontend  event=0xe9,period=200003,umask=0x1
    Counts the number of times the prediction (from the predecode cache) for instruction length is incorrect.

icache.accesses  frontend  event=0x80,period=200003,umask=0x3
    Counts requests to the Instruction Cache (ICache) for one or more bytes in an ICache line. The event strives to count on a cache-line basis, so that multiple fetches to a single cache line count as one ICACHE.ACCESSES: specifically, it counts when accesses from straight-line code cross the cache line boundary, or when a branch target is to a new line. This event counts differently than on Intel processors based on the Silvermont microarchitecture.

icache.hit  frontend  event=0x80,period=200003,umask=0x1
    Counts requests to the ICache for one or more bytes in an ICache line where that cache line is in the ICache (hit). The same per-line counting applies: multiple accesses that hit in a single cache line count as one ICACHE.HIT. Counts differently than on Silvermont-based processors.

icache.misses  frontend  event=0x80,period=200003,umask=0x2
    Counts requests to the ICache for one or more bytes in an ICache line where that cache line is not in the ICache (miss), with the same per-line counting. Counts differently than on Silvermont-based processors.

ms_decoded.ms_entry  frontend  event=0xe7,period=200003,umask=0x1
    Counts the number of times the Microcode Sequencer (MS) starts a flow of uops from the MSROM; it does not count every time a uop is read from the MSROM. The most common counted case is a micro-coded instruction encountered by the front end; other cases include an instruction that encounters a fault, trap, or microcode assist of any sort that initiates a flow of uops. The event counts MS startups for uops that are speculative and subsequently cleared by a branch mispredict or machine clear.

machine_clears.memory_ordering  memory  event=0xc3,period=200003,umask=0x2
    Counts machine clears due to memory ordering issues. This occurs when a snoop request happens and the machine is uncertain whether memory ordering will be preserved, because another core is in the process of modifying the data.

misalign_mem_ref.load_page_split  memory  event=0x13,period=200003,umask=0x2
    Counts when a memory load uop that spans a page boundary (a split) is retired (Must be precise).

misalign_mem_ref.store_page_split  memory  event=0x13,period=200003,umask=0x4
    Counts when a memory store uop that spans a page boundary (a split) is retired (Must be precise).

fetch_stall.all  other  event=0x86,period=200003
    Counts cycles that fetch is stalled for any reason: the decoder queue is able to accept bytes, but the fetch unit is unable to provide them. This includes cycles due to an ITLB miss, an ICache miss, and other events.

fetch_stall.itlb_fill_pending_cycles  other  event=0x86,period=200003,umask=0x1
    Counts cycles that fetch is stalled due to an outstanding ITLB miss: the decoder queue is able to accept bytes, but the fetch unit is unable to provide them because of the miss. Note: this is not the same as page-walk cycles to retrieve an instruction translation.
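Because icache.accesses carries umask=0x3 while icache.hit and icache.misses carry umask=0x1 and umask=0x2, ACCESSES is the union of HIT and MISSES, so a per-line hit rate follows directly. A worked example with made-up counter values:

    ICACHE.ACCESSES = ICACHE.HIT + ICACHE.MISSES    (umask 0x3 = 0x1 | 0x2)
    hit rate = ICACHE.HIT / ICACHE.ACCESSES

    e.g. HIT = 9,600,000 and MISSES = 400,000  =>  9.6e6 / 1.0e7 = 96%

Since all three events count once per cache line touched rather than once per fetch, this is a per-line hit rate, and it is not directly comparable with the same ratio measured on Silvermont-based parts.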
hw_interrupts.masked  other  event=0xcb,period=200003,umask=0x2
    Counts core cycles during which interrupts are masked (disabled): the count increments by 1 each core cycle that EFLAGS.IF is 0, regardless of whether interrupts are pending or not.

hw_interrupts.pending_and_masked  other  event=0xcb,period=200003,umask=0x4
    Counts core cycles during which there are pending interrupts but interrupts are masked (EFLAGS.IF = 0).

hw_interrupts.received  other  event=0xcb,period=203,umask=0x1
    Counts hardware interrupts received by the processor.

br_inst_retired.all_branches  pipeline  event=0xc4,period=200003
    Counts branch instructions retired, for all branch types; an architectural performance event (Must be precise).

br_inst_retired.all_taken_branches  pipeline  event=0xc4,period=200003,umask=0x80
    Counts taken branch instructions retired (Must be precise).

br_inst_retired.call  pipeline  event=0xc4,period=200003,umask=0xf9
    Counts near CALL branch instructions retired (Must be precise).

br_inst_retired.far_branch  pipeline  event=0xc4,period=200003,umask=0xbf
    Counts far branch instructions retired, including far jump, far call and return, and interrupt call and return (Must be precise).

br_inst_retired.ind_call  pipeline  event=0xc4,period=200003,umask=0xfb
    Counts near indirect CALL branch instructions retired (Must be precise).

br_inst_retired.jcc  pipeline  event=0xc4,period=200003,umask=0x7e
    Counts retired Jcc (Jump on Conditional Code/Jump if Condition is Met) branch instructions, both taken and not taken (Must be precise).

br_inst_retired.non_return_ind  pipeline  event=0xc4,period=200003,umask=0xeb
    Counts near indirect call or near indirect jmp branch instructions retired (Must be precise).

br_inst_retired.rel_call  pipeline  event=0xc4,period=200003,umask=0xfd
    Counts near relative CALL branch instructions retired (Must be precise).

br_inst_retired.return  pipeline  event=0xc4,period=200003,umask=0xf7
    Counts near return branch instructions retired (Must be precise).

br_inst_retired.taken_jcc  pipeline  event=0xc4,period=200003,umask=0xfe
    Counts retired Jcc branch instructions that were taken; does not count Jcc instructions that were not taken (Must be precise).

br_misp_retired.all_branches  pipeline  event=0xc5,period=200003
    Counts mispredicted branch instructions retired, including all branch types (Must be precise).

br_misp_retired.ind_call  pipeline  event=0xc5,period=200003,umask=0xfb
    Counts mispredicted near indirect CALL branch instructions retired, where the target address taken was not what the processor predicted (Must be precise).

br_misp_retired.jcc  pipeline  event=0xc5,period=200003,umask=0x7e
    Counts mispredicted retired Jcc branch instructions, both when the branch was supposed to be taken and when it was not supposed to be taken but the processor predicted the opposite condition (Must be precise).

br_misp_retired.non_return_ind  pipeline  event=0xc5,period=200003,umask=0xeb
    Counts mispredicted near indirect call or near indirect jmp branch instructions retired, where the target address taken was not what the processor predicted (Must be precise).

br_misp_retired.return  pipeline  event=0xc5,period=200003,umask=0xf7
    Counts mispredicted near RET branch instructions retired, where the return address taken was not what the processor predicted (Must be precise).

br_misp_retired.taken_jcc  pipeline  event=0xc5,period=200003,umask=0xfe
    Counts mispredicted retired Jcc branch instructions that were supposed to be taken but were predicted not taken (Must be precise).

cpu_clk_unhalted.core  pipeline  event=0x3c,period=2000003  (fixed event)
    Counts core cycles while the core is not in a halt state; the core enters the halt state when it runs the HLT instruction. In mobile systems the core frequency may change from time to time, so this event may have a changing ratio with regard to wall-clock time. Uses fixed counter 1; a PEBS record cannot be collected for this event.

cpu_clk_unhalted.core_p  pipeline  event=0x3c,period=2000003
    Core cycles when the core is not halted, on a (_P)rogrammable general-purpose performance counter.

cpu_clk_unhalted.ref  pipeline  event=0x0,umask=0x03,period=2000003
    Reference cycles when the core is not halted, on a programmable general-purpose performance counter.

cpu_clk_unhalted.ref_tsc  pipeline  event=0x0,umask=0x3,period=2000003  (fixed event)
    Counts reference cycles while the core is not in a halt state. This event is not affected by core frequency changes; it counts as if the core were running at its maximum frequency at all times. Uses fixed counter 2; a PEBS record cannot be collected for this event.
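br_inst_retired.all_branches and br_misp_retired.all_branches differ only in event code (0xc4 vs. 0xc5), so they pair naturally as one perf event group, which keeps both counters scheduled over exactly the same interval for a misprediction ratio. A minimal sketch along the lines of the earlier example; the grouping approach is an illustration, not something this event table prescribes, and error handling is abbreviated:

    /* brmisp_demo.c: branch misprediction ratio from one event group */
    #include <linux/perf_event.h>
    #include <sys/syscall.h>
    #include <sys/ioctl.h>
    #include <sys/types.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    static int open_raw(uint64_t config, int group_fd)
    {
        struct perf_event_attr attr;
        memset(&attr, 0, sizeof(attr));
        attr.size = sizeof(attr);
        attr.type = PERF_TYPE_RAW;
        attr.config = config;
        attr.disabled = (group_fd == -1); /* only the group leader starts disabled */
        attr.exclude_kernel = 1;
        attr.read_format = PERF_FORMAT_GROUP;
        return (int)syscall(SYS_perf_event_open, &attr, 0, -1, group_fd, 0);
    }

    int main(void)
    {
        int leader = open_raw(0x00c4, -1);     /* br_inst_retired.all_branches */
        int misp   = open_raw(0x00c5, leader); /* br_misp_retired.all_branches */
        if (leader < 0 || misp < 0) { perror("perf_event_open"); return 1; }

        ioctl(leader, PERF_EVENT_IOC_RESET,  PERF_IOC_FLAG_GROUP);
        ioctl(leader, PERF_EVENT_IOC_ENABLE, PERF_IOC_FLAG_GROUP);
        /* ... branchy workload here ... */
        ioctl(leader, PERF_EVENT_IOC_DISABLE, PERF_IOC_FLAG_GROUP);

        struct { uint64_t nr, values[2]; } data; /* PERF_FORMAT_GROUP layout */
        if (read(leader, &data, sizeof(data)) != sizeof(data) ||
            data.nr != 2 || data.values[0] == 0) return 1;
        printf("mispredict ratio: %.4f\n",
               (double)data.values[1] / (double)data.values[0]);
        return 0;
    }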
cycles_div_busy.all  pipeline  event=0xcd,period=2000003
    Counts core cycles if either divide unit is busy.

cycles_div_busy.idiv  pipeline  event=0xcd,period=200003,umask=0x1
    Counts core cycles the integer divide unit is busy.

inst_retired.any  pipeline  event=0xc0,period=2000003  (fixed event)
    Counts instructions that retire execution; for instructions that consist of multiple uops, it counts the retirement of the last uop. The counter continues counting during hardware interrupts, traps, and inside interrupt handlers. Uses fixed counter 0; a PEBS record cannot be collected for this event.

inst_retired.any_p  pipeline  event=0xc0,period=2000003
    The same count on a (_P)rogrammable general-purpose performance counter; an architectural performance event. Precise-event capable: the EventingRIP field in the PEBS record is precise to the address of the instruction that caused the event. Note: because PEBS records can be collected only on IA32_PMC0, only one event can use the PEBS facility at a time (Must be precise).

issue_slots_not_consumed.any  pipeline  event=0xca,period=200003
    Counts the number of issue slots per core cycle that were not consumed by the backend, due either to a full backend resource (RESOURCE_FULL) or to the processor recovering from some event (RECOVERY).

issue_slots_not_consumed.recovery  pipeline  event=0xca,period=200003,umask=0x2
    Counts the number of issue slots per core cycle that were not consumed because allocation is stalled waiting for a mispredicted jump to retire, or for other branch-like conditions (e.g. the event is relevant during certain microcode flows). Counts all issue slots blocked within this window, including slots where uops were not available in the Instruction Queue.

issue_slots_not_consumed.resource_full  pipeline  event=0xca,period=200003,umask=0x1
    Counts the number of issue slots per core cycle that were not consumed because a backend resource was full, including but not limited to the re-order buffer (ROB), reservation stations (RS), load/store buffers, physical registers, or any other needed machine resource that is currently unavailable. Uops must be available for consumption for this event to fire; if a uop is not available (the Instruction Queue is empty), this event does not count.

ld_blocks.4k_alias  pipeline  event=0x3,period=200003,umask=0x4
    Counts loads that block because their address modulo 4K matches a pending store (Must be precise).

ld_blocks.all_block  pipeline  event=0x3,period=200003,umask=0x10
    Counts any time a retiring load was blocked for any reason (Must be precise).

ld_blocks.data_unknown  pipeline  event=0x3,period=200003,umask=0x1
    Counts a load blocked from using a store forward because the store data was not available at the right time; the forward might occur subsequently when the data is available (Must be precise).

ld_blocks.store_forward  pipeline  event=0x3,period=200003,umask=0x2
    Counts a load blocked from using a store forward because of an address/size mismatch; only one of the loads blocked from each store is counted (Must be precise).

ld_blocks.utlb_miss  pipeline  event=0x3,period=200003,umask=0x8
    Counts loads blocked because their physical address was not found in the micro TLB (UTLB) (Must be precise).

machine_clears.all  pipeline  event=0xc3,period=200003
    Counts machine clears for any reason.

machine_clears.disambiguation  pipeline  event=0xc3,period=200003,umask=0x8
    Counts machine clears due to memory disambiguation: a load that has been issued conflicts with a previous unretired store whose address was not known at issue time but is later resolved to be the same as the load address.

machine_clears.smc  pipeline  event=0xc3,period=200003,umask=0x1
    Counts the number of times the processor detects that a program is writing to a code section and must perform a machine clear because of the modification. Self-modifying code (SMC) causes a severe penalty on all Intel(R) architecture processors.

uops_issued.any  pipeline  event=0xe,period=200003
    Counts uops issued by the front end and allocated into the back end. This includes uops that retire as well as uops that were speculatively executed but did not retire, such as uops issued in the shadow of a mispredicted branch, uops inserted during an assist (for example, for a denormal floating point result), and previously allocated uops canceled during a machine clear.

uops_not_delivered.any  pipeline  event=0x9c,period=200003
    Used to measure front-end inefficiency: counts when the front end of the machine is not delivering uops while the back end is not stalled, and can therefore identify whether the machine is truly front-end bound. Background: the pipeline divides into a front end, which fetches instructions, decodes them into uops, and puts them into a uop queue, and a back end, which allocates resources for those uops and executes them once all resources are ready. If the back end is not ready to accept uops, the stall is not a front-end bottleneck, so this event counts only when the back end is requesting more uops and the front end cannot provide them: if 3 uops are requested and none are delivered, the event counts 3; if only 1 is delivered, it counts 2; if 2 are delivered, it counts 1; and it counts nothing if all 3 are delivered or if the back end is stalled and requesting no uops at all. Counts therefore indicate missed opportunities for the front end to deliver a uop to the back end; typical causes are ICache misses, ITLB misses, and decoder restrictions that limit front-end bandwidth. Known issue: uops that require multiple allocation slots are not charged as a front-end 'not delivered' opportunity and are regarded as a back-end problem. For example, the INC instruction has one uop that requires 2 issue slots, so a stream of INC instructions will not count as UOPS_NOT_DELIVERED even though only one instruction can issue per clock; that low issue rate is considered a back-end issue.
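The counting rules above imply three issue slots per core cycle, so a top-down style front-end-bound fraction can be derived from this event. This is a back-of-the-envelope estimate: the 3-wide divisor is inferred from the description, not stated as a formula in the table.

    frontend_bound ~= UOPS_NOT_DELIVERED.ANY / (3 * CPU_CLK_UNHALTED.CORE)

    e.g. UOPS_NOT_DELIVERED.ANY = 1.2e9 over CPU_CLK_UNHALTED.CORE = 2.0e9 cycles
         => 1.2e9 / 6.0e9 = 0.20, roughly 20% of issue bandwidth lost to the front end.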
uops_retired.any  pipeline  event=0xc2,period=2000003
    Counts uops that retired (Must be precise).

uops_retired.idiv  pipeline  event=0xc2,period=2000003,umask=0x10
    Counts integer divide uops retired (Must be precise).

uops_retired.ms  pipeline  event=0xc2,period=2000003,umask=0x1
    Counts retired uops from the complex flows issued by the micro-sequencer (MS): both the uops of a micro-coded instruction and the uops that might be generated by a micro-coded assist (Must be precise).

itlb.miss  virtual memory  event=0x81,period=200003,umask=0x4
    Counts the number of times the machine was unable to find a translation in the Instruction Translation Lookaside Buffer (ITLB) for the linear address of an instruction fetch; it counts when new translations are filled into the ITLB. The event is speculative in nature, but it does not count translations (page walks) that are begun and not finished, or translations that finish but are not filled into the ITLB.

mem_uops_retired.dtlb_miss  virtual memory  event=0xd0,period=200003,umask=0x13
    Counts uops retired that had a DTLB miss on a load, a store, or either (umask 0x13 = 0x11 | 0x12, the loads and stores umasks below). Note that when two distinct memory operations to the same page miss the DTLB, only one of them is recorded as a DTLB miss. Supports address when precise (Must be precise).
mem_uops_retired.dtlb_miss_loads  virtual memory  event=0xd0,period=200003,umask=0x11
    Counts load uops retired that caused a DTLB miss. Supports address when precise (Must be precise).

mem_uops_retired.dtlb_miss_stores  virtual memory  event=0xd0,period=200003,umask=0x12
    Counts store uops retired that caused a DTLB miss. Supports address when precise (Must be precise).

page_walks.cycles  virtual memory  event=0x5,period=200003,umask=0x3
    Counts every core cycle a page walk is in progress due to either a data memory operation or an instruction fetch.

page_walks.d_side_cycles  virtual memory  event=0x5,period=200003,umask=0x1
    Counts every core cycle a D-side (data operation) page walk is in progress.

page_walks.i_side_cycles  virtual memory  event=0x5,period=200003,umask=0x2
    Counts every core cycle an I-side (instruction fetch) page walk is in progress.

core_reject_l2q.all  cache  event=0x31,period=200003
    Counts demand and L1-prefetcher requests rejected by the L2Q due to a full or nearly full condition, which likely indicates back pressure from the L2Q. It also counts requests that would have gone directly to the XQ but were rejected due to a full or nearly full condition, indicating back pressure from the IDI link. The L2Q may also reject transactions from a core to ensure fairness between cores, or to delay a core's dirty eviction when the address conflicts with an incoming external snoop.

dl1.replacement  cache  event=0x51,period=200003,umask=0x1
    Counts when a modified (dirty) cache line is evicted from the data L1 cache and needs to be written back to memory. No count occurs if the evicted line is clean, since it requires no writeback.
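Stepping back to the TLB entries above: since itlb.miss counts translations filled into the ITLB and page_walks.i_side_cycles counts every cycle an I-side walk is in progress, their ratio gives a rough average walk duration. This pairing is an inference, not a metric the table defines, and it is only approximate: itlb.miss excludes walks that are begun and not finished, while the cycle counter includes them.

    avg I-side walk duration ~= PAGE_WALKS.I_SIDE_CYCLES / ITLB.MISS    (cycles per filled translation)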
(duplicated for both MSRs)offcore_response.any_data_rd.l2_miss.snoop_miss_or_no_snoop_neededcacheCounts data reads (demand & prefetch) true miss for the L2 cache with a snoop miss in the other processor moduleevent=0xb7,period=100007,umask=1,offcore_rsp=0x020000309100Counts data reads (demand & prefetch) true miss for the L2 cache with a snoop miss in the other processor module.  Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)offcore_response.any_data_rd.outstandingcacheCounts data reads (demand & prefetch) outstanding, per cycle, from the time of the L2 miss to when any response is receivedevent=0xb7,period=100007,umask=1,offcore_rsp=0x400000309100Counts data reads (demand & prefetch) outstanding, per cycle, from the time of the L2 miss to when any response is received. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)offcore_response.any_pf_data_rd.any_responsecacheCounts data reads generated by L1 or L2 prefetchers have any transaction responses from the uncore subsystemevent=0xb7,period=100007,umask=1,offcore_rsp=0x000001301000Counts data reads generated by L1 or L2 prefetchers have any transaction responses from the uncore subsystem. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)offcore_response.any_pf_data_rd.l2_hitcacheCounts data reads generated by L1 or L2 prefetchers hit the L2 cacheevent=0xb7,period=100007,umask=1,offcore_rsp=0x000004301000Counts data reads generated by L1 or L2 prefetchers hit the L2 cache. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)offcore_response.any_pf_data_rd.l2_miss.hitm_other_corecacheCounts data reads generated by L1 or L2 prefetchers miss the L2 cache with a snoop hit in the other processor module, data forwarding is requiredevent=0xb7,period=100007,umask=1,offcore_rsp=0x100000301000Counts data reads generated by L1 or L2 prefetchers miss the L2 cache with a snoop hit in the other processor module, data forwarding is required. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)offcore_response.any_pf_data_rd.l2_miss.snoop_miss_or_no_snoop_neededcacheCounts data reads generated by L1 or L2 prefetchers true miss for the L2 cache with a snoop miss in the other processor moduleevent=0xb7,period=100007,umask=1,offcore_rsp=0x020000301000Counts data reads generated by L1 or L2 prefetchers true miss for the L2 cache with a snoop miss in the other processor module.  Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)offcore_response.any_pf_data_rd.outstandingcacheCounts data reads generated by L1 or L2 prefetchers outstanding, per cycle, from the time of the L2 miss to when any response is receivedevent=0xb7,period=100007,umask=1,offcore_rsp=0x400000301000Counts data reads generated by L1 or L2 prefetchers outstanding, per cycle, from the time of the L2 miss to when any response is received. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)offcore_response.any_read.any_responsecacheCounts data read, code read, and read for ownership (RFO) requests (demand & prefetch) have any transaction responses from the uncore subsystemevent=0xb7,period=100007,umask=1,offcore_rsp=0x00000132b700Counts data read, code read, and read for ownership (RFO) requests (demand & prefetch) have any transaction responses from the uncore subsystem. 
Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)offcore_response.any_read.l2_hitcacheCounts data read, code read, and read for ownership (RFO) requests (demand & prefetch) hit the L2 cacheevent=0xb7,period=100007,umask=1,offcore_rsp=0x00000432b700Counts data read, code read, and read for ownership (RFO) requests (demand & prefetch) hit the L2 cache. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)offcore_response.any_read.l2_miss.hitm_other_corecacheCounts data read, code read, and read for ownership (RFO) requests (demand & prefetch) miss the L2 cache with a snoop hit in the other processor module, data forwarding is requiredevent=0xb7,period=100007,umask=1,offcore_rsp=0x10000032b700Counts data read, code read, and read for ownership (RFO) requests (demand & prefetch) miss the L2 cache with a snoop hit in the other processor module, data forwarding is required. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)offcore_response.any_read.l2_miss.snoop_miss_or_no_snoop_neededcacheCounts data read, code read, and read for ownership (RFO) requests (demand & prefetch) true miss for the L2 cache with a snoop miss in the other processor moduleevent=0xb7,period=100007,umask=1,offcore_rsp=0x02000032b700Counts data read, code read, and read for ownership (RFO) requests (demand & prefetch) true miss for the L2 cache with a snoop miss in the other processor module.  Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)offcore_response.any_read.outstandingcacheCounts data read, code read, and read for ownership (RFO) requests (demand & prefetch) outstanding, per cycle, from the time of the L2 miss to when any response is receivedevent=0xb7,period=100007,umask=1,offcore_rsp=0x40000032b700Counts data read, code read, and read for ownership (RFO) requests (demand & prefetch) outstanding, per cycle, from the time of the L2 miss to when any response is received. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)offcore_response.any_request.any_responsecacheCounts requests to the uncore subsystem have any transaction responses from the uncore subsystemevent=0xb7,period=100007,umask=1,offcore_rsp=0x000001800000Counts requests to the uncore subsystem have any transaction responses from the uncore subsystem. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)offcore_response.any_request.l2_hitcacheCounts requests to the uncore subsystem hit the L2 cacheevent=0xb7,period=100007,umask=1,offcore_rsp=0x000004800000Counts requests to the uncore subsystem hit the L2 cache. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)offcore_response.any_request.l2_miss.hitm_other_corecacheCounts requests to the uncore subsystem miss the L2 cache with a snoop hit in the other processor module, data forwarding is requiredevent=0xb7,period=100007,umask=1,offcore_rsp=0x100000800000Counts requests to the uncore subsystem miss the L2 cache with a snoop hit in the other processor module, data forwarding is required. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. 
(duplicated for both MSRs)offcore_response.any_request.l2_miss.snoop_miss_or_no_snoop_neededcacheCounts requests to the uncore subsystem true miss for the L2 cache with a snoop miss in the other processor moduleevent=0xb7,period=100007,umask=1,offcore_rsp=0x020000800000Counts requests to the uncore subsystem true miss for the L2 cache with a snoop miss in the other processor module.  Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)offcore_response.any_request.outstandingcacheCounts requests to the uncore subsystem outstanding, per cycle, from the time of the L2 miss to when any response is receivedevent=0xb7,period=100007,umask=1,offcore_rsp=0x400000800000Counts requests to the uncore subsystem outstanding, per cycle, from the time of the L2 miss to when any response is received. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)offcore_response.any_rfo.any_responsecacheCounts reads for ownership (RFO) requests (demand & prefetch) have any transaction responses from the uncore subsystemevent=0xb7,period=100007,umask=1,offcore_rsp=0x000001002200Counts reads for ownership (RFO) requests (demand & prefetch) have any transaction responses from the uncore subsystem. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)offcore_response.any_rfo.l2_hitcacheCounts reads for ownership (RFO) requests (demand & prefetch) hit the L2 cacheevent=0xb7,period=100007,umask=1,offcore_rsp=0x000004002200Counts reads for ownership (RFO) requests (demand & prefetch) hit the L2 cache. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)offcore_response.any_rfo.l2_miss.hitm_other_corecacheCounts reads for ownership (RFO) requests (demand & prefetch) miss the L2 cache with a snoop hit in the other processor module, data forwarding is requiredevent=0xb7,period=100007,umask=1,offcore_rsp=0x100000002200Counts reads for ownership (RFO) requests (demand & prefetch) miss the L2 cache with a snoop hit in the other processor module, data forwarding is required. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)offcore_response.any_rfo.l2_miss.snoop_miss_or_no_snoop_neededcacheCounts reads for ownership (RFO) requests (demand & prefetch) true miss for the L2 cache with a snoop miss in the other processor moduleevent=0xb7,period=100007,umask=1,offcore_rsp=0x020000002200Counts reads for ownership (RFO) requests (demand & prefetch) true miss for the L2 cache with a snoop miss in the other processor module.  Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)offcore_response.any_rfo.outstandingcacheCounts reads for ownership (RFO) requests (demand & prefetch) outstanding, per cycle, from the time of the L2 miss to when any response is receivedevent=0xb7,period=100007,umask=1,offcore_rsp=0x400000002200Counts reads for ownership (RFO) requests (demand & prefetch) outstanding, per cycle, from the time of the L2 miss to when any response is received. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)offcore_response.bus_locks.any_responsecacheCounts bus lock and split lock requests have any transaction responses from the uncore subsystemevent=0xb7,period=100007,umask=1,offcore_rsp=0x000001040000Counts bus lock and split lock requests have any transaction responses from the uncore subsystem. 
Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)offcore_response.bus_locks.l2_hitcacheCounts bus lock and split lock requests hit the L2 cacheevent=0xb7,period=100007,umask=1,offcore_rsp=0x000004040000Counts bus lock and split lock requests hit the L2 cache. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)offcore_response.bus_locks.l2_miss.hitm_other_corecacheCounts bus lock and split lock requests miss the L2 cache with a snoop hit in the other processor module, data forwarding is requiredevent=0xb7,period=100007,umask=1,offcore_rsp=0x100000040000Counts bus lock and split lock requests miss the L2 cache with a snoop hit in the other processor module, data forwarding is required. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)offcore_response.bus_locks.l2_miss.snoop_miss_or_no_snoop_neededcacheCounts bus lock and split lock requests true miss for the L2 cache with a snoop miss in the other processor moduleevent=0xb7,period=100007,umask=1,offcore_rsp=0x020000040000Counts bus lock and split lock requests true miss for the L2 cache with a snoop miss in the other processor module.  Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)offcore_response.bus_locks.outstandingcacheCounts bus lock and split lock requests outstanding, per cycle, from the time of the L2 miss to when any response is receivedevent=0xb7,period=100007,umask=1,offcore_rsp=0x400000040000Counts bus lock and split lock requests outstanding, per cycle, from the time of the L2 miss to when any response is received. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)offcore_response.corewb.any_responsecacheCounts the number of writeback transactions caused by L1 or L2 cache evictions have any transaction responses from the uncore subsystemevent=0xb7,period=100007,umask=1,offcore_rsp=0x000001000800Counts the number of writeback transactions caused by L1 or L2 cache evictions have any transaction responses from the uncore subsystem. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)offcore_response.corewb.l2_hitcacheCounts the number of writeback transactions caused by L1 or L2 cache evictions hit the L2 cacheevent=0xb7,period=100007,umask=1,offcore_rsp=0x000004000800Counts the number of writeback transactions caused by L1 or L2 cache evictions hit the L2 cache. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)offcore_response.corewb.l2_miss.hitm_other_corecacheCounts the number of writeback transactions caused by L1 or L2 cache evictions miss the L2 cache with a snoop hit in the other processor module, data forwarding is requiredevent=0xb7,period=100007,umask=1,offcore_rsp=0x100000000800Counts the number of writeback transactions caused by L1 or L2 cache evictions miss the L2 cache with a snoop hit in the other processor module, data forwarding is required. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. 
(duplicated for both MSRs)offcore_response.corewb.l2_miss.snoop_miss_or_no_snoop_neededcacheCounts the number of writeback transactions caused by L1 or L2 cache evictions true miss for the L2 cache with a snoop miss in the other processor moduleevent=0xb7,period=100007,umask=1,offcore_rsp=0x020000000800Counts the number of writeback transactions caused by L1 or L2 cache evictions true miss for the L2 cache with a snoop miss in the other processor module.  Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)offcore_response.corewb.outstandingcacheCounts the number of writeback transactions caused by L1 or L2 cache evictions outstanding, per cycle, from the time of the L2 miss to when any response is receivedevent=0xb7,period=100007,umask=1,offcore_rsp=0x400000000800Counts the number of writeback transactions caused by L1 or L2 cache evictions outstanding, per cycle, from the time of the L2 miss to when any response is received. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)offcore_response.demand_code_rd.any_responsecacheCounts demand instruction cacheline and I-side prefetch requests that miss the instruction cache have any transaction responses from the uncore subsystemevent=0xb7,period=100007,umask=1,offcore_rsp=0x000001000400Counts demand instruction cacheline and I-side prefetch requests that miss the instruction cache have any transaction responses from the uncore subsystem. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)offcore_response.demand_code_rd.l2_hitcacheCounts demand instruction cacheline and I-side prefetch requests that miss the instruction cache hit the L2 cacheevent=0xb7,period=100007,umask=1,offcore_rsp=0x000004000400Counts demand instruction cacheline and I-side prefetch requests that miss the instruction cache hit the L2 cache. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)offcore_response.demand_code_rd.l2_miss.hitm_other_corecacheCounts demand instruction cacheline and I-side prefetch requests that miss the instruction cache miss the L2 cache with a snoop hit in the other processor module, data forwarding is requiredevent=0xb7,period=100007,umask=1,offcore_rsp=0x100000000400Counts demand instruction cacheline and I-side prefetch requests that miss the instruction cache miss the L2 cache with a snoop hit in the other processor module, data forwarding is required. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)offcore_response.demand_code_rd.l2_miss.snoop_miss_or_no_snoop_neededcacheCounts demand instruction cacheline and I-side prefetch requests that miss the instruction cache true miss for the L2 cache with a snoop miss in the other processor moduleevent=0xb7,period=100007,umask=1,offcore_rsp=0x020000000400Counts demand instruction cacheline and I-side prefetch requests that miss the instruction cache true miss for the L2 cache with a snoop miss in the other processor module.  Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. 
(duplicated for both MSRs)offcore_response.demand_code_rd.outstandingcacheCounts demand instruction cacheline and I-side prefetch requests that miss the instruction cache outstanding, per cycle, from the time of the L2 miss to when any response is receivedevent=0xb7,period=100007,umask=1,offcore_rsp=0x400000000400Counts demand instruction cacheline and I-side prefetch requests that miss the instruction cache outstanding, per cycle, from the time of the L2 miss to when any response is received. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)offcore_response.demand_data_rd.any_responsecacheCounts demand cacheable data reads of full cache lines have any transaction responses from the uncore subsystemevent=0xb7,period=100007,umask=1,offcore_rsp=0x000001000100Counts demand cacheable data reads of full cache lines have any transaction responses from the uncore subsystem. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)offcore_response.demand_data_rd.l2_hitcacheCounts demand cacheable data reads of full cache lines hit the L2 cacheevent=0xb7,period=100007,umask=1,offcore_rsp=0x000004000100Counts demand cacheable data reads of full cache lines hit the L2 cache. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)offcore_response.demand_data_rd.l2_miss.hitm_other_corecacheCounts demand cacheable data reads of full cache lines miss the L2 cache with a snoop hit in the other processor module, data forwarding is requiredevent=0xb7,period=100007,umask=1,offcore_rsp=0x100000000100Counts demand cacheable data reads of full cache lines miss the L2 cache with a snoop hit in the other processor module, data forwarding is required. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)offcore_response.demand_data_rd.l2_miss.snoop_miss_or_no_snoop_neededcacheCounts demand cacheable data reads of full cache lines true miss for the L2 cache with a snoop miss in the other processor moduleevent=0xb7,period=100007,umask=1,offcore_rsp=0x020000000100Counts demand cacheable data reads of full cache lines true miss for the L2 cache with a snoop miss in the other processor module.  Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)offcore_response.demand_data_rd.outstandingcacheCounts demand cacheable data reads of full cache lines outstanding, per cycle, from the time of the L2 miss to when any response is receivedevent=0xb7,period=100007,umask=1,offcore_rsp=0x400000000100Counts demand cacheable data reads of full cache lines outstanding, per cycle, from the time of the L2 miss to when any response is received. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)offcore_response.demand_rfo.any_responsecacheCounts demand reads for ownership (RFO) requests generated by a write to full data cache line have any transaction responses from the uncore subsystemevent=0xb7,period=100007,umask=1,offcore_rsp=0x000001000200Counts demand reads for ownership (RFO) requests generated by a write to full data cache line have any transaction responses from the uncore subsystem. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. 
(duplicated for both MSRs)offcore_response.demand_rfo.l2_hitcacheCounts demand reads for ownership (RFO) requests generated by a write to full data cache line hit the L2 cacheevent=0xb7,period=100007,umask=1,offcore_rsp=0x000004000200Counts demand reads for ownership (RFO) requests generated by a write to full data cache line hit the L2 cache. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)offcore_response.demand_rfo.l2_miss.hitm_other_corecacheCounts demand reads for ownership (RFO) requests generated by a write to full data cache line miss the L2 cache with a snoop hit in the other processor module, data forwarding is requiredevent=0xb7,period=100007,umask=1,offcore_rsp=0x100000000200Counts demand reads for ownership (RFO) requests generated by a write to full data cache line miss the L2 cache with a snoop hit in the other processor module, data forwarding is required. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)offcore_response.demand_rfo.l2_miss.snoop_miss_or_no_snoop_neededcacheCounts demand reads for ownership (RFO) requests generated by a write to full data cache line true miss for the L2 cache with a snoop miss in the other processor moduleevent=0xb7,period=100007,umask=1,offcore_rsp=0x020000000200Counts demand reads for ownership (RFO) requests generated by a write to full data cache line true miss for the L2 cache with a snoop miss in the other processor module.  Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)offcore_response.demand_rfo.outstandingcacheCounts demand reads for ownership (RFO) requests generated by a write to full data cache line outstanding, per cycle, from the time of the L2 miss to when any response is receivedevent=0xb7,period=100007,umask=1,offcore_rsp=0x400000000200Counts demand reads for ownership (RFO) requests generated by a write to full data cache line outstanding, per cycle, from the time of the L2 miss to when any response is received. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)offcore_response.full_streaming_stores.any_responsecacheCounts full cache line data writes to uncacheable write combining (USWC) memory region and full cache-line non-temporal writes have any transaction responses from the uncore subsystemevent=0xb7,period=100007,umask=1,offcore_rsp=0x000001080000Counts full cache line data writes to uncacheable write combining (USWC) memory region and full cache-line non-temporal writes have any transaction responses from the uncore subsystem. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)offcore_response.full_streaming_stores.l2_hitcacheCounts full cache line data writes to uncacheable write combining (USWC) memory region and full cache-line non-temporal writes hit the L2 cacheevent=0xb7,period=100007,umask=1,offcore_rsp=0x000004080000Counts full cache line data writes to uncacheable write combining (USWC) memory region and full cache-line non-temporal writes hit the L2 cache. Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. 
(duplicated for both MSRs)

All OFFCORE_RESPONSE events below share the encoding event=0xb7, period=100007, umask=1; the offcore_rsp value selects the request type and the response. Each requires MSR_OFFCORE_RESP[0,1] to specify request type and response, and each is duplicated for both MSRs.

offcore_response.full_streaming_stores.* (cache): full cache line data writes to uncacheable write combining (USWC) memory regions and full cache-line non-temporal writes:
  .l2_miss.hitm_other_core (offcore_rsp=0x1000000800): writes that miss the L2 cache with a snoop hit in the other processor module; data forwarding is required
  .l2_miss.snoop_miss_or_no_snoop_needed (offcore_rsp=0x0200000800): true L2 misses with a snoop miss in the other processor module
  .outstanding (offcore_rsp=0x4000000800): requests outstanding, per cycle, from the time of the L2 miss until any response is received

offcore_response.pf_l1_data_rd.* (cache): data cache line reads generated by the hardware L1 data cache prefetcher:
  .any_response (offcore_rsp=0x0000012000): reads that received any transaction response from the uncore subsystem
  .l2_hit (offcore_rsp=0x0000042000): reads that hit the L2 cache
  .l2_miss.hitm_other_core (offcore_rsp=0x1000002000): L2 misses with a snoop hit in the other processor module; data forwarding is required
  .l2_miss.snoop_miss_or_no_snoop_needed (offcore_rsp=0x0200002000): true L2 misses with a snoop miss in the other processor module
  .outstanding (offcore_rsp=0x4000002000): requests outstanding, per cycle, from the L2 miss until any response is received

offcore_response.pf_l2_data_rd.* (cache): data cache line reads generated by the hardware L2 cache prefetcher; the response suffixes have the same meanings as above:
  .any_response (offcore_rsp=0x0000010010)
  .l2_hit (offcore_rsp=0x0000040010)
  .l2_miss.hitm_other_core (offcore_rsp=0x1000000010)
  .l2_miss.snoop_miss_or_no_snoop_needed (offcore_rsp=0x0200000010)
  .outstanding (offcore_rsp=0x4000000010)

offcore_response.pf_l2_rfo.* (cache): reads for ownership (RFO) generated by the L2 prefetcher:
  .any_response (offcore_rsp=0x0000010020)
  .l2_hit (offcore_rsp=0x0000040020)
  .l2_miss.hitm_other_core (offcore_rsp=0x1000000020)
  .l2_miss.snoop_miss_or_no_snoop_needed (offcore_rsp=0x0200000020)
  .outstanding (offcore_rsp=0x4000000020)

offcore_response.streaming_stores.* (cache): any data writes to uncacheable write combining (USWC) memory regions:
  .any_response (offcore_rsp=0x0000014800)
  .l2_hit (offcore_rsp=0x0000044800)
  .l2_miss.hitm_other_core (offcore_rsp=0x1000004800)
  .l2_miss.snoop_miss_or_no_snoop_needed (offcore_rsp=0x0200004800)
  .outstanding (offcore_rsp=0x4000004800)

offcore_response.sw_prefetch.* (cache): data cache line requests by software prefetch instructions:
  .any_response (offcore_rsp=0x0000011000)
  .l2_hit (offcore_rsp=0x0000041000)
  .l2_miss.hitm_other_core (offcore_rsp=0x1000001000)
  .l2_miss.snoop_miss_or_no_snoop_needed (offcore_rsp=0x0200001000)
  .outstanding (offcore_rsp=0x4000001000)
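These OFFCORE_RESPONSE encodings map onto perf's raw event syntax, where the offcore_rsp value is passed as a PMU format parameter (see /sys/bus/event_source/devices/cpu/format/). A minimal Python sketch, assuming a Linux machine with perf(1) installed and a CPU that actually implements these events; the offcore_event helper name is made up for illustration:

    import subprocess

    def offcore_event(offcore_rsp, event=0xB7, umask=0x1):
        # Build a perf raw event descriptor such as
        # cpu/event=0xb7,umask=0x1,offcore_rsp=0x12000/
        return "cpu/event={:#x},umask={:#x},offcore_rsp={:#x}/".format(
            event, umask, offcore_rsp)

    # OFFCORE_RESPONSE.PF_L1_DATA_RD.ANY_RESPONSE from the table above.
    ev = offcore_event(0x0000012000)
    print(ev)

    # Count it system-wide for one second (needs sufficient privileges,
    # i.e. root or a permissive perf_event_paranoid setting).
    subprocess.run(["perf", "stat", "-e", ev, "-a", "--", "sleep", "1"])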
machine_clears.fp_assist (floating point; event=0xc3, period=20003, umask=4): machine clears due to floating point (FP) operations needing assists. For instance, if the result was a floating point denormal, the hardware clears the pipeline and reissues uops to produce the correct IEEE-compliant denormal result.
uops_retired.fpdiv (floating point; event=0xc2, period=2000003, umask=8): floating point divide uops retired (precise event capable; must be precise).
machine_clears.memory_ordering (memory; event=0xc3, period=20003, umask=2): machine clears due to memory ordering issues. These occur when a snoop request arrives and the machine is uncertain whether memory ordering will be preserved, because another core is in the process of modifying the data.
fetch_stall.itlb_fill_pending_cycles (other; event=0x86, period=200003, umask=1): cycles the code fetch stalls while an ITLB miss is outstanding: the decoder queue can accept bytes, but the fetch unit cannot provide them because of the ITLB miss. Note: this is not the same as page walk cycles to retrieve an instruction translation.
br_misp_retired.non_return_ind (pipeline; event=0xc5, period=200003, umask=0xeb): retired mispredicted near indirect jmp or near indirect call instructions, where the taken target address was not what the processor predicted (must be precise).
cpu_clk_unhalted.ref (pipeline; event=0x0, umask=0x03, period=2000003): reference cycles when the core is not halted; uses a (_P)rogrammable general purpose performance counter.
inst_retired.any (pipeline, fixed event; event=0xc0, period=2000003): instructions retired. For instructions consisting of multiple uops, counts the retirement of the last uop; the counter continues counting during hardware interrupts, traps, and inside interrupt handlers. Uses fixed counter 0; a PEBS record cannot be collected for this event (must be precise).
inst_retired.prec_dist (pipeline; event=0xc0, period=2000003): INST_RETIRED.ANY using the Reduced Skid PEBS feature, which shrinks the shadow in which events are not counted and so gives a more unbiased distribution of samples across retired instructions (must be precise).
machine_clears.all (pipeline; event=0xc3, period=20003): machine clears for any reason.
machine_clears.disambiguation (pipeline; event=0xc3, period=20003, umask=8): machine clears due to memory disambiguation: an issued load conflicts with a previous unretired store whose address was unknown at issue time and is later resolved to match the load address.
machine_clears.page_fault (pipeline; event=0xc3, period=20003, umask=0x20): machine clears due to a page fault, covering both I-side and D-side (load/store) page faults. A page fault occurs when a page is not present or on an access violation.
machine_clears.smc (pipeline; event=0xc3, period=20003, umask=1): self-modifying code (SMC) detected: the processor caught a program writing to a code section and had to perform a machine clear because of the modification. SMC causes a severe penalty on all Intel(R) architecture processors.
uops_retired.idiv (pipeline; event=0xc2, period=2000003, umask=0x10): integer divide uops retired (precise event capable; must be precise).
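A common derived metric from these counters is the machine-clear rate per kilo-instruction, analogous to branch MPKI. Purely illustrative Python arithmetic; the counter values are placeholders, not measurements:

    # machine_clears.all and inst_retired.any as read from perf.
    machine_clears_all = 120_000
    inst_retired_any = 3_000_000_000

    clears_pki = machine_clears_all / inst_retired_any * 1000
    print(f"machine clears per kilo-instruction: {clears_pki:.4f}")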
dtlb_load_misses.walk_completed_1gb (virtual memory; event=0x8, period=200003, umask=8): page walks completed due to demand data loads (including SW prefetches) whose address translations missed all TLB levels and mapped to 1GB pages; walks can end with or without a page fault.
dtlb_load_misses.walk_completed_2m_4m (event=0x8, period=200003, umask=4): the same, for 2M or 4M pages.
dtlb_load_misses.walk_completed_4k (event=0x8, period=200003, umask=2): the same, for 4K pages.
dtlb_load_misses.walk_pending (event=0x8, period=200003, umask=0x10): counts once per cycle for each outstanding page walk due to a load (demand data loads or SW prefetches), including cycles spent traversing the Extended Page Table (EPT). Average cycles per walk is this count divided by the number of walks.
dtlb_store_misses.walk_completed_1gb (virtual memory; event=0x49, period=2000003, umask=8): page walks completed due to demand data stores whose translations missed the TLB and mapped to 1GB pages; walks can end with or without a page fault.
dtlb_store_misses.walk_completed_2m_4m (event=0x49, period=2000003, umask=4): the same, for 2M or 4M pages.
dtlb_store_misses.walk_completed_4k (event=0x49, period=2000003, umask=2): the same, for 4K pages.
dtlb_store_misses.walk_pending (event=0x49, period=200003, umask=0x10): counts once per cycle for each outstanding page walk due to a demand data store, including EPT traversal cycles.
ept.walk_pending (virtual memory; event=0x4f, period=200003, umask=0x10): counts once per cycle for each page walk only while traversing the EPT, not during the rest of the translation. The EPT translates guest-physical addresses to physical addresses for Virtual Machine Monitors (VMMs).
itlb_misses.walk_completed_1gb (virtual memory; event=0x85, period=2000003, umask=8): page walks completed due to instruction fetches whose translations missed the TLB and mapped to 1GB pages.
itlb_misses.walk_completed_2m_4m (event=0x85, period=2000003, umask=4): the same, for 2M or 4M pages.
itlb_misses.walk_completed_4k (event=0x85, period=2000003, umask=2): the same, for 4K pages.
itlb_misses.walk_pending (event=0x85, period=200003, umask=0x10): counts once per cycle for each outstanding page walk due to an instruction fetch, including EPT traversal cycles.
tlb_flushes.stlb_any (virtual memory; event=0xbd, period=20003, umask=0x20): STLB flushes. The TLBs are flushed on instructions like INVLPG and MOV to CR3.
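As the *_WALK_PENDING descriptions note, average cycles per page walk falls out of dividing the pending count by the number of completed walks. A small Python sketch with invented counter values:

    # Placeholder counts for the dtlb_load_misses.* events above.
    walk_pending = 1_200_000
    walks_completed = (
        15_000   # walk_completed_4k
        + 2_000  # walk_completed_2m_4m
        + 50     # walk_completed_1gb
    )
    avg_cycles_per_walk = walk_pending / walks_completed
    print(f"average cycles per demand-load page walk: {avg_cycles_per_walk:.1f}")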
mem_bound_stalls_ifetch.all (cache; event=0x35, period=1000003, umask=0x7f): unhalted cycles the core is stalled due to an instruction cache or TLB miss.
mem_bound_stalls_ifetch.l2_hit (event=0x35, period=1000003, umask=1): cycles stalled on an instruction cache or Translation Lookaside Buffer (TLB) miss that hit in the L2 cache.
mem_bound_stalls_ifetch.llc_hit (event=0x35, period=1000003, umask=6): unhalted cycles stalled on an icache or ITLB miss that hit in the LLC.
mem_bound_stalls_ifetch.llc_miss (event=0x35, period=1000003, umask=0x78): unhalted cycles stalled on an icache or ITLB miss that missed all the caches.
mem_bound_stalls_load.all (cache; event=0x34, period=1000003, umask=0x7f): unhalted cycles the core is stalled due to an L1 demand load miss.
mem_bound_stalls_load.l2_hit (event=0x34, period=1000003, umask=1): cycles stalled on a demand load that hit in the L2 cache.
mem_bound_stalls_load.llc_hit (event=0x34, period=1000003, umask=6): unhalted cycles stalled on a demand load miss that hit in the LLC.
mem_bound_stalls_load.llc_miss (event=0x34, period=1000003, umask=0x78): unhalted cycles stalled on a demand load miss that missed all the local caches.
mem_load_uops_l3_miss_retired.local_dram (cache; event=0xd3, period=1000003, umask=1): load ops retired that missed the L3 cache and hit in DRAM.
mem_load_uops_retired.* (cache; event=0xd1, period=200003): retired load ops, broken down by where they hit:
  .l1_hit (umask=1), .l1_miss (umask=0x40), .l2_hit (umask=2), .l2_miss (umask=0x80), .l3_hit (umask=0x1c)
  .wcb_hit (umask=0x20): loads that hit in a write combining buffer (WCB), excluding the first load that caused the WCB to allocate
mem_uops_retired.all_loads (cache; event=0xd0, period=200003, umask=0x81): load ops retired (supports address when precise).
mem_uops_retired.all_stores (cache; event=0xd0, period=200003, umask=0x82): store ops retired (supports address when precise).
mem_uops_retired.load_latency_gt_* (cache; event=0xd0, period=1000003, umask=5): tagged load uops retired whose latency exceeds the threshold defined in MEC_CR_PEBS_LD_LAT_THRESHOLD; only counts with PEBS enabled (supports address when precise). The ldlat parameter carries the threshold: gt_4 ldlat=0x4, gt_8 ldlat=0x8, gt_16 ldlat=0x10, gt_32 ldlat=0x20, gt_64 ldlat=0x40, gt_128 ldlat=0x80, gt_256 ldlat=0x100, gt_512 ldlat=0x200, gt_1024 ldlat=0x400, gt_2048 ldlat=0x800.
mem_uops_retired.lock_loads (event=0xd0, period=200003, umask=0x21): load uops retired that performed one or more locks (supports address when precise).
mem_uops_retired.split (event=0xd0, period=200003, umask=0x43): memory uops retired that were splits (supports address when precise).
mem_uops_retired.split_loads (umask=0x41) and mem_uops_retired.split_stores (umask=0x42): retired split load and split store uops (supports address when precise).
mem_uops_retired.store_latency (event=0xd0, period=1000003, umask=6): store uops retired, same as MEM_UOPS_RETIRED.ALL_STORES (supports address when precise).
topdown_fe_bound.icache (cache; event=0x71, period=1000003, umask=0x20): issue slots per cycle not delivered by the frontend due to an icache miss.
arith.fpdiv_active (floating point; event=0xcd, cmask=1, period=1000003, umask=2): cycles when any of the floating point dividers are active.
fp_flops_retired.all (floating point; event=0xc8, period=1000003, umask=3): all types of floating point operations per uop, with all default weighting.
fp_flops_retired.dp (floating point; event=0xc8, period=1000003, umask=1): deprecated; alias to FP_FLOPS_RETIRED.FP64.
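All LOAD_LATENCY_GT_* variants are one encoding parameterized by the ldlat threshold, so the whole family can be generated rather than listed by hand. A sketch assuming perf(1) raw event syntax with the ldlat format and PEBS support ('pp' requests precise sampling); the helper name is hypothetical:

    def load_latency_event(threshold_cycles):
        # event=0xd0, umask=0x5 are shared by every LOAD_LATENCY_GT_*
        # variant; only the ldlat threshold differs.
        return "cpu/event=0xd0,umask=0x5,ldlat={:#x}/pp".format(threshold_cycles)

    for t in (4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048):
        print(load_latency_event(t))

    # e.g. sample loads slower than 128 cycles, with data addresses (-d):
    #   perf record -e cpu/event=0xd0,umask=0x5,ldlat=0x80/pp -d -- ./workload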
fp_flops_retired.fp32 (floating point; event=0xc8, period=1000003, umask=2): floating point operations that produce 32-bit single precision results (alias to FP_FLOPS_RETIRED.SP).
fp_flops_retired.fp64 (floating point; event=0xc8, period=1000003, umask=1): floating point operations that produce 64-bit double precision results (alias to FP_FLOPS_RETIRED.DP).
fp_flops_retired.sp (floating point; event=0xc8, period=1000003, umask=2): deprecated; alias to FP_FLOPS_RETIRED.FP32.
fp_inst_retired.128b_dp (floating point; event=0xc7, period=1000003, umask=8): total floating point retired instructions.
fp_inst_retired.128b_sp (event=0xc7, period=1000003, umask=4): retired instructions whose sources are a packed 128-bit single precision floating point; this may be SSE or AVX.128 operations.
fp_inst_retired.256b_dp (event=0xc7, period=1000003, umask=0x20): retired instructions whose sources are a packed 256-bit double precision floating point.
fp_inst_retired.32b_sp (event=0xc7, period=1000003, umask=1): retired instructions whose sources are a scalar 32-bit single precision floating point.
fp_inst_retired.64b_dp (event=0xc7, period=1000003, umask=2): retired instructions whose sources are a scalar 64-bit double precision floating point.
uops_retired.fpdiv (floating point; event=0xc2, period=2000003, umask=8): floating point divide uops retired (x87 and SSE, including x87 sqrt).
frontend_retired.itlb_miss (frontend; event=0xc6, period=1000003, umask=0x10): retired instructions tagged because empty issue slots were seen before the uop, due to an ITLB miss.
icache.accesses (frontend; event=0x80, period=200003, umask=3): every time the code stream enters a new cache line, either by walking sequentially from the previous line or by being redirected by a jump.
icache.misses (frontend; event=0x80, period=200003, umask=2): the same, when the instruction cache bytes are not present.
misalign_mem_ref.load_page_split (memory; event=0x13, period=200003, umask=2): misaligned loads that are 4K page splits.
misalign_mem_ref.store_page_split (memory; event=0x13, period=200003, umask=4): misaligned stores that are 4K page splits.
ocr.demand_data_rd.l3_miss (memory; event=0xb7, period=100003, umask=1, offcore_rsp=0x3FBFC00001): demand data reads that were not supplied by the L3 cache.
ocr.demand_rfo.l3_miss (memory; event=0xb7, period=100003, umask=1, offcore_rsp=0x3FBFC00002): demand reads for ownership (RFO) and software prefetches for exclusive ownership (PREFETCHW) that were not supplied by the L3 cache.
lbr_inserts.any (other; event=0xe4, period=1000003, umask=1): deprecated; alias to MISC_RETIRED.LBR_INSERTS.
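Since FP_FLOPS_RETIRED.FP32 and .FP64 count operations directly, achieved FLOP/s over a measurement window is just their sum divided by elapsed time. Illustrative Python with placeholder readings:

    # Invented counter values over a 2-second window.
    fp32_ops = 4_800_000_000   # fp_flops_retired.fp32
    fp64_ops = 1_200_000_000   # fp_flops_retired.fp64
    elapsed_seconds = 2.0

    gflops = (fp32_ops + fp64_ops) / elapsed_seconds / 1e9
    print(f"~{gflops:.2f} GFLOP/s retired")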
serialization.c01_ms_scb (other; event=0x75, period=200003, umask=4): issue slots in a UMWAIT or TPAUSE instruction where no uop issues, because the instruction put the CPU into the C0.1 activity state.
arith.div_active (pipeline; event=0xcd, cmask=1, period=1000003, umask=3): cycles when any of the dividers are active.
br_inst_retired.all_branches (pipeline; event=0xc4, period=200003): total branch instructions retired, for all branch types: the instruction pointer (IP) is resteered by a branch instruction and the branch successfully retires.
br_inst_retired.cond (event=0xc4, period=200003, umask=0x7e): retired JCC (jump on conditional code) branch instructions, both taken and not taken.
br_inst_retired.cond_taken (umask=0xfe): taken JCC branch instructions retired.
br_inst_retired.far_branch (umask=0xbf): far branch instructions retired, including far jump, far call and return, and interrupt call and return.
br_inst_retired.indirect (umask=0xeb): near indirect JMP and near indirect CALL branch instructions retired.
br_inst_retired.indirect_call (umask=0xfb): near indirect CALL branch instructions retired.
br_inst_retired.ind_call (umask=0xfb): deprecated; refer to the new event BR_INST_RETIRED.INDIRECT_CALL.
br_inst_retired.near_call (umask=0xf9): near CALL branch instructions retired.
br_inst_retired.near_return (umask=0xf7): near RET branch instructions retired.
br_misp_retired.all_branches (pipeline; event=0xc5, period=200003): total mispredicted branch instructions retired, for all branch types. Predicting the branch target address lets the processor begin executing instructions before the non-speculative execution path is known; the branch prediction unit (BPU) predicts the target from the branch's IP and the execution path through which that IP was reached. A branch misprediction discards all instructions executed on the speculative path and re-fetches from the correct path.
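The ratio of these two counters is the standard branch misprediction rate; together with INST_RETIRED.ANY it also yields mispredicts per kilo-instruction. Illustrative arithmetic with placeholder values:

    br_retired = 900_000_000       # br_inst_retired.all_branches
    br_mispredicted = 9_500_000    # br_misp_retired.all_branches
    instructions = 5_000_000_000   # inst_retired.any

    misp_rate = br_mispredicted / br_retired
    branch_mpki = br_mispredicted / instructions * 1000
    print(f"misprediction rate: {misp_rate:.2%}, branch MPKI: {branch_mpki:.2f}")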
br_misp_retired.cond (pipeline; event=0xc5, period=200003, umask=0x7e): mispredicted JCC branch instructions retired.
br_misp_retired.cond_taken (umask=0xfe): mispredicted taken JCC branch instructions retired.
br_misp_retired.indirect (umask=0xeb): mispredicted near indirect JMP and near indirect CALL branch instructions retired.
br_misp_retired.indirect_call (umask=0xfb): mispredicted near indirect CALL branch instructions retired.
br_misp_retired.near_taken (umask=0x80): mispredicted near taken branch instructions retired.
br_misp_retired.return (umask=0xf7): mispredicted near RET branch instructions retired.
cpu_clk_unhalted.core (pipeline; event=0x3c, period=2000003): fixed counter: unhalted core clock cycles.
cpu_clk_unhalted.core_p (pipeline; event=0x3c, period=2000003): unhalted core clock cycles (alias to CPU_CLK_UNHALTED.THREAD_P).
cpu_clk_unhalted.ref_tsc (pipeline; event=0x0, period=2000003, umask=3): fixed counter: unhalted reference clock cycles.
cpu_clk_unhalted.thread (pipeline; event=0x3c, period=2000003): fixed counter: unhalted core clock cycles.
cpu_clk_unhalted.thread_p (pipeline; event=0x3c, period=2000003): unhalted core clock cycles (alias to CPU_CLK_UNHALTED.CORE_P).
inst_retired.any (pipeline; event=0xc0, period=2000003): fixed counter: instructions retired (precise event).
inst_retired.any_p (pipeline; event=0xc0, period=2000003): instructions retired.
ld_blocks.address_alias (pipeline; event=0x3, period=1000003, umask=4): retired loads blocked because they initially appeared store-forward blocked, but the 4K alias check later showed they were not.
ld_blocks.data_unknown (event=0x3, period=1000003, umask=1): retired loads blocked because the address exactly matches an older store whose data is not yet ready.
ld_blocks.store_forward (event=0x3, period=1000003, umask=2): retired loads blocked because the address partially overlaps an older store.
misc_retired.lbr_inserts (pipeline; event=0xe4, period=1000003, umask=1): Last Branch Record (LBR) entries; requires LBRs to be enabled and configured in IA32_LBR_CTL (alias to LBR_INSERTS.ANY).
topdown_bad_speculation.all (pipeline; event=0x73, period=1000003): issue slots not consumed by the backend because allocation is stalled on a mispredicted jump or a machine clear. Only slots wasted by fast nukes such as memory ordering nukes are counted; other nukes are not accounted for. Counts all issue slots blocked during the recovery window, including relevant microcode flows, while uops are not yet available in the instruction queue (IQ), or until an FE_BOUND event other than OTHER and CISC occurs; also includes slots consumed by the backend but thrown away because they were younger than the mispredict or machine clear (alias to TOPDOWN_BAD_SPECULATION.ALL_P).
topdown_bad_speculation.all_p (event=0x73, period=1000003): the same; alias to TOPDOWN_BAD_SPECULATION.ALL.
topdown_bad_speculation.fastnuke (event=0x73, period=1000003, umask=2): issue slots per cycle not consumed by the backend due to fast nukes, such as memory ordering machine clears and MRN nukes.
topdown_bad_speculation.mispredict (umask=4): issue slots per cycle not consumed by the backend due to branch mispredicts.
topdown_be_bound.all (pipeline; event=0x74, period=1000003): retirement slots not consumed due to backend stalls (alias to TOPDOWN_BE_BOUND.ALL_P).
topdown_be_bound.alloc_restrictions (event=0x74, period=1000003, umask=1): issue slots per cycle not consumed by the backend due to certain allocation restrictions.
topdown_be_bound.all_p (event=0x74, period=1000003): alias to TOPDOWN_BE_BOUND.ALL.
topdown_be_bound.mem_scheduler (umask=2): memory reservation stalls (the scheduler cannot accept another uop), caused by the RSV being full or a load/store buffer block.
topdown_be_bound.non_mem_scheduler (umask=8): IEC and FPC RAT stalls, i.e. FIQ and IEC reservation station stalls (the integer, FP and SIMD schedulers cannot accept another uop).
topdown_be_bound.register (umask=0x20): marble stalls; a 'marble' is a physical register file entry, also known as the physical destination (PDST).
topdown_be_bound.reorder_buffer (umask=0x40): ROB full.
topdown_be_bound.serialization (umask=0x10): IQ/JEU scoreboards or MS scoreboard.
topdown_fe_bound.all (pipeline; event=0x71, period=1000003): retirement slots not consumed due to frontend stalls (alias to TOPDOWN_FE_BOUND.ALL_P).
topdown_fe_bound.all_p (event=0x71, period=1000003): alias to TOPDOWN_FE_BOUND.ALL.
topdown_fe_bound.branch_detect (umask=2): issue slots per cycle not delivered by the frontend due to BAClears.
topdown_fe_bound.branch_resteer (umask=0x40): due to BTClears.
topdown_fe_bound.cisc (umask=1): due to the MS (microcode sequencer).
topdown_fe_bound.decode (umask=8): due to decode stalls.
topdown_fe_bound.frontend_latency (umask=0x72): latency-related stalls, including BACLEARs, BTCLEARs, ITLB misses, and ICache misses.
topdown_fe_bound.itlb (umask=0x10): deprecated; alias to TOPDOWN_FE_BOUND.ITLB_MISS.
topdown_fe_bound.itlb_miss (umask=0x10): due to ITLB misses (alias to TOPDOWN_FE_BOUND.ITLB).
topdown_fe_bound.other (umask=0x80): frontend stalls that do not fall into any other common category.
topdown_fe_bound.predecode (umask=4): due to predecode wrong.
topdown_retiring.all (pipeline; event=0x72, period=1000003): consumed retirement slots (alias to TOPDOWN_RETIRING.ALL_P).
topdown_retiring.all_p (event=0x72, period=1000003): alias to TOPDOWN_RETIRING.ALL.
uops_issued.any (pipeline; event=0xe, period=1000003): uops issued by the front end every cycle; when 4 uops are requested and only 2 are delivered, the event counts 2. Correlates with the number of ROB entries: a uop that takes 2 ROB slots counts as 2.
uops_retired.all (pipeline; event=0xc2, period=2000003): total uops retired.
uops_retired.idiv (event=0xc2, period=2000003, umask=0x10): integer divide uops retired.
uops_retired.ms (event=0xc2, period=2000003, umask=1): uops from complex flows issued by the micro-sequencer (MS), including flows due to complex instructions, faults, assists, and inserted flows.
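The four TOPDOWN_* families partition total issue slots, so dividing each by their sum gives the familiar level-1 top-down breakdown. A minimal sketch with invented counts:

    # Placeholder slot counts from the four TOPDOWN_*.ALL events.
    retiring = 2_100_000_000   # topdown_retiring.all
    bad_spec = 300_000_000     # topdown_bad_speculation.all
    fe_bound = 900_000_000     # topdown_fe_bound.all
    be_bound = 1_700_000_000   # topdown_be_bound.all

    total_slots = retiring + bad_spec + fe_bound + be_bound
    for name, slots in (("retiring", retiring),
                        ("bad speculation", bad_spec),
                        ("frontend bound", fe_bound),
                        ("backend bound", be_bound)):
        print(f"{name:16s} {slots / total_slots:6.1%}")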
uops_retired.x87 (pipeline; event=0xc2, period=2000003, umask=2): x87 uops retired, including those in MS flows.
unc_chacms_clockticks (uncore cache; event=0x1): clockticks for CMS units attached to the CHA.
unc_cha_clockticks (uncore cache; event=0x1): CHA clock cycles while the event is enabled; clockticks of the uncore caching and home agent (CHA).
unc_cha_distress_asserted.* (uncore cache; event=0x59): distress signal assertion for dynamic prefetch throttle (DPT), i.e. the threshold for distress signal assertion was reached (immediate cause for triggering):
  .dpt_any (umask=3): in TOR or IRQ
  .dpt_irq (umask=1): in IRQ
  .dpt_tor (umask=2): in TOR
unc_cha_imc_writes_count.* (uncore cache; event=0x5b): writes issued from the CHA/HA into the memory controller channels:
  .full (umask=1): normal (non-isochronous) full-line writes
  .full_priority (umask=4): ISOCH full-line writes
  .partial (umask=2): non-ISOCH partial writes
  .partial_priority (umask=8): ISOCH partial writes
unc_cha_llc_lookup.* (uncore cache; event=0x34): LLC cache lookups, including code, data, prefetches and hints coming from L2, with numerous filters available. Note the non-standard filtering equation: umask bit 0 must ALWAYS be set, and one or more states must be selected to match, otherwise the event counts nothing; CHAFilter0[24:21,17] bits correspond to the [FMESI] state. Requests that look up the cache multiple times count with multiple increments:
  .code (umask=0x1bd0ff): CRd requests
  .data_rd (umask=0x1bc1ff): read requests and read prefetches
  .data_read_all (umask=0x1fc1ff): read requests, read prefetches, and snoops (all data reads)
  .data_read_local (umask=0x841ff): read requests to locally homed memory (demand data reads, core and LLC prefetches)
  .data_read_miss (umask=0x1fc101): read requests, read prefetches, and snoops that miss the cache
  .locally_homed_address (umask=0xbdfff): all requests to locally homed memory (transactions homed locally)
  .local_code (umask=0x19d0ff): code read requests and code read prefetches to locally homed memory
  .local_data_rd (umask=0x19c1ff): read requests and read prefetches to locally homed memory
  .local_dmnd_code (umask=0x1850ff): code read requests to locally homed memory
  .local_dmnd_data_rd (umask=0x1841ff): demand read requests to locally homed memory
  .local_dmnd_rfo (umask=0x1848ff): RFO requests to locally homed memory
  .local_llc_pf (umask=0x189dff): LLC prefetch requests to locally homed memory
  .local_pf (umask=0x199dff): all prefetches to locally homed memory
  .local_pf_code (umask=0x1910ff): code prefetches to locally homed memory
  .local_pf_data_rd (umask=0x1981ff): read prefetches to locally homed memory
  .local_pf_rfo (umask=0x1908ff): RFO prefetches to locally homed memory
  .local_rfo (umask=0x19c8ff): RFO requests and RFO prefetches to locally homed memory
  .rfo (umask=0x1bc8ff): all RFOs, demand and prefetches
  .rfo_local (umask=0x9c8ff): locally homed RFOs, demand and prefetches
  .write_local (umask=0x842ff): writes to locally homed memory (includes writebacks from L1/L2)
unc_cha_llc_victims.* (uncore cache; event=0x37): lines victimized on a fill, filterable by the state the line was in:
  .all (umask=0xf): all lines victimized
  .ia (umask=0x20): IA traffic
  .io (umask=0x10): IO traffic
  .local_all (umask=0x200f): local, all lines
  .local_e (umask=0x2002): local, lines in E state
  .local_f (umask=0x2008): local, lines in F state
  .local_m (umask=0x2001): local, lines in M state
  .local_s (umask=0x2004): local, lines in S state
  .total_e (umask=0x2): lines in E state
  .total_m (umask=0x1): lines in M state
  .total_s (umask=0x4): lines in S state
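From the lookup family above, an LLC data-read miss ratio is simply misses over all data-read lookups; both are event=0x34 and differ only in umask. Illustrative Python (a real measurement would sum counts over all CHA instances):

    # Placeholder counts for two unc_cha_llc_lookup.* events.
    data_read_all = 50_000_000    # .data_read_all  (umask=0x1fc1ff)
    data_read_miss = 6_000_000    # .data_read_miss (umask=0x1fc101)

    miss_ratio = data_read_miss / data_read_all
    print(f"LLC data-read miss ratio: {miss_ratio:.2%}")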
unc_cha_misc.rfo_hit_s (uncore cache; event=0x39, umask=8): RFO (the read for ownership issued before a write) requests that hit a cache line in the S (shared) state (Cbo Misc: RFO HitS).
unc_cha_osb.* (uncore cache; event=0x55): OSB snoop broadcasts; counts 1 per request causing OSB snoops to be broadcast, not all the snoops generated by OSB:
  .local_invitoe (umask=1): local InvItoE
  .local_read (umask=2): local Rd
  .off_pwrheuristic (umask=0x20): off (power heuristic)
  .rfo_hits_snp_bcast (umask=0x10): RFO HitS snoop broadcast
unc_cha_requests.* (uncore cache; event=0x50): HA read and write requests:
  .invitoe (umask=0x30): requests from a unit on this socket for exclusive ownership of a cache line without receiving data (InvItoE)
  .invitoe_local (umask=0x10): the same, local
  .reads (umask=3): read requests made into this CHA; includes all read opcodes, including RFO (the read for ownership issued before a write)
  .reads_local (umask=1): read requests coming from a unit on this socket
  .writes (umask=0xc): write requests made into the CHA, including streaming, evictions, HitM (reads from another core to a modified cache line), etc.
  .writes_local (umask=4): the same, coming from a unit on this socket
unc_cha_tor_inserts.* (uncore cache; event=0x35): TOR inserts:
  .all (umask=0xc001ffff): all
  .ia (umask=0xc001ff01): all locally initiated requests from iA cores
  .ia_clflush (umask=0xc8c7ff01): CLFlushes issued by iA cores
  .ia_clflushopt (umask=0xc8d7ff01): CLFlushOpts issued by iA cores
  .ia_crd (umask=0xc80fff01): CRds (code reads) issued by iA cores
  .ia_crd_pref (umask=0xc88fff01): code read prefetches from local iA
  .ia_drd_opt (umask=0xc827ff01): DRd_Opts (data read opt) issued by iA cores
  .ia_drd_opt_pref (umask=0xc8a7ff01): DRd_Opt_Prefs issued by iA cores
  .ia_hit (umask=0xc001fd01): all requests from iA cores that hit the LLC
  .ia_hit_crd (umask=0xc80ffd01): CRds that hit the LLC
  .ia_hit_crd_pref (umask=0xc88ffd01): CRd_Prefs that hit the LLC
  .ia_hit_drd_opt (umask=0xc827fd01): DRd_Opts that hit the LLC
  .ia_hit_drd_opt_pref (umask=0xc8a7fd01): DRd_Opt_Prefs that hit the LLC
  .ia_hit_itom (umask=0xcc47fd01): ItoMs that hit the LLC
  .ia_hit_llcprefcode (umask=0xcccffd01): LLC code read prefetches that hit the LLC
  .ia_hit_llcprefdata (umask=0xccd7fd01): LLC data read prefetches that hit the LLC
  .ia_hit_llcprefrfo (umask=0xccc7fd01): LLC RFO prefetches that hit the LLC
LLCunc_cha_tor_inserts.ia_hit_rfouncore cacheRead for ownership from local IA that hit the cacheevent=0x35,umask=0xc807fd0101TOR Inserts : RFOs issued by iA Cores that Hit the LLCunc_cha_tor_inserts.ia_hit_rfo_prefuncore cacheRead for ownership prefetch from local IA that hit the cacheevent=0x35,umask=0xc887fd0101TOR Inserts : RFO_Prefs issued by iA Cores that Hit the LLCunc_cha_tor_inserts.ia_itomuncore cacheItoM events that are initiated from the Coreevent=0x35,umask=0xcc47ff0101TOR Inserts : ItoMs issued by iA Coresunc_cha_tor_inserts.ia_itomcachenearuncore cacheItoMCacheNear requests from local IA coresevent=0x35,umask=0xcd47ff0101TOR Inserts : ItoMCacheNears issued by iA Coresunc_cha_tor_inserts.ia_llcprefcodeuncore cacheLast level cache prefetch code read from local IAevent=0x35,umask=0xcccfff0101TOR Inserts : LLCPrefCode issued by iA Coresunc_cha_tor_inserts.ia_llcprefdatauncore cacheLast level cache prefetch data read from local IAevent=0x35,umask=0xccd7ff0101TOR Inserts : LLCPrefData issued by iA Coresunc_cha_tor_inserts.ia_llcprefrfouncore cacheLast level cache prefetch read for ownership from local IA that miss the cacheevent=0x35,umask=0xccc7ff0101TOR Inserts : LLCPrefRFO issued by iA Coresunc_cha_tor_inserts.ia_missuncore cacheAll locally initiated requests from IA Cores which miss the cacheevent=0x35,umask=0xc001fe0101TOR Inserts : All requests from iA Cores that Missed the LLCunc_cha_tor_inserts.ia_miss_crduncore cacheCode read from local IA that miss the cacheevent=0x35,umask=0xc80ffe0101TOR Inserts : CRds issued by iA Cores that Missed the LLCunc_cha_tor_inserts.ia_miss_crd_localuncore cacheCRDs from local IA cores to locally homed memoryevent=0x35,umask=0xc80efe0101TOR Inserts : CRd issued by iA Cores that Missed the LLC - HOMed locallyunc_cha_tor_inserts.ia_miss_crd_prefuncore cacheCode read prefetch from local IA that miss the cacheevent=0x35,umask=0xc88ffe0101TOR Inserts : CRd_Prefs issued by iA Cores that Missed the LLCunc_cha_tor_inserts.ia_miss_crd_pref_localuncore cacheCRD Prefetches from local IA cores to locally homed memoryevent=0x35,umask=0xc88efe0101TOR Inserts : CRd_Prefs issued by iA Cores that Missed the LLC - HOMed locallyunc_cha_tor_inserts.ia_miss_drd_optuncore cacheData read opt from local IA that miss the cacheevent=0x35,umask=0xc827fe0101TOR Inserts : DRd_Opt issued by iA Cores that missed the LLCunc_cha_tor_inserts.ia_miss_drd_opt_localuncore cacheInserts into the TOR from local IA cores which miss the LLC and snoop filter with the opcode DRd_Opt, and which target local memoryevent=0x35,umask=0xc826fe0101TOR Inserts : DRd_Opt issued by iA Cores that Missed the LLC - HOMed locallyunc_cha_tor_inserts.ia_miss_drd_opt_prefuncore cacheData read opt prefetch from local IA that miss the cacheevent=0x35,umask=0xc8a7fe0101TOR Inserts : DRd_Opt_Prefs issued by iA Cores that missed the LLCunc_cha_tor_inserts.ia_miss_drd_opt_pref_localuncore cacheInserts into the TOR from local IA cores which miss the LLC and snoop filter with the opcode DRD_PREF_OPT, and target local memoryevent=0x35,umask=0xc8a6fe0101TOR Inserts : DRd_Opt_Prefs issued by iA Cores that missed the LLCunc_cha_tor_inserts.ia_miss_itomuncore cacheItoM requests from local IA cores that miss the cacheevent=0x35,umask=0xcc47fe0101TOR Inserts : ItoMs issued by iA Cores that Missed LLCunc_cha_tor_inserts.ia_miss_llcprefcodeuncore cacheLast level cache prefetch code read from local IA that miss the cacheevent=0x35,umask=0xcccffe0101TOR Inserts : LLCPrefCode issued by iA Cores that missed the 
LLCunc_cha_tor_inserts.ia_miss_llcprefdatauncore cacheLast level cache prefetch data read from local IA that miss the cacheevent=0x35,umask=0xccd7fe0101TOR Inserts : LLCPrefData issued by iA Cores that missed the LLCunc_cha_tor_inserts.ia_miss_llcprefrfouncore cacheLast level cache prefetch read for ownership from local IA that miss the cacheevent=0x35,umask=0xccc7fe0101TOR Inserts : LLCPrefRFO issued by iA Cores that missed the LLCunc_cha_tor_inserts.ia_miss_local_wcilf_ddruncore cacheWCILF requests from local IA cores to locally homed DDR addresses that miss the cacheevent=0x35,umask=0xc866860101TOR Inserts : WCiLFs issued by iA Cores targeting DDR that missed the LLC - HOMed locallyunc_cha_tor_inserts.ia_miss_local_wcil_ddruncore cacheWCIL requests from local IA cores to locally homed DDR addresses that miss the cacheevent=0x35,umask=0xc86e860101TOR Inserts : WCiLs issued by iA Cores targeting DDR that missed the LLC - HOMed locallyunc_cha_tor_inserts.ia_miss_rfouncore cacheRead for ownership from local IA that miss the cacheevent=0x35,umask=0xc807fe0101TOR Inserts : RFOs issued by iA Cores that Missed the LLCunc_cha_tor_inserts.ia_miss_rfo_localuncore cacheRead for ownership from local IA that miss the cacheevent=0x35,umask=0xc806fe0101TOR Inserts : RFOs issued by iA Cores that Missed the LLC - HOMed locallyunc_cha_tor_inserts.ia_miss_rfo_prefuncore cacheRead for ownership prefetch from local IA that miss the cacheevent=0x35,umask=0xc887fe0101TOR Inserts : RFO_Prefs issued by iA Cores that Missed the LLCunc_cha_tor_inserts.ia_miss_rfo_pref_localuncore cacheRead for ownership prefetch from local IA that miss the cacheevent=0x35,umask=0xc886fe0101TOR Inserts : RFO_Prefs issued by iA Cores that Missed the LLC - HOMed locallyunc_cha_tor_inserts.ia_miss_ucrdfuncore cacheUCRDF requests from local IA cores that miss the cacheevent=0x35,umask=0xc877de0101TOR Inserts : UCRdFs issued by iA Cores that Missed LLCunc_cha_tor_inserts.ia_miss_wciluncore cacheWCIL requests from a local IA core that miss the cacheevent=0x35,umask=0xc86ffe0101TOR Inserts : WCiLs issued by iA Cores that Missed the LLCunc_cha_tor_inserts.ia_miss_wcilfuncore cacheWCILF requests from local IA core that miss the cacheevent=0x35,umask=0xc867fe0101TOR Inserts : WCiLF issued by iA Cores that Missed the LLCunc_cha_tor_inserts.ia_miss_wcilf_ddruncore cacheWCILF requests from local IA cores to DDR homed addresses which miss the cacheevent=0x35,umask=0xc867860101TOR Inserts : WCiLFs issued by iA Cores targeting DDR that missed the LLCunc_cha_tor_inserts.ia_miss_wcil_ddruncore cacheWCIL requests from local IA cores to DDR homed addresses which miss the cacheevent=0x35,umask=0xc86f860101TOR Inserts : WCiLs issued by iA Cores targeting DDR that missed the LLCunc_cha_tor_inserts.ia_miss_wiluncore cacheWIL requests from local IA cores that miss the cacheevent=0x35,umask=0xc87fde0101TOR Inserts : WiLs issued by iA Cores that Missed LLCunc_cha_tor_inserts.ia_rfouncore cacheRead for ownership from local IA that miss the cacheevent=0x35,umask=0xc807ff0101TOR Inserts : RFOs issued by iA Coresunc_cha_tor_inserts.ia_rfo_prefuncore cacheRead for ownership prefetch from local IA that miss the cacheevent=0x35,umask=0xc887ff0101TOR Inserts : RFO_Prefs issued by iA Coresunc_cha_tor_inserts.ia_specitomuncore cacheSpecItoM events that are initiated from the Coreevent=0x35,umask=0xcc57ff0101TOR Inserts : SpecItoMs issued by iA Coresunc_cha_tor_inserts.ia_wbeftoeuncore cacheWbEFtoEs issued by iA Cores.  
(Non Modified Write Backs)event=0x35,umask=0xcc3fff0101TOR Inserts : WbEFtoEs issued by iA Coresunc_cha_tor_inserts.ia_wbeftoiuncore cacheWbEFtoIs issued by iA Cores.  (Non Modified Write Backs)event=0x35,umask=0xcc37ff0101TOR Inserts : WbEFtoIs issued by iA Coresunc_cha_tor_inserts.ia_wbmtoeuncore cacheWbMtoEs issued by iA Cores.  (Modified Write Backs)event=0x35,umask=0xcc2fff0101TOR Inserts : WbMtoEs issued by iA Coresunc_cha_tor_inserts.ia_wbmtoiuncore cacheWbMtoI requests from local IA coresevent=0x35,umask=0xcc27ff0101TOR Inserts : WbMtoIs issued by iA Coresunc_cha_tor_inserts.ia_wbstoiuncore cacheWbStoIs issued by iA Cores.  (Non Modified Write Backs)event=0x35,umask=0xcc67ff0101TOR Inserts : WbStoIs issued by iA Coresunc_cha_tor_inserts.ia_wciluncore cacheWCIL requests from a local IA coreevent=0x35,umask=0xc86fff0101TOR Inserts : WCiLs issued by iA Coresunc_cha_tor_inserts.ia_wcilfuncore cacheWCILF requests from local IA coreevent=0x35,umask=0xc867ff0101TOR Inserts : WCiLF issued by iA Coresunc_cha_tor_inserts.iouncore cacheAll TOR inserts from local IO devicesevent=0x35,umask=0xc001ff0401TOR Inserts : All requests from IO Devicesunc_cha_tor_inserts.io_clflushuncore cacheCLFlush requests from IO devicesevent=0x35,umask=0xc8c3ff0401TOR Inserts : CLFlushes issued by IO Devicesunc_cha_tor_inserts.io_hituncore cacheAll TOR inserts from local IO devices which hit the cacheevent=0x35,umask=0xc001fd0401TOR Inserts : All requests from IO Devices that hit the LLCunc_cha_tor_inserts.io_hit_itomuncore cacheItoMs from local IO devices which hit the cacheevent=0x35,umask=0xcc43fd0401TOR Inserts : ItoMs issued by IO Devices that Hit the LLCunc_cha_tor_inserts.io_hit_itomcachenearuncore cacheItoMCacheNears, indicating a partial write request, from IO Devices that hit the LLCevent=0x35,umask=0xcd43fd0401TOR Inserts : ItoMCacheNears, indicating a partial write request, from IO Devices that hit the LLCunc_cha_tor_inserts.io_hit_pcirdcuruncore cachePCIRDCURs issued by IO devices which hit the LLCevent=0x35,umask=0xc8f3fd0401TOR Inserts : PCIRdCurs issued by IO Devices that hit the LLCunc_cha_tor_inserts.io_hit_rfouncore cacheRFOs from local IO devices which hit the cacheevent=0x35,umask=0xc803fd0401TOR Inserts : RFOs issued by IO Devices that hit the LLCunc_cha_tor_inserts.io_itomuncore cacheAll TOR ItoM inserts from local IO devicesevent=0x35,umask=0xcc43ff0401TOR Inserts : ItoMs issued by IO Devicesunc_cha_tor_inserts.io_itomcachenearuncore cacheItoMCacheNears, indicating a partial write request, from IO Devicesevent=0x35,umask=0xcd43ff0401TOR Inserts : ItoMCacheNears, indicating a partial write request, from IO Devicesunc_cha_tor_inserts.io_missuncore cacheAll TOR inserts from local IO devices which miss the cacheevent=0x35,umask=0xc001fe0401TOR Inserts : All requests from IO Devices that missed the LLCunc_cha_tor_inserts.io_miss_itomuncore cacheAll TOR ItoM inserts from local IO devices which miss the cacheevent=0x35,umask=0xcc43fe0401TOR Inserts : ItoMs issued by IO Devices that missed the LLCunc_cha_tor_inserts.io_miss_itomcachenearuncore cacheItoMCacheNears, indicating a partial write request, from IO Devices that missed the LLCevent=0x35,umask=0xcd43fe0401TOR Inserts : ItoMCacheNears, indicating a partial write request, from IO Devices that missed the LLCunc_cha_tor_inserts.io_miss_pcirdcuruncore cachePCIRDCURs issued by IO devices which miss the LLCevent=0x35,umask=0xc8f3fe0401TOR Inserts : PCIRdCurs issued by IO Devices 
that missed the LLCunc_cha_tor_inserts.io_miss_rfouncore cacheAll TOR RFO inserts from local IO devices which miss the cacheevent=0x35,umask=0xc803fe0401TOR Inserts : RFOs issued by IO Devices that missed the LLCunc_cha_tor_inserts.io_pcirdcuruncore cachePCIRDCURs issued by IO devicesevent=0x35,umask=0xc8f3ff0401TOR Inserts : PCIRdCurs issued by IO Devicesunc_cha_tor_inserts.io_rfouncore cacheRFOs from local IO devicesevent=0x35,umask=0xc803ff0401TOR Inserts : RFOs issued by IO Devicesunc_cha_tor_inserts.io_wbmtoiuncore cacheWBMtoI requests from IO devicesevent=0x35,umask=0xcc23ff0401TOR Inserts : WbMtoIs issued by IO Devicesunc_cha_tor_inserts.llc_or_sf_evictionsuncore cacheTOR Inserts for SF or LLC Evictionsevent=0x35,umask=0xc001ff0201TOR allocation occurred as a result of SF/LLC evictions (came from the ISMQ)unc_cha_tor_inserts.loc_alluncore cacheAll locally initiated requestsevent=0x35,umask=0xc000ff0501TOR Inserts : All from Local iA and IOunc_cha_tor_inserts.loc_iauncore cacheAll from Local iAevent=0x35,umask=0xc000ff0101TOR Inserts : All from Local iAunc_cha_tor_inserts.loc_iouncore cacheAll from Local IOevent=0x35,umask=0xc000ff0401TOR Inserts : All from Local IOunc_cha_tor_occupancy.alluncore cacheOccupancy for all TOR entriesevent=0x36,umask=0xc001ffff01TOR Occupancy : Allunc_cha_tor_occupancy.iauncore cacheTOR Occupancy for All locally initiated requests from IA Coresevent=0x36,umask=0xc001ff0101TOR Occupancy : All requests from iA Coresunc_cha_tor_occupancy.ia_clflushuncore cacheTOR Occupancy for CLFlush events that are initiated from the Coreevent=0x36,umask=0xc8c7ff0101TOR Occupancy : CLFlushes issued by iA Coresunc_cha_tor_occupancy.ia_clflushoptuncore cacheTOR Occupancy for CLFlushOpt events that are initiated from the Coreevent=0x36,umask=0xc8d7ff0101TOR Occupancy : CLFlushOpts issued by iA Coresunc_cha_tor_occupancy.ia_crduncore cacheTOR Occupancy for Code read from local IA that miss the cacheevent=0x36,umask=0xc80fff0101TOR Occupancy : CRDs issued by iA Coresunc_cha_tor_occupancy.ia_crd_prefuncore cacheTOR Occupancy for Code read prefetch from local IA that miss the cacheevent=0x36,umask=0xc88fff0101TOR Occupancy; Code read prefetch from local IA that misses in the snoop filterunc_cha_tor_occupancy.ia_drd_optuncore cacheTOR Occupancy for Data read opt from local IA that miss the cacheevent=0x36,umask=0xc827ff0101TOR Occupancy : DRd_Opts issued by iA Coresunc_cha_tor_occupancy.ia_drd_opt_prefuncore cacheTOR Occupancy for Data read opt prefetch from local IA that miss the cacheevent=0x36,umask=0xc8a7ff0101TOR Occupancy : DRd_Opt_Prefs issued by iA Coresunc_cha_tor_occupancy.ia_hituncore cacheTOR Occupancy for All locally initiated requests from IA Cores which hit the cacheevent=0x36,umask=0xc001fd0101TOR Occupancy : All requests from iA Cores that Hit the LLCunc_cha_tor_occupancy.ia_hit_crduncore cacheTOR Occupancy for Code read from local IA that hit the cacheevent=0x36,umask=0xc80ffd0101TOR Occupancy : CRds issued by iA Cores that Hit the LLCunc_cha_tor_occupancy.ia_hit_crd_prefuncore cacheTOR Occupancy for Code read prefetch from local IA that hit the cacheevent=0x36,umask=0xc88ffd0101TOR Occupancy : CRd_Prefs issued by iA Cores that hit the LLCunc_cha_tor_occupancy.ia_hit_drd_optuncore cacheTOR Occupancy for Data read opt from local IA that hit the cacheevent=0x36,umask=0xc827fd0101TOR Occupancy : DRd_Opts issued by iA Cores that hit the LLCunc_cha_tor_occupancy.ia_hit_drd_opt_prefuncore cacheTOR Occupancy for Data read opt prefetch from local IA that hit the 
cacheevent=0x36,umask=0xc8a7fd0101TOR Occupancy : DRd_Opt_Prefs issued by iA Cores that hit the LLCunc_cha_tor_occupancy.ia_hit_itomuncore cacheTOR Occupancy for ItoM requests from local IA cores that hit the cacheevent=0x36,umask=0xcc47fd0101TOR Occupancy : ItoMs issued by iA Cores that Hit LLCunc_cha_tor_occupancy.ia_hit_llcprefcodeuncore cacheTOR Occupancy for Last level cache prefetch code read from local IA that hit the cacheevent=0x36,umask=0xcccffd0101TOR Occupancy : LLCPrefCode issued by iA Cores that hit the LLCunc_cha_tor_occupancy.ia_hit_llcprefdatauncore cacheTOR Occupancy for Last level cache prefetch data read from local IA that hit the cacheevent=0x36,umask=0xccd7fd0101TOR Occupancy : LLCPrefData issued by iA Cores that hit the LLCunc_cha_tor_occupancy.ia_hit_llcprefrfouncore cacheTOR Occupancy for Last level cache prefetch read for ownership from local IA that hit the cacheevent=0x36,umask=0xccc7fd0101TOR Occupancy : LLCPrefRFO issued by iA Cores that hit the LLCunc_cha_tor_occupancy.ia_hit_rfouncore cacheTOR Occupancy for Read for ownership from local IA that hit the cacheevent=0x36,umask=0xc807fd0101TOR Occupancy : RFOs issued by iA Cores that Hit the LLCunc_cha_tor_occupancy.ia_hit_rfo_prefuncore cacheTOR Occupancy for Read for ownership prefetch from local IA that hit the cacheevent=0x36,umask=0xc887fd0101TOR Occupancy : RFO_Prefs issued by iA Cores that Hit the LLCunc_cha_tor_occupancy.ia_itomuncore cacheTOR Occupancy for ItoM events that are initiated from the Coreevent=0x36,umask=0xcc47ff0101TOR Occupancy : ItoMs issued by iA Coresunc_cha_tor_occupancy.ia_itomcachenearuncore cacheTOR Occupancy for ItoMCacheNear requests from local IA coresevent=0x36,umask=0xcd47ff0101TOR Occupancy : ItoMCacheNears issued by iA Coresunc_cha_tor_occupancy.ia_llcprefcodeuncore cacheTOR Occupancy for Last level cache prefetch code read from local IAevent=0x36,umask=0xcccfff0101TOR Occupancy : LLCPrefCode issued by iA Coresunc_cha_tor_occupancy.ia_llcprefdatauncore cacheTOR Occupancy for Last level cache prefetch data read from local IAevent=0x36,umask=0xccd7ff0101TOR Occupancy : LLCPrefData issued by iA Coresunc_cha_tor_occupancy.ia_llcprefrfouncore cacheTOR Occupancy for Last level cache prefetch read for ownership from local IA that miss the cacheevent=0x36,umask=0xccc7ff0101TOR Occupancy : LLCPrefRFO issued by iA Coresunc_cha_tor_occupancy.ia_missuncore cacheTOR Occupancy for All locally initiated requests from IA Cores which miss the cacheevent=0x36,umask=0xc001fe0101TOR Occupancy : All requests from iA Cores that Missed the LLCunc_cha_tor_occupancy.ia_miss_crduncore cacheTOR Occupancy for Code read from local IA that miss the cacheevent=0x36,umask=0xc80ffe0101TOR Occupancy : CRds issued by iA Cores that Missed the LLCunc_cha_tor_occupancy.ia_miss_crd_localuncore cacheTOR Occupancy for CRDs from local IA cores to locally homed memoryevent=0x36,umask=0xc80efe0101TOR Occupancy : CRd issued by iA Cores that Missed the LLC - HOMed locallyunc_cha_tor_occupancy.ia_miss_crd_prefuncore cacheTOR Occupancy for Code read prefetch from local IA that miss the cacheevent=0x36,umask=0xc88ffe0101TOR Occupancy : CRd_Prefs issued by iA Cores that Missed the LLCunc_cha_tor_occupancy.ia_miss_crd_pref_localuncore cacheTOR Occupancy for CRD Prefetches from local IA cores to locally homed memoryevent=0x36,umask=0xc88efe0101TOR Occupancy : CRd_Prefs issued by iA Cores that Missed the LLC - HOMed locallyunc_cha_tor_occupancy.ia_miss_drd_optuncore cacheTOR Occupancy for Data read opt from local IA that miss 
the cacheevent=0x36,umask=0xc827fe0101TOR Occupancy : DRd_Opt issued by iA Cores that missed the LLCunc_cha_tor_occupancy.ia_miss_drd_opt_prefuncore cacheTOR Occupancy for Data read opt prefetch from local IA that miss the cacheevent=0x36,umask=0xc8a7fe0101TOR Occupancy : DRd_Opt_Prefs issued by iA Cores that missed the LLCunc_cha_tor_occupancy.ia_miss_itomuncore cacheTOR Occupancy for ItoM requests from local IA cores that miss the cacheevent=0x36,umask=0xcc47fe0101TOR Occupancy : ItoMs issued by iA Cores that Missed LLCunc_cha_tor_occupancy.ia_miss_llcprefcodeuncore cacheTOR Occupancy for Last level cache prefetch code read from local IA that miss the cacheevent=0x36,umask=0xcccffe0101TOR Occupancy : LLCPrefCode issued by iA Cores that missed the LLCunc_cha_tor_occupancy.ia_miss_llcprefdatauncore cacheTOR Occupancy for Last level cache prefetch data read from local IA that miss the cacheevent=0x36,umask=0xccd7fe0101TOR Occupancy : LLCPrefData issued by iA Cores that missed the LLCunc_cha_tor_occupancy.ia_miss_llcprefrfouncore cacheTOR Occupancy for Last level cache prefetch read for ownership from local IA that miss the cacheevent=0x36,umask=0xccc7fe0101TOR Occupancy : LLCPrefRFO issued by iA Cores that missed the LLCunc_cha_tor_occupancy.ia_miss_local_wcilf_ddruncore cacheTOR Occupancy for WCILF requests from local IA cores to locally homed DDR addresses that miss the cacheevent=0x36,umask=0xc866860101TOR Occupancy : WCiLFs issued by iA Cores targeting DDR that missed the LLC - HOMed locallyunc_cha_tor_occupancy.ia_miss_local_wcil_ddruncore cacheTOR Occupancy for WCIL requests from local IA cores to locally homed DDR addresses that miss the cacheevent=0x36,umask=0xc86e860101TOR Occupancy : WCiLs issued by iA Cores targeting DDR that missed the LLC - HOMed locallyunc_cha_tor_occupancy.ia_miss_rfouncore cacheTOR Occupancy for Read for ownership from local IA that miss the cacheevent=0x36,umask=0xc807fe0101TOR Occupancy : RFOs issued by iA Cores that Missed the LLCunc_cha_tor_occupancy.ia_miss_rfo_localuncore cacheTOR Occupancy for Read for ownership from local IA that miss the cacheevent=0x36,umask=0xc806fe0101TOR Occupancy : RFOs issued by iA Cores that Missed the LLC - HOMed locallyunc_cha_tor_occupancy.ia_miss_rfo_prefuncore cacheTOR Occupancy for Read for ownership prefetch from local IA that miss the cacheevent=0x36,umask=0xc887fe0101TOR Occupancy : RFO_Prefs issued by iA Cores that Missed the LLCunc_cha_tor_occupancy.ia_miss_rfo_pref_localuncore cacheTOR Occupancy for Read for ownership prefetch from local IA that miss the cacheevent=0x36,umask=0xc886fe0101TOR Occupancy : RFO_Prefs issued by iA Cores that Missed the LLC - HOMed locallyunc_cha_tor_occupancy.ia_miss_ucrdfuncore cacheTOR Occupancy for UCRDF requests from local IA cores that miss the cacheevent=0x36,umask=0xc877de0101TOR Occupancy : UCRdFs issued by iA Cores that Missed LLCunc_cha_tor_occupancy.ia_miss_wciluncore cacheTOR Occupancy for WCIL requests from a local IA core that miss the cacheevent=0x36,umask=0xc86ffe0101TOR Occupancy : WCiLs issued by iA Cores that Missed the LLCunc_cha_tor_occupancy.ia_miss_wcilfuncore cacheTOR Occupancy for WCILF requests from local IA core that miss the cacheevent=0x36,umask=0xc867fe0101TOR Occupancy : WCiLF issued by iA Cores that Missed the LLCunc_cha_tor_occupancy.ia_miss_wcilf_ddruncore cacheTOR Occupancy for WCILF requests from local IA cores to DDR homed addresses which miss the cacheevent=0x36,umask=0xc867860101TOR Occupancy : WCiLFs issued by iA Cores targeting DDR that missed 
the LLCunc_cha_tor_occupancy.ia_miss_wcil_ddruncore cacheTOR Occupancy for WCIL requests from local IA cores to DDR homed addresses which miss the cacheevent=0x36,umask=0xc86f860101TOR Occupancy : WCiLs issued by iA Cores targeting DDR that missed the LLCunc_cha_tor_occupancy.ia_miss_wiluncore cacheTOR Occupancy for WIL requests from local IA cores that miss the cacheevent=0x36,umask=0xc87fde0101TOR Occupancy : WiLs issued by iA Cores that Missed LLCunc_cha_tor_occupancy.ia_rfouncore cacheTOR Occupancy for Read for ownership from local IA that miss the cacheevent=0x36,umask=0xc807ff0101TOR Occupancy : RFOs issued by iA Coresunc_cha_tor_occupancy.ia_rfo_prefuncore cacheTOR Occupancy for Read for ownership prefetch from local IA that miss the cacheevent=0x36,umask=0xc887ff0101TOR Occupancy : RFO_Prefs issued by iA Coresunc_cha_tor_occupancy.ia_specitomuncore cacheTOR Occupancy for SpecItoM events that are initiated from the Coreevent=0x36,umask=0xcc57ff0101TOR Occupancy : SpecItoMs issued by iA Coresunc_cha_tor_occupancy.ia_wbmtoiuncore cacheTOR Occupancy for WbMtoI requests from local IA coresevent=0x36,umask=0xcc27ff0101TOR Occupancy : WbMtoIs issued by iA Coresunc_cha_tor_occupancy.ia_wciluncore cacheTOR Occupancy for WCIL requests from a local IA coreevent=0x36,umask=0xc86fff0101TOR Occupancy : WCiLs issued by iA Coresunc_cha_tor_occupancy.ia_wcilfuncore cacheTOR Occupancy for WCILF requests from local IA coreevent=0x36,umask=0xc867ff0101TOR Occupancy : WCiLF issued by iA Coresunc_cha_tor_occupancy.iouncore cacheTOR Occupancy for All TOR inserts from local IO devicesevent=0x36,umask=0xc001ff0401TOR Occupancy : All requests from IO Devicesunc_cha_tor_occupancy.io_clflushuncore cacheTOR Occupancy for CLFlush requests from IO devicesevent=0x36,umask=0xc8c3ff0401TOR Occupancy : CLFlushes issued by IO Devicesunc_cha_tor_occupancy.io_hituncore cacheTOR Occupancy for All TOR inserts from local IO devices which hit the cacheevent=0x36,umask=0xc001fd0401TOR Occupancy : All requests from IO Devices that hit the LLCunc_cha_tor_occupancy.io_hit_itomuncore cacheTOR Occupancy for ItoMs from local IO devices which hit the cacheevent=0x36,umask=0xcc43fd0401TOR Occupancy : ItoMs issued by IO Devices that Hit the LLCunc_cha_tor_occupancy.io_hit_itomcachenearuncore cacheTOR Occupancy for ItoMCacheNears, indicating a partial write request, from IO Devices that hit the LLCevent=0x36,umask=0xcd43fd0401TOR Occupancy : ItoMCacheNears, indicating a partial write request, from IO Devices that hit the LLCunc_cha_tor_occupancy.io_hit_pcirdcuruncore cacheTOR Occupancy for PCIRDCURs issued by IO devices which hit the LLCevent=0x36,umask=0xc8f3fd0401TOR Occupancy : PCIRdCurs issued by IO Devices that hit the LLCunc_cha_tor_occupancy.io_hit_rfouncore cacheTOR Occupancy for RFOs from local IO devices which hit the cacheevent=0x36,umask=0xc803fd0401TOR Occupancy : RFOs issued by IO Devices that hit the LLCunc_cha_tor_occupancy.io_itomuncore cacheTOR Occupancy for All TOR ItoM inserts from local IO devicesevent=0x36,umask=0xcc43ff0401TOR Occupancy : ItoMs issued by IO Devicesunc_cha_tor_occupancy.io_itomcachenearuncore cacheTOR Occupancy for ItoMCacheNears, indicating a partial write request, from IO Devicesevent=0x36,umask=0xcd43ff0401TOR Occupancy : ItoMCacheNears, indicating a partial write request, from IO Devicesunc_cha_tor_occupancy.io_missuncore cacheTOR Occupancy for All TOR inserts from local IO devices which miss the cacheevent=0x36,umask=0xc001fe0401TOR Occupancy : All requests from IO Devices that missed the 
LLCunc_cha_tor_occupancy.io_miss_itomuncore cacheTOR Occupancy for All TOR ItoM inserts from local IO devices which miss the cacheevent=0x36,umask=0xcc43fe0401TOR Occupancy : ItoMs issued by IO Devices that missed the LLCunc_cha_tor_occupancy.io_miss_itomcachenearuncore cacheTOR Occupancy for ItoMCacheNears, indicating a partial write request, from IO Devices that missed the LLCevent=0x36,umask=0xcd43fe0401TOR Occupancy : ItoMCacheNears, indicating a partial write request, from IO Devices that missed the LLCunc_cha_tor_occupancy.io_miss_pcirdcuruncore cacheTOR Occupancy for PCIRDCURs issued by IO devices which miss the LLCevent=0x36,umask=0xc8f3fe0401TOR Occupancy : PCIRdCurs issued by IO Devices that missed the LLCunc_cha_tor_occupancy.io_miss_rfouncore cacheTOR Occupancy for All TOR RFO inserts from local IO devices which miss the cacheevent=0x36,umask=0xc803fe0401TOR Occupancy : RFOs issued by IO Devices that missed the LLCunc_cha_tor_occupancy.io_pcirdcuruncore cacheTOR Occupancy for PCIRDCURs issued by IO devicesevent=0x36,umask=0xc8f3ff0401TOR Occupancy : PCIRdCurs issued by IO Devicesunc_cha_tor_occupancy.io_rfouncore cacheTOR Occupancy for RFOs from local IO devicesevent=0x36,umask=0xc803ff0401TOR Occupancy : RFOs issued by IO Devicesunc_cha_tor_occupancy.io_wbmtoiuncore cacheTOR Occupancy for WBMtoI requests from IO devicesevent=0x36,umask=0xcc23ff0401TOR Occupancy : WbMtoIs issued by IO Devicesunc_cha_tor_occupancy.loc_alluncore cacheTOR Occupancy for All locally initiated requestsevent=0x36,umask=0xc000ff0501TOR Occupancy : All from Local iA and IOunc_cha_tor_occupancy.loc_iauncore cacheTOR Occupancy for All from Local iAevent=0x36,umask=0xc000ff0101TOR Occupancy : All from Local iAunc_cha_tor_occupancy.loc_iouncore cacheTOR Occupancy for All from Local IOevent=0x36,umask=0xc000ff0401TOR Occupancy : All from Local IOuncore_b2cmiunc_b2cmi_clockticksuncore interconnectClockticks of the mesh to memory (B2CMI)event=101unc_b2cmi_direct2core_takenuncore interconnectCounts the number of times B2CMI egress did D2C (direct to core)event=0x16,umask=101unc_b2cmi_direct2core_txn_overrideuncore interconnectCounts the number of times D2C wasn't honoured even though the incoming request had d2c set for non cisgress txnevent=0x18,umask=101unc_b2cmi_imc_reads.alluncore interconnectCounts any readevent=0x24,umask=0x10401unc_b2cmi_imc_reads.normaluncore interconnectCounts normal reads issue to CMIevent=0x24,umask=0x10101unc_b2cmi_imc_reads.to_ddr_as_memuncore interconnectCounts reads to 1lm non persistent memory regionsevent=0x24,umask=0x10801unc_b2cmi_imc_writes.alluncore interconnectAll Writes - All Channelsevent=0x25,umask=0x11001unc_b2cmi_imc_writes.fulluncore interconnectFull Non-ISOCH - All Channelsevent=0x25,umask=0x10101unc_b2cmi_imc_writes.partialuncore interconnectPartial Non-ISOCH - All Channelsevent=0x25,umask=0x10201unc_b2cmi_imc_writes.to_ddr_as_memuncore interconnectDDR - All Channelsevent=0x25,umask=0x12001unc_b2cmi_prefcam_inserts.ch0_xptuncore interconnectPrefetch CAM Inserts : XPT - Ch 0event=0x56,umask=101unc_b2cmi_prefcam_inserts.xpt_allchuncore interconnectPrefetch CAM Inserts : XPT -All Channelsevent=0x56,umask=101Prefetch CAM Inserts : XPT - All Channelsunc_b2cmi_prefcam_occupancy.ch0uncore interconnectPrefetch CAM Occupancy : Channel 0event=0x54,umask=101unc_b2cmi_tracker_inserts.ch0uncore interconnectTracker Inserts : Channel 0event=0x32,umask=0x10401unc_b2cmi_tracker_occupancy.ch0uncore interconnectTracker Occupancy : Channel 
0event=0x33,umask=101unc_b2cmi_wr_tracker_inserts.ch0uncore interconnectWrite Tracker Inserts : Channel 0event=0x40,umask=101unc_i_cache_total_occupancy.memuncore interconnectTotal Write Cache Occupancy : Memevent=0xf,umask=401unc_i_clockticksuncore interconnectIRP Clockticksevent=101unc_i_faf_insertsuncore interconnectInbound read requests received by the IRP and inserted into the FAF queueevent=0x1801unc_i_misc1.lost_fwduncore interconnectMisc Events - Set 1 : Lost Forward : Snoop pulled away ownership before a write was committedevent=0x1f,umask=0x1001unc_i_transactions.wr_prefuncore interconnectInbound write (fast path) requests to coherent memory, received by the IRP resulting in write ownership requests issued by IRP to the meshevent=0x11,umask=801unc_iio_clockticksuncore ioIIO Clockticksevent=101unc_iio_comp_buf_inserts.cmpd.all_partsuncore ioPCIE Completion Buffer Inserts.  Counts once per 64 byte read issued from this PCIE deviceevent=0xc2,ch_mask=0xff,fc_mask=7,umask=0x70ff00401unc_iio_comp_buf_inserts.cmpd.part0uncore ioPCIE Completion Buffer Inserts.  Counts once per 64 byte read issued from this PCIE deviceevent=0xc2,ch_mask=1,fc_mask=7,umask=0x700100401unc_iio_comp_buf_inserts.cmpd.part1uncore ioPCIE Completion Buffer Inserts.  Counts once per 64 byte read issued from this PCIE deviceevent=0xc2,ch_mask=2,fc_mask=7,umask=0x700200401unc_iio_comp_buf_inserts.cmpd.part2uncore ioPCIE Completion Buffer Inserts.  Counts once per 64 byte read issued from this PCIE deviceevent=0xc2,ch_mask=4,fc_mask=7,umask=0x700400401unc_iio_comp_buf_inserts.cmpd.part3uncore ioPCIE Completion Buffer Inserts.  Counts once per 64 byte read issued from this PCIE deviceevent=0xc2,ch_mask=8,fc_mask=7,umask=0x700800401unc_iio_comp_buf_inserts.cmpd.part4uncore ioPCIE Completion Buffer Inserts.  Counts once per 64 byte read issued from this PCIE deviceevent=0xc2,ch_mask=0x10,fc_mask=7,umask=0x701000401unc_iio_comp_buf_inserts.cmpd.part5uncore ioPCIE Completion Buffer Inserts.  Counts once per 64 byte read issued from this PCIE deviceevent=0xc2,ch_mask=0x20,fc_mask=7,umask=0x702000401unc_iio_comp_buf_inserts.cmpd.part6uncore ioPCIE Completion Buffer Inserts.  Counts once per 64 byte read issued from this PCIE deviceevent=0xc2,ch_mask=0x40,fc_mask=7,umask=0x704000401unc_iio_comp_buf_inserts.cmpd.part7uncore ioPCIE Completion Buffer Inserts.  
Counts once per 64 byte read issued from this PCIE deviceevent=0xc2,ch_mask=0x80,fc_mask=7,umask=0x708000401unc_iio_comp_buf_occupancy.cmpd.all_partsuncore ioCount of allocations in the completion bufferevent=0xd5,ch_mask=0xff,fc_mask=7,umask=0x70ff0ff01unc_iio_comp_buf_occupancy.cmpd.part0uncore ioCount of allocations in the completion bufferevent=0xd5,ch_mask=1,fc_mask=7,umask=0x700100101unc_iio_comp_buf_occupancy.cmpd.part1uncore ioCount of allocations in the completion bufferevent=0xd5,ch_mask=2,fc_mask=7,umask=0x700200201unc_iio_comp_buf_occupancy.cmpd.part2uncore ioCount of allocations in the completion bufferevent=0xd5,ch_mask=4,fc_mask=7,umask=0x700400401unc_iio_comp_buf_occupancy.cmpd.part3uncore ioCount of allocations in the completion bufferevent=0xd5,ch_mask=8,fc_mask=7,umask=0x700800801unc_iio_comp_buf_occupancy.cmpd.part4uncore ioCount of allocations in the completion bufferevent=0xd5,ch_mask=0x10,fc_mask=7,umask=0x701001001unc_iio_comp_buf_occupancy.cmpd.part5uncore ioCount of allocations in the completion bufferevent=0xd5,ch_mask=0x20,fc_mask=7,umask=0x702002001unc_iio_comp_buf_occupancy.cmpd.part6uncore ioCount of allocations in the completion bufferevent=0xd5,ch_mask=0x40,fc_mask=7,umask=0x704004001unc_iio_comp_buf_occupancy.cmpd.part7uncore ioCount of allocations in the completion bufferevent=0xd5,ch_mask=0x80,fc_mask=7,umask=0x708008001unc_iio_data_req_by_cpu.mem_read.all_partsuncore ioData requested by the CPU : Core reporting completion of Card read from Core DRAMevent=0xc0,ch_mask=0xff,fc_mask=7,umask=0x70ff00401unc_iio_data_req_by_cpu.mem_read.part0uncore ioData requested by the CPU : Core reporting completion of Card read from Core DRAMevent=0xc0,ch_mask=1,fc_mask=7,umask=0x700100401unc_iio_data_req_by_cpu.mem_read.part1uncore ioData requested by the CPU : Core reporting completion of Card read from Core DRAMevent=0xc0,ch_mask=2,fc_mask=7,umask=0x700200401unc_iio_data_req_by_cpu.mem_read.part2uncore ioData requested by the CPU : Core reporting completion of Card read from Core DRAMevent=0xc0,ch_mask=4,fc_mask=7,umask=0x700400401unc_iio_data_req_by_cpu.mem_read.part3uncore ioData requested by the CPU : Core reporting completion of Card read from Core DRAMevent=0xc0,ch_mask=8,fc_mask=7,umask=0x700800401unc_iio_data_req_by_cpu.mem_read.part4uncore ioData requested by the CPU : Core reporting completion of Card read from Core DRAMevent=0xc0,ch_mask=0x10,fc_mask=7,umask=0x701000401unc_iio_data_req_by_cpu.mem_read.part5uncore ioData requested by the CPU : Core reporting completion of Card read from Core DRAMevent=0xc0,ch_mask=0x20,fc_mask=7,umask=0x702000401unc_iio_data_req_by_cpu.mem_read.part6uncore ioData requested by the CPU : Core reporting completion of Card read from Core DRAMevent=0xc0,ch_mask=0x40,fc_mask=7,umask=0x704000401unc_iio_data_req_by_cpu.mem_read.part7uncore ioData requested by the CPU : Core reporting completion of Card read from Core DRAMevent=0xc0,ch_mask=0x80,fc_mask=7,umask=0x708000401unc_iio_data_req_by_cpu.mem_write.all_partsuncore ioData requested by the CPU : Core writing to Cards MMIO spaceevent=0xc0,ch_mask=0xff,fc_mask=7,umask=0x70ff00101unc_iio_data_req_by_cpu.mem_write.part0uncore ioData requested by the CPU : Core writing to Cards MMIO spaceevent=0xc0,ch_mask=1,fc_mask=7,umask=0x700100101unc_iio_data_req_by_cpu.mem_write.part1uncore ioData requested by the CPU : Core writing to Cards MMIO spaceevent=0xc0,ch_mask=2,fc_mask=7,umask=0x700200101unc_iio_data_req_by_cpu.mem_write.part2uncore ioData requested by the CPU : Core writing to Cards 
MMIO spaceevent=0xc0,ch_mask=4,fc_mask=7,umask=0x700400101unc_iio_data_req_by_cpu.mem_write.part3uncore ioData requested by the CPU : Core writing to Cards MMIO spaceevent=0xc0,ch_mask=8,fc_mask=7,umask=0x700800101unc_iio_data_req_by_cpu.mem_write.part4uncore ioData requested by the CPU : Core writing to Cards MMIO spaceevent=0xc0,ch_mask=0x10,fc_mask=7,umask=0x701000101unc_iio_data_req_by_cpu.mem_write.part5uncore ioData requested by the CPU : Core writing to Cards MMIO spaceevent=0xc0,ch_mask=0x20,fc_mask=7,umask=0x702000101unc_iio_data_req_by_cpu.mem_write.part6uncore ioData requested by the CPU : Core writing to Cards MMIO spaceevent=0xc0,ch_mask=0x40,fc_mask=7,umask=0x704000101unc_iio_data_req_by_cpu.mem_write.part7uncore ioData requested by the CPU : Core writing to Cards MMIO spaceevent=0xc0,ch_mask=0x80,fc_mask=7,umask=0x708000101unc_iio_data_req_of_cpu.mem_read.part0uncore ioFour byte data request of the CPU : Card reading from DRAMevent=0x83,ch_mask=1,fc_mask=7,umask=0x700100401unc_iio_data_req_of_cpu.mem_read.part1uncore ioFour byte data request of the CPU : Card reading from DRAMevent=0x83,ch_mask=2,fc_mask=7,umask=0x700200401unc_iio_data_req_of_cpu.mem_read.part2uncore ioFour byte data request of the CPU : Card reading from DRAMevent=0x83,ch_mask=4,fc_mask=7,umask=0x700400401unc_iio_data_req_of_cpu.mem_read.part3uncore ioFour byte data request of the CPU : Card reading from DRAMevent=0x83,ch_mask=8,fc_mask=7,umask=0x700800401unc_iio_data_req_of_cpu.mem_read.part4uncore ioFour byte data request of the CPU : Card reading from DRAMevent=0x83,ch_mask=0x10,fc_mask=7,umask=0x701000401unc_iio_data_req_of_cpu.mem_read.part5uncore ioFour byte data request of the CPU : Card reading from DRAMevent=0x83,ch_mask=0x20,fc_mask=7,umask=0x702000401unc_iio_data_req_of_cpu.mem_read.part6uncore ioFour byte data request of the CPU : Card reading from DRAMevent=0x83,ch_mask=0x40,fc_mask=7,umask=0x704000401unc_iio_data_req_of_cpu.mem_read.part7uncore ioFour byte data request of the CPU : Card reading from DRAMevent=0x83,ch_mask=0x80,fc_mask=7,umask=0x708000401unc_iio_data_req_of_cpu.mem_write.part0uncore ioFour byte data request of the CPU : Card writing to DRAMevent=0x83,ch_mask=1,fc_mask=7,umask=0x700100101unc_iio_data_req_of_cpu.mem_write.part1uncore ioFour byte data request of the CPU : Card writing to DRAMevent=0x83,ch_mask=2,fc_mask=7,umask=0x700200101unc_iio_data_req_of_cpu.mem_write.part2uncore ioFour byte data request of the CPU : Card writing to DRAMevent=0x83,ch_mask=4,fc_mask=7,umask=0x700400101unc_iio_data_req_of_cpu.mem_write.part3uncore ioFour byte data request of the CPU : Card writing to DRAMevent=0x83,ch_mask=8,fc_mask=7,umask=0x700800101unc_iio_data_req_of_cpu.mem_write.part4uncore ioFour byte data request of the CPU : Card writing to DRAMevent=0x83,ch_mask=0x10,fc_mask=7,umask=0x701000101unc_iio_data_req_of_cpu.mem_write.part5uncore ioFour byte data request of the CPU : Card writing to DRAMevent=0x83,ch_mask=0x20,fc_mask=7,umask=0x702000101unc_iio_data_req_of_cpu.mem_write.part6uncore ioFour byte data request of the CPU : Card writing to DRAMevent=0x83,ch_mask=0x40,fc_mask=7,umask=0x704000101unc_iio_data_req_of_cpu.mem_write.part7uncore ioFour byte data request of the CPU : Card writing to DRAMevent=0x83,ch_mask=0x80,fc_mask=7,umask=0x708000101unc_iio_data_req_of_cpu.peer_write.part0uncore ioData requested of the CPU : Card writing to another Card (same or different stack)event=0x83,ch_mask=1,fc_mask=7,umask=0x700100201unc_iio_data_req_of_cpu.peer_write.part1uncore ioData 
requested of the CPU : Card writing to another Card (same or different stack)event=0x83,ch_mask=2,fc_mask=7,umask=0x700200201unc_iio_data_req_of_cpu.peer_write.part2uncore ioData requested of the CPU : Card writing to another Card (same or different stack)event=0x83,ch_mask=4,fc_mask=7,umask=0x700400201unc_iio_data_req_of_cpu.peer_write.part3uncore ioData requested of the CPU : Card writing to another Card (same or different stack)event=0x83,ch_mask=8,fc_mask=7,umask=0x700800201unc_iio_data_req_of_cpu.peer_write.part4uncore ioData requested of the CPU : Card writing to another Card (same or different stack)event=0x83,ch_mask=0x10,fc_mask=7,umask=0x701000201unc_iio_data_req_of_cpu.peer_write.part5uncore ioData requested of the CPU : Card writing to another Card (same or different stack)event=0x83,ch_mask=0x20,fc_mask=7,umask=0x702000201unc_iio_data_req_of_cpu.peer_write.part6uncore ioData requested of the CPU : Card writing to another Card (same or different stack)event=0x83,ch_mask=0x40,fc_mask=7,umask=0x704000201unc_iio_data_req_of_cpu.peer_write.part7uncore ioData requested of the CPU : Card writing to another Card (same or different stack)event=0x83,ch_mask=0x80,fc_mask=7,umask=0x708000201unc_iio_iommu0.1g_hitsuncore ioIOTLB Hits to a 1G Pageevent=0x40,umask=0x1001unc_iio_iommu0.2m_hitsuncore ioIOTLB Hits to a 2M Pageevent=0x40,umask=801unc_iio_iommu0.4k_hitsuncore ioIOTLB Hits to a 4K Pageevent=0x40,umask=401unc_iio_iommu0.ctxt_cache_hitsuncore ioContext cache hitsevent=0x40,umask=0x8001unc_iio_iommu0.ctxt_cache_lookupsuncore ioContext cache lookupsevent=0x40,umask=0x4001unc_iio_iommu0.first_lookupsuncore ioIOTLB lookups firstevent=0x40,umask=101unc_iio_iommu0.missesuncore ioIOTLB Fills (same as IOTLB miss)event=0x40,umask=0x2001unc_iio_iommu1.num_mem_accessesuncore ioIOMMU memory access (both low and high priority)event=0x41,umask=0xc001unc_iio_iommu1.slpwc_1g_hitsuncore ioSecond Level Page Walk Cache Hit to a 1G pageevent=0x41,umask=401unc_iio_iommu1.slpwc_256t_hitsuncore ioSecond Level Page Walk Cache Hit to a 256T pageevent=0x41,umask=0x1001unc_iio_iommu1.slpwc_512g_hitsuncore ioSecond Level Page Walk Cache Hit to a 512G pageevent=0x41,umask=801unc_iio_num_req_of_cpu_by_tgt.abortuncore io-event=0x8e,ch_mask=0xff,fc_mask=7,umask=0x70ff08001unc_iio_num_req_of_cpu_by_tgt.confined_p2puncore io-event=0x8e,ch_mask=0xff,fc_mask=7,umask=0x70ff04001unc_iio_num_req_of_cpu_by_tgt.loc_p2puncore io-event=0x8e,ch_mask=0xff,fc_mask=7,umask=0x70ff02001unc_iio_num_req_of_cpu_by_tgt.mcastuncore io-event=0x8e,ch_mask=0xff,fc_mask=7,umask=0x70ff00201unc_iio_num_req_of_cpu_by_tgt.memuncore io-event=0x8e,ch_mask=0xff,fc_mask=7,umask=0x70ff00801unc_iio_num_req_of_cpu_by_tgt.msgbuncore io-event=0x8e,ch_mask=0xff,fc_mask=7,umask=0x70ff00101unc_iio_num_req_of_cpu_by_tgt.uboxuncore io-event=0x8e,ch_mask=0xff,fc_mask=7,umask=0x70ff00401unc_iio_pwt_occupancyuncore ioAll 9 bits of Page Walk Tracker Occupancyevent=0x4201unc_iio_txn_req_by_cpu.mem_read.part0uncore ioNumber Transactions requested by the CPU : Core reading from Cards MMIO spaceevent=0xc1,ch_mask=1,fc_mask=7,umask=0x700100401unc_iio_txn_req_by_cpu.mem_read.part1uncore ioNumber Transactions requested by the CPU : Core reading from Cards MMIO spaceevent=0xc1,ch_mask=2,fc_mask=7,umask=0x700200401unc_iio_txn_req_by_cpu.mem_read.part2uncore ioNumber Transactions requested by the CPU : Core reading from Cards MMIO spaceevent=0xc1,ch_mask=4,fc_mask=7,umask=0x700400401unc_iio_txn_req_by_cpu.mem_read.part3uncore ioNumber Transactions requested by the CPU : 
Core reading from Cards MMIO spaceevent=0xc1,ch_mask=8,fc_mask=7,umask=0x700800401unc_iio_txn_req_by_cpu.mem_read.part4uncore ioNumber Transactions requested by the CPU : Core reading from Cards MMIO spaceevent=0xc1,ch_mask=0x10,fc_mask=7,umask=0x701000401unc_iio_txn_req_by_cpu.mem_read.part5uncore ioNumber Transactions requested by the CPU : Core reading from Cards MMIO spaceevent=0xc1,ch_mask=0x20,fc_mask=7,umask=0x702000401unc_iio_txn_req_by_cpu.mem_read.part6uncore ioNumber Transactions requested by the CPU : Core reading from Cards MMIO spaceevent=0xc1,ch_mask=0x40,fc_mask=7,umask=0x704000401unc_iio_txn_req_by_cpu.mem_read.part7uncore ioNumber Transactions requested by the CPU : Core reading from Cards MMIO spaceevent=0xc1,ch_mask=0x80,fc_mask=7,umask=0x708000401unc_iio_txn_req_by_cpu.mem_write.part0uncore ioNumber Transactions requested by the CPU : Core writing to Cards MMIO spaceevent=0xc1,ch_mask=1,fc_mask=7,umask=0x700100101unc_iio_txn_req_by_cpu.mem_write.part1uncore ioNumber Transactions requested by the CPU : Core writing to Cards MMIO spaceevent=0xc1,ch_mask=2,fc_mask=7,umask=0x700200101unc_iio_txn_req_by_cpu.mem_write.part2uncore ioNumber Transactions requested by the CPU : Core writing to Cards MMIO spaceevent=0xc1,ch_mask=4,fc_mask=7,umask=0x700400101unc_iio_txn_req_by_cpu.mem_write.part3uncore ioNumber Transactions requested by the CPU : Core writing to Cards MMIO spaceevent=0xc1,ch_mask=8,fc_mask=7,umask=0x700800101unc_iio_txn_req_by_cpu.mem_write.part4uncore ioNumber Transactions requested by the CPU : Core writing to Cards MMIO spaceevent=0xc1,ch_mask=0x10,fc_mask=7,umask=0x701000101unc_iio_txn_req_by_cpu.mem_write.part5uncore ioNumber Transactions requested by the CPU : Core writing to Cards MMIO spaceevent=0xc1,ch_mask=0x20,fc_mask=7,umask=0x702000101unc_iio_txn_req_by_cpu.mem_write.part6uncore ioNumber Transactions requested by the CPU : Core writing to Cards MMIO spaceevent=0xc1,ch_mask=0x40,fc_mask=7,umask=0x704000101unc_iio_txn_req_by_cpu.mem_write.part7uncore ioNumber Transactions requested by the CPU : Core writing to Cards MMIO spaceevent=0xc1,ch_mask=0x80,fc_mask=7,umask=0x708000101unc_iio_txn_req_of_cpu.mem_read.part0uncore ioNumber Transactions requested of the CPU : Card reading from DRAMevent=0x84,ch_mask=1,fc_mask=7,umask=0x700100401unc_iio_txn_req_of_cpu.mem_read.part1uncore ioNumber Transactions requested of the CPU : Card reading from DRAMevent=0x84,ch_mask=2,fc_mask=7,umask=0x700200401unc_iio_txn_req_of_cpu.mem_read.part2uncore ioNumber Transactions requested of the CPU : Card reading from DRAMevent=0x84,ch_mask=4,fc_mask=7,umask=0x700400401unc_iio_txn_req_of_cpu.mem_read.part3uncore ioNumber Transactions requested of the CPU : Card reading from DRAMevent=0x84,ch_mask=8,fc_mask=7,umask=0x700800401unc_iio_txn_req_of_cpu.mem_read.part4uncore ioNumber Transactions requested of the CPU : Card reading from DRAMevent=0x84,ch_mask=0x10,fc_mask=7,umask=0x701000401unc_iio_txn_req_of_cpu.mem_read.part5uncore ioNumber Transactions requested of the CPU : Card reading from DRAMevent=0x84,ch_mask=0x20,fc_mask=7,umask=0x702000401unc_iio_txn_req_of_cpu.mem_read.part6uncore ioNumber Transactions requested of the CPU : Card reading from DRAMevent=0x84,ch_mask=0x40,fc_mask=7,umask=0x704000401unc_iio_txn_req_of_cpu.mem_read.part7uncore ioNumber Transactions requested of the CPU : Card reading from DRAMevent=0x84,ch_mask=0x80,fc_mask=7,umask=0x708000401unc_iio_txn_req_of_cpu.mem_write.part0uncore ioNumber Transactions requested of the CPU : Card writing to 
DRAMevent=0x84,ch_mask=1,fc_mask=7,umask=0x700100101unc_iio_txn_req_of_cpu.mem_write.part1uncore ioNumber Transactions requested of the CPU : Card writing to DRAMevent=0x84,ch_mask=2,fc_mask=7,umask=0x700200101unc_iio_txn_req_of_cpu.mem_write.part2uncore ioNumber Transactions requested of the CPU : Card writing to DRAMevent=0x84,ch_mask=4,fc_mask=7,umask=0x700400101unc_iio_txn_req_of_cpu.mem_write.part3uncore ioNumber Transactions requested of the CPU : Card writing to DRAMevent=0x84,ch_mask=8,fc_mask=7,umask=0x700800101unc_iio_txn_req_of_cpu.mem_write.part4uncore ioNumber Transactions requested of the CPU : Card writing to DRAMevent=0x84,ch_mask=0x10,fc_mask=7,umask=0x701000101unc_iio_txn_req_of_cpu.mem_write.part5uncore ioNumber Transactions requested of the CPU : Card writing to DRAMevent=0x84,ch_mask=0x20,fc_mask=7,umask=0x702000101unc_iio_txn_req_of_cpu.mem_write.part6uncore ioNumber Transactions requested of the CPU : Card writing to DRAMevent=0x84,ch_mask=0x40,fc_mask=7,umask=0x704000101unc_iio_txn_req_of_cpu.mem_write.part7uncore ioNumber Transactions requested of the CPU : Card writing to DRAMevent=0x84,ch_mask=0x80,fc_mask=7,umask=0x708000101unc_iio_txn_req_of_cpu.peer_write.part0uncore ioNumber Transactions requested of the CPU : Card writing to another Card (same or different stack)event=0x84,ch_mask=1,fc_mask=7,umask=0x700100201unc_iio_txn_req_of_cpu.peer_write.part1uncore ioNumber Transactions requested of the CPU : Card writing to another Card (same or different stack)event=0x84,ch_mask=2,fc_mask=7,umask=0x700200201unc_iio_txn_req_of_cpu.peer_write.part2uncore ioNumber Transactions requested of the CPU : Card writing to another Card (same or different stack)event=0x84,ch_mask=4,fc_mask=7,umask=0x700400201unc_iio_txn_req_of_cpu.peer_write.part3uncore ioNumber Transactions requested of the CPU : Card writing to another Card (same or different stack)event=0x84,ch_mask=8,fc_mask=7,umask=0x700800201unc_iio_txn_req_of_cpu.peer_write.part4uncore ioNumber Transactions requested of the CPU : Card writing to another Card (same or different stack)event=0x84,ch_mask=0x10,fc_mask=7,umask=0x701000201unc_iio_txn_req_of_cpu.peer_write.part5uncore ioNumber Transactions requested of the CPU : Card writing to another Card (same or different stack)event=0x84,ch_mask=0x20,fc_mask=7,umask=0x702000201unc_iio_txn_req_of_cpu.peer_write.part6uncore ioNumber Transactions requested of the CPU : Card writing to another Card (same or different stack)event=0x84,ch_mask=0x40,fc_mask=7,umask=0x704000201unc_iio_txn_req_of_cpu.peer_write.part7uncore ioNumber Transactions requested of the CPU : Card writing to another Card (same or different stack)event=0x84,ch_mask=0x80,fc_mask=7,umask=0x708000201unc_m_act_count.alluncore memoryDRAM Activate Count : Counts the number of DRAM Activate commands sent on this channel.  Activate commands are issued to open up a page on the DRAM devices so that it can be read or written to with a CAS.  One can calculate the number of Page Misses by subtracting the number of Page Miss precharges from the number of Activatesevent=2,umask=0xf701unc_m_act_count.rduncore memoryDRAM Activate Count : Read transaction on Page Empty or Page Miss : Counts the number of DRAM Activate commands sent on this channel.  Activate commands are issued to open up a page on the DRAM devices so that it can be read or written to with a CAS.  
One can calculate the number of Page Misses by subtracting the number of Page Miss precharges from the number of Activatesevent=2,umask=0xf101unc_m_act_count.ufilluncore memoryDRAM Activate Count : Underfill Read transaction on Page Empty or Page Miss : Counts the number of DRAM Activate commands sent on this channel.  Activate commands are issued to open up a page on the DRAM devices so that it can be read or written to with a CAS.  One can calculate the number of Page Misses by subtracting the number of Page Miss precharges from the number of Activatesevent=2,umask=0xf401unc_m_act_count.wruncore memoryDRAM Activate Count : Write transaction on Page Empty or Page Miss : Counts the number of DRAM Activate commands sent on this channel.  Activate commands are issued to open up a page on the DRAM devices so that it can be read or written to with a CAS.  One can calculate the number of Page Misses by subtracting the number of Page Miss precharges from the number of Activatesevent=2,umask=0xf201unc_m_cas_count_sch0.alluncore memoryCAS count for SubChannel 0, all CAS operationsevent=5,umask=0xff01unc_m_cas_count_sch0.rduncore memoryCAS count for SubChannel 0, all readsevent=5,umask=0xcf01unc_m_cas_count_sch0.rd_reguncore memoryCAS count for SubChannel 0 regular readsevent=5,umask=0xc101unc_m_cas_count_sch0.rd_underfilluncore memoryCAS count for SubChannel 0 underfill readsevent=5,umask=0xc401unc_m_cas_count_sch0.wruncore memoryCAS count for SubChannel 0, all writesevent=5,umask=0xf001unc_m_cas_count_sch0.wr_nonpreuncore memoryCAS count for SubChannel 0 regular writesevent=5,umask=0xd001unc_m_cas_count_sch0.wr_preuncore memoryCAS count for SubChannel 0 auto-precharge writesevent=5,umask=0xe001unc_m_cas_count_sch1.alluncore memoryCAS count for SubChannel 1, all CAS operationsevent=6,umask=0xff01unc_m_cas_count_sch1.rduncore memoryCAS count for SubChannel 1, all readsevent=6,umask=0xcf01unc_m_cas_count_sch1.rd_reguncore memoryCAS count for SubChannel 1 regular readsevent=6,umask=0xc101unc_m_cas_count_sch1.rd_underfilluncore memoryCAS count for SubChannel 1 underfill readsevent=6,umask=0xc401unc_m_cas_count_sch1.wruncore memoryCAS count for SubChannel 1, all writesevent=6,umask=0xf001unc_m_cas_count_sch1.wr_nonpreuncore memoryCAS count for SubChannel 1 regular writesevent=6,umask=0xd001unc_m_cas_count_sch1.wr_preuncore memoryCAS count for SubChannel 1 auto-precharge writesevent=6,umask=0xe001unc_m_clockticksuncore memoryNumber of DRAM DCLK clock cycles while the event is enabledevent=1,umask=101DRAM Clockticksunc_m_hclockticksuncore memoryNumber of DRAM HCLK clock cycles while the event is enabledevent=101DRAM Clockticksunc_m_pre_count.alluncore memoryDRAM Precharge commands. : Counts the number of DRAM Precharge commands sent on this channelevent=3,umask=0xff01unc_m_pre_count.pgtuncore memoryDRAM Precharge commands. : Precharge due to (?) : Counts the number of DRAM Precharge commands sent on this channelevent=3,umask=0xf801unc_m_pre_count.rduncore memoryDRAM Precharge commands. : Counts the number of DRAM Precharge commands sent on this channelevent=3,umask=0xf101unc_m_pre_count.ufilluncore memoryDRAM Precharge commands. : Counts the number of DRAM Precharge commands sent on this channelevent=3,umask=0xf401unc_m_pre_count.wruncore memoryDRAM Precharge commands. 
: Counts the number of DRAM Precharge commands sent on this channelevent=3,umask=0xf201unc_m_rdb_inserts.sch0uncore memoryRead buffer inserts on subchannel 0event=0x17,umask=0x4001unc_m_rdb_inserts.sch1uncore memoryRead buffer inserts on subchannel 1event=0x17,umask=0x8001unc_m_rdb_occupancy_sch0uncore memoryRead buffer occupancy on subchannel 0event=0x1a01unc_m_rdb_occupancy_sch1uncore memoryRead buffer occupancy on subchannel 1event=0x1b01unc_m_rpq_inserts.pch0uncore memoryRead Pending Queue Allocations : Counts the number of allocations into the Read Pending Queue.  This queue is used to schedule reads out to the memory controller and to track the requests.  Requests allocate into the RPQ soon after they enter the memory controller, and need credits for an entry in this buffer before being sent from the HA to the iMC.  They deallocate after the CAS command has been issued to memory.  This includes both ISOCH and non-ISOCH requestsevent=0x10,umask=0x5001unc_m_rpq_inserts.pch1uncore memoryRead Pending Queue Allocations : Counts the number of allocations into the Read Pending Queue.  This queue is used to schedule reads out to the memory controller and to track the requests.  Requests allocate into the RPQ soon after they enter the memory controller, and need credits for an entry in this buffer before being sent from the HA to the iMC.  They deallocate after the CAS command has been issued to memory.  This includes both ISOCH and non-ISOCH requestsevent=0x10,umask=0xa001unc_m_rpq_inserts.sch0_pch0uncore memoryRead Pending Queue inserts for subchannel 0, pseudochannel 0event=0x10,umask=0x1001unc_m_rpq_inserts.sch0_pch1uncore memoryRead Pending Queue inserts for subchannel 0, pseudochannel 1event=0x10,umask=0x2001unc_m_rpq_inserts.sch1_pch0uncore memoryRead Pending Queue inserts for subchannel 1, pseudochannel 0event=0x10,umask=0x4001unc_m_rpq_inserts.sch1_pch1uncore memoryRead Pending Queue inserts for subchannel 1, pseudochannel 1event=0x10,umask=0x8001unc_m_rpq_occupancy_sch0_pch0uncore memoryRead pending queue occupancy for subchannel 0, pseudochannel 0event=0x8001unc_m_rpq_occupancy_sch0_pch1uncore memoryRead pending queue occupancy for subchannel 0, pseudochannel 1event=0x8101unc_m_rpq_occupancy_sch1_pch0uncore memoryRead pending queue occupancy for subchannel 1, pseudochannel 0event=0x8201unc_m_rpq_occupancy_sch1_pch1uncore memoryRead pending queue occupancy for subchannel 1, pseudochannel 1event=0x8301unc_m_wpq_inserts.pch0uncore memoryWrite Pending Queue Allocationsevent=0x22,umask=0x5001unc_m_wpq_inserts.pch1uncore memoryWrite Pending Queue Allocationsevent=0x22,umask=0xa001unc_m_wpq_inserts.sch0_pch0uncore memoryWrite Pending Queue inserts for subchannel 0, pseudochannel 0event=0x22,umask=0x1001unc_m_wpq_inserts.sch0_pch1uncore memoryWrite Pending Queue inserts for subchannel 0, pseudochannel 1event=0x22,umask=0x2001unc_m_wpq_inserts.sch1_pch0uncore memoryWrite Pending Queue inserts for subchannel 1, pseudochannel 0event=0x22,umask=0x4001unc_m_wpq_inserts.sch1_pch1uncore memoryWrite Pending Queue inserts for subchannel 1, pseudochannel 1event=0x22,umask=0x8001unc_m_wpq_occupancy_sch0_pch0uncore memoryWrite pending queue occupancy for subchannel 0, pseudochannel 0event=0x8401unc_m_wpq_occupancy_sch0_pch1uncore memoryWrite pending queue occupancy for subchannel 0, pseudochannel 1event=0x8501unc_m_wpq_occupancy_sch1_pch0uncore memoryWrite pending queue occupancy for subchannel 1, pseudochannel 0event=0x8601unc_m_wpq_occupancy_sch1_pch1uncore memoryWrite pending queue occupancy for 
subchannel 1, pseudochannel 1event=0x8701unc_p_clockticksuncore powerPCU Clockticksevent=101PCU Clockticks:  The PCU runs off a fixed 1 GHz clock.  This event counts the number of pclk cycles measured while the counter was enabled.  The pclk, like the Memory Controller's dclk, counts at a constant rate making it a good measure of actual wall timedtlb_load_misses.stlb_hitvirtual memoryCounts the number of first level TLB misses but second level hits due to a demand load that did not start a page walk. Accounts for all page sizes. Will result in a DTLB write from STLBevent=8,period=200003,umask=0x2000dtlb_load_misses.walk_completedvirtual memoryCounts the number of page walks completed due to load DTLB missesevent=8,period=200003,umask=0xe00dtlb_load_misses.walk_pendingvirtual memoryCounts the number of page walks outstanding for Loads (demand or SW prefetch) in PMH every cycleevent=8,period=200003,umask=0x1000Counts the number of page walks outstanding for Loads (demand or SW prefetch) in PMH every cycle.  A PMH page walk is outstanding from page walk start till PMH becomes idle again (ready to serve next walk). Includes EPT-walk intervalsdtlb_store_misses.stlb_hitvirtual memoryCounts the number of first level TLB misses but second level hits due to stores that did not start a page walk. Accounts for all page sizes. Will result in a DTLB write from STLBevent=0x49,period=2000003,umask=0x2000dtlb_store_misses.walk_completedvirtual memoryCounts the number of page walks completed due to store DTLB missesevent=0x49,period=2000003,umask=0xe00dtlb_store_misses.walk_pendingvirtual memoryCounts the number of page walks outstanding in the page miss handler (PMH) for stores every cycleevent=0x49,period=200003,umask=0x1000Counts the number of page walks outstanding in the page miss handler (PMH) for stores every cycle. A PMH page walk is outstanding from page walk start till PMH becomes idle again (ready to serve next walk). Includes EPT-walk intervalsitlb_misses.walk_pendingvirtual memoryCounts the number of page walks outstanding for iside in PMH every cycleevent=0x85,period=200003,umask=0x1000Counts the number of page walks outstanding for iside in PMH every cycle.  A PMH page walk is outstanding from page walk start till PMH becomes idle again (ready to serve next walk). Includes EPT-walk intervals.  Walks could be counted by edge detecting on this event, but would count restarted suspended walksl2_request.hitcacheAll requests that hit L2 cache. [This event is alias to L2_RQSTS.HIT]event=0x24,period=200003,umask=0xdf00Counts all requests that hit L2 cache. [This event is alias to L2_RQSTS.HIT]l2_request.misscacheRead requests with true-miss in L2 cache [This event is alias to L2_RQSTS.MISS]event=0x24,period=200003,umask=0x3f00Counts read requests of any type with true-miss in the L2 cache. True-miss excludes L2 misses that were merged with ongoing L2 misses. [This event is alias to L2_RQSTS.MISS]l2_rqsts.hitcacheAll requests that hit L2 cache. [This event is alias to L2_REQUEST.HIT]event=0x24,period=200003,umask=0xdf00Counts all requests that hit L2 cache. [This event is alias to L2_REQUEST.HIT]l2_rqsts.misscacheRead requests with true-miss in L2 cache [This event is alias to L2_REQUEST.MISS]event=0x24,period=200003,umask=0x3f00Counts read requests of any type with true-miss in the L2 cache. True-miss excludes L2 misses that were merged with ongoing L2 misses. 
[This event is alias to L2_REQUEST.MISS]mem_inst_retired.stlb_hit_loadscacheRetired load instructions that hit the STLB  Supports address when precise (Precise event)event=0xd0,period=100003,umask=900Number of retired load instructions with a clean hit in the 2nd-level TLB (STLB)  Supports address when precise (Precise event)mem_inst_retired.stlb_hit_storescacheRetired store instructions that hit the STLB  Supports address when precise (Precise event)event=0xd0,period=100003,umask=0xa00Number of retired store instructions that hit in the 2nd-level TLB (STLB)  Supports address when precise (Precise event)offcore_requests.all_requestscacheAny memory transaction that reached the SQevent=0x21,period=100003,umask=0x8000Counts memory transactions reached the super queue including requests initiated by the core, all L3 prefetches, page walks, etc.offcore_requests.demand_code_rdcacheCacheable and Non-Cacheable code read requestsevent=0x21,period=100003,umask=200Counts both cacheable and Non-Cacheable code read requestsoffcore_requests_outstanding.cycles_with_data_rdcacheCycles when offcore outstanding cacheable Core Data Read transactions are present in SuperQueue (SQ), queue to uncoreevent=0x20,cmask=1,period=1000003,umask=800Counts cycles when offcore outstanding cacheable Core Data Read transactions are present in the super queue. A transaction is considered to be in the Offcore outstanding state between L2 miss and transaction completion sent to requestor (SQ de-allocation). See corresponding Umask under OFFCORE_REQUESTSoffcore_requests_outstanding.cycles_with_demand_rfocacheCycles with offcore outstanding demand rfo reads transactions in SuperQueue (SQ), queue to uncoreevent=0x20,cmask=1,period=1000003,umask=400Counts the number of offcore outstanding demand rfo Reads transactions in the super queue every cycle. The 'Offcore outstanding' state of the transaction lasts from the L2 miss until the sending transaction completion to requestor (SQ deallocation). See the corresponding Umask under OFFCORE_REQUESTSoffcore_requests_outstanding.demand_rfocacheStore Read transactions pending for off-core. Highly correlatedevent=0x20,period=1000003,umask=400Counts the number of off-core outstanding read-for-ownership (RFO) store transactions every cycle. An RFO transaction is considered to be in the Off-core outstanding state between L2 cache miss and transaction completionarith.fpdiv_activefloating pointThis event counts the cycles the floating point divider is busyevent=0xb0,cmask=1,period=1000003,umask=100frontend_retired.any_antfrontendRetired ANT branches (Precise event)event=0xc6,period=100007,umask=3,frontend=0x900Always Not Taken (ANT) conditional retired branches (no BTB entry and not mispredicted) (Precise event)frontend_retired.any_dsb_missfrontendRetired Instructions who experienced DSB miss (Precise event)event=0xc6,period=100007,umask=3,frontend=0x100Counts retired Instructions that experienced DSB (Decode stream buffer i.e. the decoded instruction-cache) miss (Precise event)frontend_retired.dsb_missfrontendRetired Instructions who experienced a critical DSB miss (Precise event)event=0xc6,period=100007,umask=3,frontend=0x1100Number of retired Instructions that experienced a critical DSB (Decode stream buffer i.e. the decoded instruction-cache) miss. 
frontend_retired.itlb_miss (event=0xc6,period=100007,umask=3,frontend=0x14): Retired instructions that experienced an iTLB (instruction TLB) true miss (Precise event).
frontend_retired.l1i_miss (event=0xc6,period=100007,umask=3,frontend=0x12): Retired instructions that experienced an instruction L1 cache true miss (Precise event).
frontend_retired.l2_miss (event=0xc6,period=100007,umask=3,frontend=0x13): Retired instructions that experienced an instruction L2 cache true miss (Precise event).
frontend_retired.latency_ge_N family (event=0xc6,period=100007,umask=3; Precise events): retired instructions fetched after an interval in which the front-end delivered no uops for at least N cycles, uninterrupted by a back-end stall:
    latency_ge_1    frontend=0x600106
    latency_ge_2    frontend=0x600206
    latency_ge_4    frontend=0x600406
    latency_ge_8    frontend=0x600806
    latency_ge_16   frontend=0x601006
    latency_ge_32   frontend=0x602006
    latency_ge_64   frontend=0x604006
    latency_ge_128  frontend=0x608006
    latency_ge_256  frontend=0x610006
    latency_ge_512  frontend=0x620006
frontend_retired.latency_ge_2_bubbles_ge_1 (event=0xc6,period=100007,umask=3,frontend=0x100206): Retired instructions delivered to the back-end after the front-end had at least 1 bubble-slot for a period of 2 cycles; a bubble-slot is an empty issue-pipeline slot while there was no RAT stall (Precise event).
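Since the latency_ge_N events differ only in the frontend= selector, one of them can be counted directly from the raw terms above. A hedged usage sketch driving the perf CLI from Python; it assumes a Linux perf binary and a core PMU named "cpu" that exposes the "frontend" format field (adjust the PMU name if your system differs):

    # Hedged sketch: count frontend_retired.latency_ge_8 for a child workload
    # by passing the raw encoding from the table above to `perf stat`.
    import subprocess

    EVENT = "cpu/event=0xc6,umask=0x3,frontend=0x600806,name=frontend_latency_ge_8/"
    subprocess.run(["perf", "stat", "-e", EVENT, "--", "sleep", "1"], check=True)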
frontend_retired.late_swpf (event=0xc6,period=100007,umask=3,frontend=0x9): Instruction-cache demand miss too close to a code-prefetch instruction; counts demand misses in the shadow of an ongoing i-fetch cache line triggered by PREFETCHIT0/1 instructions (Precise event).
frontend_retired.misp_ant (event=0xc6,period=100007,umask=2,frontend=0x9): Retired ANT branches that were mispredicted (Precise event).
frontend_retired.ms_flows (event=0xc6,period=100007,umask=3,frontend=0x8): FRONTEND_RETIRED.MS_FLOWS (Precise event).
frontend_retired.stlb_miss (event=0xc6,period=100007,umask=3,frontend=0x15): Retired instructions that experienced an STLB (second-level TLB) true miss (Precise event).
frontend_retired.unknown_branch (event=0xc6,period=100007,umask=3,frontend=0x17): FRONTEND_RETIRED.UNKNOWN_BRANCH (Precise event).
idq.ms_uops (event=0x79,period=1000003,umask=0x20): Uops initiated by MITE or the Decode Stream Buffer (DSB) and delivered to the Instruction Decode Queue (IDQ) while the Microcode Sequencer (MS) is busy; includes uops that may 'bypass' the IDQ.
idq_bubbles.core (event=0x9c,period=1000003,umask=1): Counts the subset of the Topdown Slots event in which no operation was delivered to the back-end pipeline due to instruction-fetch limitations while the back-end could have accepted more operations; common examples are instruction cache misses and x86 decode limitations. The count may be distributed among unhalted logical processors (hyper-threads) sharing the same physical core. Software can use this event as the numerator for the Frontend Bound metric (top-level category) of the Top-down Microarchitecture Analysis method.
idq_uops_not_delivered.core (event=0x9c,period=1000003,umask=1): Uops not delivered by the Instruction Decode Queue (IDQ) to the back-end of the pipeline when there were no back-end stalls; counts for one SMT thread in a given cycle.

[memory]
cycle_activity.cycles_l3_miss (event=0xa3,cmask=2,period=1000003,umask=2): Cycles while an L3 cache miss demand load is outstanding.
mem_trans_retired.load_latency_gt_2048 (event=0xcd,period=23,umask=1,ldlat=0x800): Randomly selected loads whose latency from first dispatch to completion is greater than 2048 cycles; reported latency may be longer than just the memory latency. Supports address when precise (Must be precise).
offcore_requests_outstanding.cycles_with_l3_miss_demand_data_rd (event=0x20,cmask=1,period=1000003,umask=0x10): Cycles with at least one demand data read request that missed the L3 cache pending data return in the super queue.

[other]
rs.empty_resource (event=0xa5,period=1000003,umask=1): Cycles when the RS was empty and a resource-allocation stall was asserted.

[pipeline]
br_misp_retired.*_cost family (event=0xc5; Precise events): mispredicted retired branches of the given kind. These precise events may be used to get the misprediction cost via the Retire_Latency field of PEBS; they fire on the instruction that immediately follows the mispredicted branch:
    all_branches_cost   period=400009,umask=0x44  all mispredicted branches
    cond_cost           period=400009,umask=0x51  conditional branches
    cond_ntaken_cost    period=400009,umask=0x50  non-taken conditionals
    cond_taken_cost     period=400009,umask=0x41  taken conditionals
    indirect_call_cost  period=400009,umask=0x42  indirect CALLs
    indirect_cost       period=100003,umask=0xc0  near indirect branches (excluding returns)
    near_taken_cost     period=400009,umask=0x60  taken near branches
    ret_cost            period=100007,umask=0x48  RET instructions
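The *_cost events deliver one Retire_Latency value per PEBS sample, so the average misprediction cost is just a per-event mean over those samples. A hedged sketch, assuming the samples have already been exported (for example via perf script) into (event name, retire latency) pairs; that input format is illustrative, not a fixed perf output:

    # Hedged sketch: aggregate PEBS Retire_Latency samples from the
    # br_misp_retired.*_cost events above into a per-event average cost.
    from collections import defaultdict

    def average_cost(samples):
        total, count = defaultdict(int), defaultdict(int)
        for event_name, retire_latency in samples:
            total[event_name] += retire_latency
            count[event_name] += 1
        return {name: total[name] / count[name] for name in total}

    demo = [("br_misp_retired.ret_cost", 31), ("br_misp_retired.ret_cost", 45),
            ("br_misp_retired.cond_taken_cost", 18)]   # placeholder samples
    print(average_cost(demo))  # mean retire latency per event, in core cycles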
inst_retired.nop (event=0xc0,period=2000003,umask=2): Counts all retired NOP, ENDBR32/64, or PREFETCHIT0/1 instructions (Precise event).
topdown.backend_bound_slots (event=0xa4,period=10000003,umask=2): Counts the subset of the Topdown Slots event not consumed by the back-end pipeline due to lack of back-end resources (memory-subsystem delays, execution-unit limitations, or other conditions). The count is distributed among unhalted logical processors (hyper-threads) sharing the same physical core. Software can use this event as the numerator for the Backend Bound metric (top-level category) of the Top-down Microarchitecture Analysis method.
uops_decoded.dec0_uops (event=0x76,period=1000003,umask=1): Counts the number of non-dec-by-all uops decoded by decoder 0.
uops_retired.slots (event=0xc2,period=2000003,umask=2): Counts the subset of the Topdown Slots event utilized by operations that eventually retire (commit); usually positively correlated with higher performance, e.g. as measured by the instructions-per-cycle metric. Software can use this event as the numerator for the Retiring metric (top-level category) of the Top-down Microarchitecture Analysis method.
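idq_bubbles.core (earlier in this list), topdown.backend_bound_slots, and uops_retired.slots are documented above as the numerators for three of the four top-level Top-down categories; the fourth, Bad Speculation, is the remainder. A minimal sketch of the level-1 breakdown, assuming the total-slots denominator (TOPDOWN.SLOTS, not part of this excerpt) is read from the same PMU:

    # Hedged sketch: Top-down level-1 fractions from the events above.
    def topdown_level1(slots, frontend_bubbles, backend_bound, retired_slots):
        frontend = frontend_bubbles / slots        # idq_bubbles.core
        backend = backend_bound / slots            # topdown.backend_bound_slots
        retiring = retired_slots / slots           # uops_retired.slots
        # Whatever remains was issued but thrown away: bad speculation.
        bad_speculation = max(0.0, 1.0 - frontend - backend - retiring)
        return {"frontend_bound": frontend, "backend_bound": backend,
                "retiring": retiring, "bad_speculation": bad_speculation}

    print(topdown_level1(slots=1_000_000, frontend_bubbles=250_000,
                         backend_bound=300_000, retired_slots=350_000))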
[uncore cache]
unc_chacms_clockticks (event=1): Clockticks for CMS units attached to the CHA.
unc_cha_dir_lookup.no_snp (event=0x53,umask=2): Transactions that looked up the multi-socket cacheline directory state and did not send a snoop, because the directory indicated it was not needed.
unc_cha_dir_lookup.snp (event=0x53,umask=1): Transactions that looked up the multi-socket cacheline directory state and sent one or more snoops, because the directory indicated it was needed.
unc_cha_dir_update.ha (event=0x54,umask=1): Multi-socket cacheline directory-state update memory writes issued from the HA pipe; does not include memory writes for I (Invalid) or E (Exclusive) cachelines.
unc_cha_dir_update.tor (event=0x54,umask=2): Multi-socket cacheline directory-state updates due to memory writes issued from the TOR pipe, resulting from a remote transaction hitting the SF/LLC and returning data core-to-core; does not include memory writes for I (Invalid) or E (Exclusive) cachelines.
CHA LLC lookup family (event=0x34) - counts the number of times the LLC was accessed, including code, data, prefetches, and hints coming from L2. Numerous filters are available; note the non-standard filtering equation. Requests that look up the cache multiple times count with multiple increments; umask bit 0 must ALWAYS be set and a state or states selected to match, otherwise the event counts nothing:
    unc_cha_llc_lookup.all_remote              umask=0x17e0ff  all transactions from remote agents (requests to remotely homed memory)
    unc_cha_llc_lookup.remotely_homed_address  umask=0x15dfff  transactions homed remotely (address resides in a remote MC)
    unc_cha_llc_lookup.remote_code             umask=0x1a10ff  CRd (code read/prefetch) requests from a remote socket
    unc_cha_llc_lookup.remote_data_rd          umask=0x1a01ff  data read/prefetch requests from a remote socket (CHAFilter0[24:21,17] bits correspond to the FMESI state)
    unc_cha_llc_lookup.remote_rfo              umask=0x1a08ff  RFO requests/prefetches from a remote socket
    unc_cha_llc_lookup.remote_snp              umask=0x1c19ff  snoop requests from a remote socket
    unc_cha_llc_lookup.write_remote            umask=0x17c2ff  writes to remotely homed memory (includes writebacks from L1/L2)
CHA lines-victimized family (event=0x37) - lines victimized on a fill, filterable by the state the line was in:
    unc_cha_llc_victims.remote_all  umask=0x800f  remote - all lines
    unc_cha_llc_victims.remote_e    umask=0x8002  remote - lines in E state
    unc_cha_llc_victims.remote_m    umask=0x8001  remote - lines in M state
    unc_cha_llc_victims.remote_s    umask=0x8004  remote - lines in S state
unc_cha_osb.remote_read (event=0x55,umask=4): OSB snoop broadcast, remote read: counts 1 per request causing OSB snoops to be broadcast; does not count all the snoops generated by OSB.
CHA remote snoop filter family (event=0x69):
    unc_cha_remote_sf.alloc_exclusive   umask=0x10
    unc_cha_remote_sf.alloc_shared      umask=8
    unc_cha_remote_sf.dealloc_evctcln   umask=0x40
    unc_cha_remote_sf.dirbacked_only    (no umask listed)
    unc_cha_remote_sf.hit_exclusive     umask=2
    unc_cha_remote_sf.hit_shared        umask=1
    unc_cha_remote_sf.inclusive_only    (no umask listed)
    unc_cha_remote_sf.miss              umask=4
    unc_cha_remote_sf.update_exclusive  (no umask listed)
    unc_cha_remote_sf.update_shared     umask=0x80
    unc_cha_remote_sf.victim_exclusive  (no umask listed)
    unc_cha_remote_sf.victim_shared     (no umask listed)
unc_cha_requests.invitoe_remote (event=0x50,umask=0x20): Requests from a remote socket for exclusive ownership of a cache line without receiving data (InvItoE) to the CHA.
unc_cha_requests.reads_remote (event=0x50,umask=2): Read requests from a remote socket made into the CHA; reads include all read opcodes (including RFO, the read-for-ownership issued before a write).
unc_cha_requests.writes_remote (event=0x50,umask=8): Write requests from a remote socket made into the CHA; writes include all writes (streaming, evictions, HitM, etc.).
CHA TOR inserts, CXL transactions (event=0x35) - transactions from a CXL device, split by L3 hit/miss:
    unc_cha_tor_inserts.cxl_hit_clflush      umask=0x78c8c7fd20  CLFlush, L3 hit
    unc_cha_tor_inserts.cxl_hit_fsrdcur      umask=0x78c8effd20  FsRdCur, L3 hit
    unc_cha_tor_inserts.cxl_hit_fsrdcurptl   umask=0x78c9effd20  FsRdCurPtl, L3 hit
    unc_cha_tor_inserts.cxl_hit_itom         umask=0x78cc47fd20  ItoM, L3 hit
    unc_cha_tor_inserts.cxl_hit_itomwr       umask=0x78cc4ffd20  ItoMWr, L3 hit
    unc_cha_tor_inserts.cxl_hit_mempushwr    umask=0x78cc6ffd20  MemPushWr, L3 hit
    unc_cha_tor_inserts.cxl_hit_wcil         umask=0x78c86ffd20  WCiL, L3 hit
    unc_cha_tor_inserts.cxl_hit_wcilf        umask=0x78c867fd20  WCiLF, L3 hit
    unc_cha_tor_inserts.cxl_hit_wil          umask=0x78c87ffd20  WiL, L3 hit
    unc_cha_tor_inserts.cxl_miss_clflush     umask=0x78c8c7fe20  CLFlush, L3 miss
    unc_cha_tor_inserts.cxl_miss_fsrdcur     umask=0x78c8effe20  FsRdCur, L3 miss
    unc_cha_tor_inserts.cxl_miss_fsrdcurptl  umask=0x78c9effe20  FsRdCurPtl, L3 miss
    unc_cha_tor_inserts.cxl_miss_itom        umask=0x78cc47fe20  ItoM, L3 miss
    unc_cha_tor_inserts.cxl_miss_itomwr      umask=0x78cc4ffe20  ItoMWr, L3 miss
    unc_cha_tor_inserts.cxl_miss_mempushwr   umask=0x78cc6ffe20  MemPushWr, L3 miss
    unc_cha_tor_inserts.cxl_miss_wcil        umask=0x78c86ffe20  WCiL, L3 miss
    unc_cha_tor_inserts.cxl_miss_wcilf       umask=0x78c867fe20  WCiLF, L3 miss
    unc_cha_tor_inserts.cxl_miss_wil         umask=0x78c87ffe20  WiL, L3 miss
CHA TOR inserts, requests from iA cores (event=0x35):
    unc_cha_tor_inserts.ia_drd                   umask=0xc817ff01  DRds (data reads) issued by iA cores
    unc_cha_tor_inserts.ia_drdpte                umask=0xc837ff01  DRdPte issued by iA cores due to a page walk
    unc_cha_tor_inserts.ia_drd_pref              umask=0xc897ff01  DRd prefetches issued by iA cores
    unc_cha_tor_inserts.ia_hit_drd               umask=0xc817fd01  DRds that hit the LLC
    unc_cha_tor_inserts.ia_hit_drdpte            umask=0xc837fd01  DRdPte (page walk) that hit the LLC
    unc_cha_tor_inserts.ia_hit_drd_pref          umask=0xc897fd01  DRd prefetches that hit the LLC
    unc_cha_tor_inserts.ia_miss_crd_pref_remote  umask=0xc88f7e01  CRd prefetches that missed the LLC, homed remotely
    unc_cha_tor_inserts.ia_miss_crd_remote       umask=0xc80f7e01  CRds that missed the LLC, homed remotely
    unc_cha_tor_inserts.ia_miss_drd              umask=0xc817fe01  DRds that missed the LLC
    unc_cha_tor_inserts.ia_miss_drdpte           umask=0xc837fe01  DRdPte (page walk) that missed the LLC
    unc_cha_tor_inserts.ia_miss_drd_cxl_acc      umask=0x10c8178201  DRds (and equivalent opcodes) that miss the L3 and target memory in a CXL type 2 memory expander card
    unc_cha_tor_inserts.ia_miss_drd_ddr          umask=0xc8178601  DRds targeting DDR memory that missed the LLC
    unc_cha_tor_inserts.ia_miss_drd_local        umask=0xc816fe01  DRds that missed the LLC, homed locally
    unc_cha_tor_inserts.ia_miss_drd_local_ddr    umask=0xc8168601  DRds targeting DDR that missed the LLC, homed locally
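Because the cxl_hit_*/cxl_miss_* TOR inserts above come in matched pairs, a per-opcode L3 hit rate for CXL-device traffic falls out directly. A hedged sketch; the counter values are made-up placeholders for what a collection tool would read:

    # Hedged sketch: per-opcode L3 hit rate from paired cxl_hit_*/cxl_miss_*
    # TOR-insert counts described above. Values are illustrative only.
    def l3_hit_rate(hits: int, misses: int) -> float:
        total = hits + misses
        return hits / total if total else 0.0

    counts = {"itom": (120_000, 30_000), "wcil": (5_000, 45_000)}  # placeholders
    for opcode, (hit, miss) in counts.items():
        print(f"cxl {opcode}: L3 hit rate = {l3_hit_rate(hit, miss):.1%}")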
    unc_cha_tor_inserts.ia_miss_drd_local_pmm        umask=0xc8168a01  DRds targeting PMM that missed the LLC, homed locally
    unc_cha_tor_inserts.ia_miss_drd_pmm              umask=0xc8178a01  DRds targeting PMM that missed the LLC
    unc_cha_tor_inserts.ia_miss_drd_pref             umask=0xc897fe01  DRd prefetches that missed the LLC
    unc_cha_tor_inserts.ia_miss_drd_pref_ddr         umask=0xc8978601  DRd prefetches targeting DDR that missed the LLC
    unc_cha_tor_inserts.ia_miss_drd_pref_local       umask=0xc896fe01  DRd prefetches that miss the LLC and snoop filter, targeting local memory
    unc_cha_tor_inserts.ia_miss_drd_pref_local_ddr   umask=0xc8968601  DRd prefetches targeting DDR that missed the LLC, homed locally
    unc_cha_tor_inserts.ia_miss_drd_pref_local_pmm   umask=0xc8968a01  DRd prefetches targeting PMM that missed the LLC, homed locally
    unc_cha_tor_inserts.ia_miss_drd_pref_pmm         umask=0xc8978a01  DRd prefetches targeting PMM that missed the LLC
    unc_cha_tor_inserts.ia_miss_drd_pref_remote      umask=0xc8977e01  DRd prefetches that miss the LLC and snoop filter, targeting remote memory
    unc_cha_tor_inserts.ia_miss_drd_pref_remote_ddr  umask=0xc8970601  DRd prefetches targeting DDR that missed the LLC, homed remotely
    unc_cha_tor_inserts.ia_miss_drd_pref_remote_pmm  umask=0xc8970a01  DRd prefetches targeting PMM that missed the LLC, homed remotely
    unc_cha_tor_inserts.ia_miss_drd_remote           umask=0xc8177e01  DRds that missed the LLC, homed remotely
    unc_cha_tor_inserts.ia_miss_drd_remote_ddr       umask=0xc8170601  DRds targeting DDR that missed the LLC, homed remotely
    unc_cha_tor_inserts.ia_miss_drd_remote_pmm       umask=0xc8170a01  DRds targeting PMM that missed the LLC, homed remotely
    unc_cha_tor_inserts.ia_miss_local_wcilf_pmm      umask=0xc8668a01  WCiLFs targeting PMM that missed the LLC, homed locally
    unc_cha_tor_inserts.ia_miss_local_wcil_pmm       umask=0xc86e8a01  WCiLs targeting PMM that missed the LLC, homed locally
    unc_cha_tor_inserts.ia_miss_remote_wcilf_ddr     umask=0xc8670601  WCiLFs targeting DDR that missed the LLC, homed remotely
    unc_cha_tor_inserts.ia_miss_remote_wcilf_pmm     umask=0xc8670a01  WCiLFs targeting PMM that missed the LLC, homed remotely
    unc_cha_tor_inserts.ia_miss_remote_wcil_ddr      umask=0xc86f0601  WCiLs targeting DDR that missed the LLC, homed remotely
    unc_cha_tor_inserts.ia_miss_remote_wcil_pmm      umask=0xc86f0a01  WCiLs targeting PMM that missed the LLC, homed remotely
    unc_cha_tor_inserts.ia_miss_rfo_pref_remote      umask=0xc8877e01  RFO prefetches that missed the LLC, homed remotely
    unc_cha_tor_inserts.ia_miss_rfo_remote           umask=0xc8077e01  RFOs that missed the LLC, homed remotely
    unc_cha_tor_inserts.ia_miss_wcilf_pmm            umask=0xc8678a01  WCiLFs targeting PMM-homed addresses that missed the LLC
    unc_cha_tor_inserts.ia_miss_wcil_pmm             umask=0xc86f8a01  WCiLs targeting PMM-homed addresses that missed the LLC
CHA TOR inserts, requests from IO devices (event=0x35):
    unc_cha_tor_inserts.io_itomcachenear_local   umask=0xcd42ff04  ItoMCacheNear (partial write) addressing memory on the local socket
    unc_cha_tor_inserts.io_itomcachenear_remote  umask=0xcd437f04  ItoMCacheNear (partial write) addressing memory on a remote socket
    unc_cha_tor_inserts.io_itom_local            umask=0xcc42ff04  ItoM (write) addressing memory on the local socket
    unc_cha_tor_inserts.io_itom_remote           umask=0xcc437f04  ItoM (write) addressing memory on a remote socket
    unc_cha_tor_inserts.io_pcirdcur_local        umask=0xc8f2ff04  PCIRdCur (read) addressing memory on the local socket
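The local/remote split in the ia_miss_drd_* inserts above gives a direct NUMA-locality measure for demand reads that miss the LLC. A hedged sketch with placeholder counter values:

    # Hedged sketch: fraction of LLC-missing demand data reads served by
    # local memory, from unc_cha_tor_inserts.ia_miss_drd_{local,remote} above.
    def local_fraction(local_misses: int, remote_misses: int) -> float:
        total = local_misses + remote_misses
        return local_misses / total if total else 1.0

    local, remote = 900_000, 100_000  # placeholder counter readings
    print(f"demand reads served locally: {local_fraction(local, remote):.1%}")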
    unc_cha_tor_inserts.io_pcirdcur_remote       umask=0xc8f37f04  PCIRdCur (read) addressing memory on a remote socket
    unc_cha_tor_inserts.rem_all                  umask=0xc001ffc8  all remote requests (e.g. snoops, writebacks) that came from remote sockets
    unc_cha_tor_inserts.rem_snps                 umask=0xc001ff08  all snoops to this LLC that came from remote sockets
CHA TOR occupancy (event=0x36): each unc_cha_tor_occupancy.* event in this excerpt mirrors the unc_cha_tor_inserts.* event of the same name, with the same umask value, counting TOR occupancy rather than inserts. Occupancy variants are listed for all eighteen cxl_hit_*/cxl_miss_* opcodes and for the iA-core requests ia_drd, ia_drdpte, ia_drd_pref, ia_hit_drd, ia_hit_drdpte, ia_hit_drd_pref, ia_miss_crd_pref_remote, ia_miss_crd_remote, ia_miss_drd, ia_miss_drdpte, ia_miss_drd_ddr, ia_miss_drd_local, ia_miss_drd_local_ddr, ia_miss_drd_local_pmm, ia_miss_drd_pmm, ia_miss_drd_pref, ia_miss_drd_pref_ddr,
ia_miss_drd_pref_local, ia_miss_drd_pref_local_ddr, ia_miss_drd_pref_local_pmm, ia_miss_drd_pref_pmm, ia_miss_drd_pref_remote, ia_miss_drd_pref_remote_ddr, ia_miss_drd_pref_remote_pmm, ia_miss_drd_remote, ia_miss_drd_remote_ddr, ia_miss_drd_remote_pmm, ia_miss_local_wcilf_pmm, ia_miss_local_wcil_pmm, ia_miss_remote_wcilf_ddr, ia_miss_remote_wcilf_pmm, ia_miss_remote_wcil_ddr, ia_miss_remote_wcil_pmm, ia_miss_rfo_pref_remote, ia_miss_rfo_remote, ia_miss_wcilf_pmm, ia_miss_wcil_pmm, rem_all, and rem_snps.
The IO-device occupancy events carry miss-filtered umasks of their own:
    unc_cha_tor_occupancy.io_miss_itomcachenear_local   umask=0xcd42fe04  ItoMCacheNear (partial write) from an IO device on the local socket that missed the LLC
    unc_cha_tor_occupancy.io_miss_itomcachenear_remote  umask=0xcd437e04  ItoMCacheNear from an IO device on a remote socket that missed the LLC
    unc_cha_tor_occupancy.io_miss_itom_local            umask=0xcc42fe04  ItoM (write) from an IO device on the local socket that missed the LLC
    unc_cha_tor_occupancy.io_miss_itom_remote           umask=0xcc437e04  ItoM from an IO device on a remote socket that missed the LLC
    unc_cha_tor_occupancy.io_miss_pcirdcur_local        umask=0xc8f2fe04  PCIRdCur (read) from an IO device on the local socket that missed the LLC
    unc_cha_tor_occupancy.io_miss_pcirdcur_remote       umask=0xc8f37e04  PCIRdCur from an IO device on a remote socket that missed the LLC
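Pairing a TOR occupancy count with its matching insert count gives an average residency per transaction by Little's law, which is the usual way these CHA events are used to estimate LLC-miss latency. A hedged sketch; the result is in CHA clocks, and the uncore frequency used for conversion varies by SKU and power state, so 2.0 GHz is only a placeholder:

    # Hedged sketch: average LLC-miss latency for demand reads as
    # unc_cha_tor_occupancy.ia_miss_drd / unc_cha_tor_inserts.ia_miss_drd,
    # converted from CHA clocks to nanoseconds at an assumed uncore frequency.
    def avg_miss_latency_ns(occupancy: int, inserts: int, cha_ghz: float = 2.0) -> float:
        if inserts == 0:
            return 0.0
        return (occupancy / inserts) / cha_ghz  # clocks -> ns at cha_ghz

    print(f"{avg_miss_latency_ns(42_000_000, 150_000):.0f} ns")  # placeholder counts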
[uncore cxl: uncore_b2cxl]
unc_b2cxl_clockticks (event=1): B2CXL clockticks.

[uncore interconnect]
unc_b2cmi_direct2core_not_taken_dirstate (event=0x17,umask=1): Number of times D2C (direct-to-core) was not honoured by egress due to directory-state constraints.
unc_b2cmi_direct2upi_not_taken_credits (event=0x1b,umask=1): Number of times D2K (direct-to-UPI) was not done due to credit constraints.
unc_b2cmi_direct2upi_not_taken_credits.egress (event=0x1b,umask=1): Direct-to-UPI transactions ignored due to lack of credits (all).
unc_b2cmi_direct2upi_not_taken_dirstate (event=0x1a,umask=1): Number of times D2K was not honoured by egress due to directory-state constraints.
unc_b2cmi_direct2upi_not_taken_dirstate.egress (event=0x1a,umask=1): Cycles when Direct2UPI was disabled: egress ignored D2U due to directory-state constraints.
unc_b2cmi_direct2upi_taken (event=0x19,umask=1): Number of times egress did D2K (Direct to KTI/UPI).
unc_b2cmi_direct2upi_txn_override (event=0x1c,umask=1): Number of times D2K was not honoured even though the incoming request had D2K set, for non-cisgress transactions.
B2CMI directory hit family (event=0x1d):
    unc_b2cmi_directory_hit.clean    umask=0x38  on a non-dirty line (any state)
    unc_b2cmi_directory_hit.clean_a  umask=0x20  on a non-dirty line in A state
    unc_b2cmi_directory_hit.clean_i  umask=8     on a non-dirty line in I state
    unc_b2cmi_directory_hit.clean_s  umask=0x10  on a non-dirty line in S state
    unc_b2cmi_directory_hit.dirty    umask=7     on a dirty (modified) line (any state)
    unc_b2cmi_directory_hit.dirty_a  umask=4     on a dirty line in A state
    unc_b2cmi_directory_hit.dirty_i  umask=1     on a dirty line in I state
    unc_b2cmi_directory_hit.dirty_s  umask=2     on a dirty line in S state
B2CMI directory lookup family (event=0x20) - 1LM or 2LM hit read data returns to egress, to non-persistent memory:
    unc_b2cmi_directory_lookup.any      umask=1  any directory state
    unc_b2cmi_directory_lookup.state_a  umask=8  directory A
    unc_b2cmi_directory_lookup.state_i  umask=2  directory I
    unc_b2cmi_directory_lookup.state_s  umask=4  directory S
B2CMI directory miss family (event=0x1e):
    unc_b2cmi_directory_miss.clean    umask=0x38  on a non-dirty line (any state)
    unc_b2cmi_directory_miss.clean_a  umask=0x20  on a non-dirty line in A state
    unc_b2cmi_directory_miss.clean_i  umask=8     on a non-dirty line in I state
    unc_b2cmi_directory_miss.clean_s  umask=0x10  on a non-dirty line in S state
    unc_b2cmi_directory_miss.dirty    umask=7     on a dirty (modified) line (any state)
    unc_b2cmi_directory_miss.dirty_a  umask=4     on a dirty line in A state
    unc_b2cmi_directory_miss.dirty_i  umask=1     on a dirty line in I state
    unc_b2cmi_directory_miss.dirty_s  umask=2     on a dirty line in S state
B2CMI directory update family (event=0x21):
    unc_b2cmi_directory_update.a2i       umask=0x320  any A-to-I transition
    unc_b2cmi_directory_update.a2s       umask=0x340  any A-to-S transition
    unc_b2cmi_directory_update.any       umask=0x301  cisgress directory updates
    unc_b2cmi_directory_update.hit_any   umask=0x101  any 1LM/2LM hit data return resulting in a directory update to non-persistent memory (DRAM)
    unc_b2cmi_directory_update.hit_x2a   umask=0x114  near-memory directory update to the A state
    unc_b2cmi_directory_update.hit_x2i   umask=0x128  near-memory directory update to the I state
    unc_b2cmi_directory_update.hit_x2s   umask=0x142  near-memory directory update to the S state
    unc_b2cmi_directory_update.i2a       umask=0x304  any I-to-A transition
    unc_b2cmi_directory_update.i2s       umask=0x302  any I-to-S transition
    unc_b2cmi_directory_update.miss_x2a  umask=0x214  far-memory directory update to the A state
    unc_b2cmi_directory_update.miss_x2i  umask=0x228  far-memory directory update to the I state
    unc_b2cmi_directory_update.miss_x2s  umask=0x242  far-memory directory update to the S state
    unc_b2cmi_directory_update.s2a       umask=0x310  any S-to-A transition
    unc_b2cmi_directory_update.s2i       umask=0x308  any S-to-I transition
    unc_b2cmi_directory_update.x2a       umask=0x314  any directory update to the A state
    unc_b2cmi_directory_update.x2i       umask=0x328  any directory update to the I state
    unc_b2cmi_directory_update.x2s       umask=0x342  any directory update to the S state
unc_b2cmi_imc_reads.to_ddr_as_cache (event=0x24,umask=0x110): Reads to the NM (near-memory) region.
unc_b2cmi_imc_writes.ni (event=0x25): Non-inclusive writes, all channels.
unc_b2cmi_imc_writes.ni_miss (event=0x25): Non-inclusive miss writes, all channels.
unc_b2cmi_imc_writes.to_ddr_as_cache (event=0x25,umask=0x140): DDR acting as cache, all channels.
unc_b2cmi_prefcam_inserts.ch0_upi (event=0x56,umask=2): Prefetch CAM inserts, UPI channel 0.
unc_b2cmi_prefcam_inserts.upi_allch (event=0x56,umask=2): Prefetch CAM inserts, UPI, all channels.
B2CMI tag hit family (event=0x1f) - 2LM reads and WrNI writes that hit near memory:
    unc_b2cmi_tag_hit.all       umask=0xf  all 2LM reads and WrNI that hit
    unc_b2cmi_tag_hit.rd_clean  umask=1    2LM reads that hit clean
    unc_b2cmi_tag_hit.rd_dirty  umask=2    2LM reads that hit dirty
    unc_b2cmi_tag_hit.wr_clean  umask=4    2LM WrNI that hit clean
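The directory hit and miss families above are symmetric, so a directory hit rate (split by clean versus dirty lines) is a simple ratio. A hedged sketch with placeholder counter values:

    # Hedged sketch: B2CMI directory hit rate from the
    # unc_b2cmi_directory_hit.* / unc_b2cmi_directory_miss.* counters above.
    def rate(hit: int, miss: int) -> float:
        total = hit + miss
        return hit / total if total else 0.0

    hits = {"clean": 800_000, "dirty": 90_000}   # directory_hit.{clean,dirty}
    miss = {"clean": 150_000, "dirty": 60_000}   # directory_miss.{clean,dirty}
    for kind in hits:
        print(f"{kind}: directory hit rate = {rate(hits[kind], miss[kind]):.1%}")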
interconnectCounts the 2lm WRNI which were a hit cleanevent=0x1f,umask=401unc_b2cmi_tag_hit.wr_dirtyuncore interconnectCounts the 2lm WRNI which were a hit dirtyevent=0x1f,umask=801unc_b2cmi_tag_miss.cleanuncore interconnectCounts the 2lm second way read miss for a WrNIevent=0x4b,umask=501unc_b2cmi_tag_miss.dirtyuncore interconnectCounts the 2lm second way read miss for a WrNIevent=0x4b,umask=0xa01unc_b2cmi_tag_miss.rd_2wayuncore interconnectCounts the 2lm second way read miss for a Rdevent=0x4b,umask=0x1001unc_b2cmi_tag_miss.rd_cleanuncore interconnectCounts the 2lm reads which were a miss and the cache line is unmodifiedevent=0x4b,umask=101unc_b2cmi_tag_miss.rd_dirtyuncore interconnectCounts the 2lm reads which were a miss and the cache line is modifiedevent=0x4b,umask=201unc_b2cmi_tag_miss.wr_2wayuncore interconnectCounts the 2lm second way read miss for a WrNIevent=0x4b,umask=0x2001unc_b2cmi_tag_miss.wr_cleanuncore interconnectCounts the 2lm WRNI which were a miss and the cache line is unmodifiedevent=0x4b,umask=401unc_b2cmi_tag_miss.wr_dirtyuncore interconnectCounts the 2lm WRNI which were a miss and the cache line is modifiedevent=0x4b,umask=801uncore_b2hotunc_b2hot_clockticksuncore interconnectUNC_B2HOT_CLOCKTICKSevent=1,umask=101Clockticks for the B2HOT unituncore_b2upiunc_b2upi_clockticksuncore interconnectNumber of uclks in domainevent=101unc_mdf_clockticksuncore interconnectMDF Clockticksevent=101unc_mdf_rxr_bypass.ad_bncuncore interconnectNumber of packets bypassing the ingress queueevent=0x14,umask=101unc_mdf_rxr_bypass.ad_crduncore interconnectNumber of packets bypassing the ingress queueevent=0x14,umask=0x1001unc_mdf_rxr_bypass.akuncore interconnectNumber of packets bypassing the ingress queueevent=0x14,umask=201unc_mdf_rxr_bypass.bl_bncuncore interconnectNumber of packets bypassing the ingress queueevent=0x14,umask=401unc_mdf_rxr_bypass.bl_crduncore interconnectNumber of packets bypassing the ingress queueevent=0x14,umask=0x2001unc_mdf_rxr_bypass.ivuncore interconnectNumber of packets bypassing the ingress queueevent=0x14,umask=801unc_mdf_rxr_inserts.ad_bncuncore interconnectNumber of allocations into the Ingress  used to queue up requests from the mesh (AD_BNC)event=0x12,umask=101unc_mdf_rxr_inserts.ad_crduncore interconnectNumber of allocations into the Ingress  used to queue up requests from the mesh (AD)event=0x12,umask=0x1001unc_mdf_rxr_inserts.akuncore interconnectNumber of allocations into the Ingress  used to queue up requests from the mesh (AK)event=0x12,umask=201unc_mdf_rxr_inserts.bl_bncuncore interconnectNumber of allocations into the Ingress  used to queue up requests from the mesh (BL_BNC)event=0x12,umask=401unc_mdf_rxr_inserts.bl_crduncore interconnectNumber of allocations into the Ingress  used to queue up requests from the mesh (BL_CRD)event=0x12,umask=0x2001unc_mdf_rxr_inserts.ivuncore interconnectNumber of allocations into the Ingress  used to queue up requests from the mesh (IV)event=0x12,umask=801unc_mdf_rxr_occupancy.ad_bncuncore interconnectOccupancy counts for the Ingress bufferevent=0x13,umask=101unc_mdf_rxr_occupancy.ad_crduncore interconnectOccupancy counts for the Ingress bufferevent=0x13,umask=0x1001unc_mdf_rxr_occupancy.akuncore interconnectOccupancy counts for the Ingress bufferevent=0x13,umask=201unc_mdf_rxr_occupancy.bl_bncuncore interconnectOccupancy counts for the Ingress bufferevent=0x13,umask=401unc_mdf_rxr_occupancy.bl_crduncore interconnectOccupancy counts for the Ingress bufferevent=0x13,umask=0x2001unc_mdf_rxr_occupancy.ivuncore 
interconnectOccupancy counts for the Ingress bufferevent=0x13,umask=801unc_mdf_txr_bypass.ad_bncuncore interconnectEgress bypasses for for AD_BNCevent=0x1e,umask=101unc_mdf_txr_bypass.ad_crduncore interconnectEgress bypasses for for AD_CRDevent=0x1e,umask=0x1001unc_mdf_txr_bypass.akuncore interconnectEgress bypasses for for AKevent=0x1e,umask=201unc_mdf_txr_bypass.bl_bncuncore interconnectEgress bypasses for for BL_BNCevent=0x1e,umask=401unc_mdf_txr_bypass.bl_crduncore interconnectEgress bypasses for for BL_CRDevent=0x1e,umask=0x2001unc_mdf_txr_bypass.ivuncore interconnectEgress bypasses for for IVevent=0x1e,umask=801unc_mdf_txr_inserts.ad_bncuncore interconnectNumber of egress inserts for for AD_BNCevent=0x1c,umask=101unc_mdf_txr_inserts.ad_crduncore interconnectNumber of egress inserts for for AD_CRDevent=0x1c,umask=0x1001unc_mdf_txr_inserts.akuncore interconnectNumber of egress inserts for for AKevent=0x1c,umask=201unc_mdf_txr_inserts.bl_bncuncore interconnectNumber of egress inserts for for BL_BNCevent=0x1c,umask=401unc_mdf_txr_inserts.bl_crduncore interconnectNumber of egress inserts for for BL_CRDevent=0x1c,umask=0x2001unc_mdf_txr_inserts.ivuncore interconnectNumber of egress inserts for for IVevent=0x1c,umask=801unc_mdf_txr_occupancy.ad_bncuncore interconnectEgress occupancy for for AD_BNCevent=0x1d,umask=101unc_mdf_txr_occupancy.ad_crduncore interconnectEgress occupancy for for AD_CRDevent=0x1d,umask=0x1001unc_mdf_txr_occupancy.akuncore interconnectEgress occupancy for for AKevent=0x1d,umask=201unc_mdf_txr_occupancy.bl_bncuncore interconnectEgress occupancy for for BL_BNCevent=0x1d,umask=401unc_mdf_txr_occupancy.bl_crduncore interconnectEgress occupancy for for BL_CRDevent=0x1d,umask=0x2001unc_mdf_txr_occupancy.ivuncore interconnectEgress occupancy for for IVevent=0x1d,umask=801unc_upi_clockticksuncore interconnectNumber of UPI LL clock cycles while the event is enabledevent=101Number of kfclksunc_upi_l1_power_cyclesuncore interconnectCycles in L1 : Number of UPI qfclk cycles spent in L1 power mode.  L1 is a mode that totally shuts down a UPI link.  Use edge detect to count the number of instances when the UPI link entered L1.  Link power states are per link and per direction, so for example the Tx direction could be in one state while Rx was in another. 
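A minimal sketch of reading one of the named uncore events above through perf(1). It assumes a Linux host whose perf build resolves these event names (i.e. matching Intel server hardware and a recent perf); the helper name and the one-second dummy workload are mine, not part of the table.

    import subprocess

    EVENT = "unc_b2cmi_directory_update.any"  # named event from the table above

    def count_event(event: str, seconds: int = 1) -> int:
        # -a: system-wide (uncore PMUs are socket-wide, not per task)
        # -x,: CSV output, which perf stat writes to stderr
        res = subprocess.run(
            ["perf", "stat", "-a", "-x,", "-e", event, "sleep", str(seconds)],
            capture_output=True, text=True, check=True)
        total = 0
        for line in res.stderr.splitlines():
            field = line.split(",")[0].strip()
            if field.isdigit():          # skips "#" comments and "<not counted>"
                total += int(field)
        return total

    print(EVENT, "=", count_event(EVENT))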
--- UPI LL: basic header matches (Rx event=0x5, Tx event=0x4; umask selects the message class) ---
unc_upi_rxl_basic_hdr_match.ncb | event=0x5,umask=0xe | Receive path: Non-Coherent Bypass
unc_upi_rxl_basic_hdr_match.ncb_opc | event=0x5,umask=0x10e | Receive path: Non-Coherent Bypass, match opcode
unc_upi_rxl_basic_hdr_match.ncs | event=0x5,umask=0xf | Receive path: Non-Coherent Standard
unc_upi_rxl_basic_hdr_match.ncs_opc | event=0x5,umask=0x10f | Receive path: Non-Coherent Standard, match opcode
unc_upi_rxl_basic_hdr_match.req | event=0x5,umask=0x8 | Receive path: Request
unc_upi_rxl_basic_hdr_match.req_opc | event=0x5,umask=0x108 | Receive path: Request, match opcode
unc_upi_rxl_basic_hdr_match.rspcnflt | event=0x5,umask=0x1aa | Receive path: Response - Conflict
unc_upi_rxl_basic_hdr_match.rspi | event=0x5,umask=0x12a | Receive path: Response - Invalid
unc_upi_rxl_basic_hdr_match.rsp_data | event=0x5,umask=0xc | Receive path: Response - Data
unc_upi_rxl_basic_hdr_match.rsp_data_opc | event=0x5,umask=0x10c | Receive path: Response - Data, match opcode
unc_upi_rxl_basic_hdr_match.rsp_nodata | event=0x5,umask=0xa | Receive path: Response - No Data
unc_upi_rxl_basic_hdr_match.rsp_nodata_opc | event=0x5,umask=0x10a | Receive path: Response - No Data, match opcode
unc_upi_rxl_basic_hdr_match.snp | event=0x5,umask=0x9 | Receive path: Snoop
unc_upi_rxl_basic_hdr_match.snp_opc | event=0x5,umask=0x109 | Receive path: Snoop, match opcode
unc_upi_rxl_basic_hdr_match.wb | event=0x5,umask=0xd | Receive path: Writeback
unc_upi_rxl_basic_hdr_match.wb_opc | event=0x5,umask=0x10d | Receive path: Writeback, match opcode

--- UPI LL: valid flits received (event=0x3); "legal flit time" hides the impact of L0p and L0c ---
unc_upi_rxl_flits.all_data | event=0x3,umask=0xf | All data flits received
unc_upi_rxl_flits.all_null | event=0x3,umask=0x27 | Null FLITs received from any slot
unc_upi_rxl_flits.data | event=0x3,umask=0x8 | Data flits (which consume all slots); the count is based on the Slot0-2 mask, so it can be 0-3 depending on which slots are enabled
unc_upi_rxl_flits.idle | event=0x3,umask=0x47 | Idle
unc_upi_rxl_flits.llcrd | event=0x3,umask=0x10 | LLCRD not empty (non-zero payload); applies only to slot 2, since LLCRD is only allowed there
unc_upi_rxl_flits.llctrl | event=0x3,umask=0x40 | LLCTRL, equivalent to an idle packet; counts slot 0 LLCTRL messages
unc_upi_rxl_flits.non_data | event=0x3,umask=0x97 | All non-data flits
unc_upi_rxl_flits.null | event=0x3,umask=0x20 | Slot NULL or LLCRD empty; an all-zero LLCRD is treated as NULL; slot 1 is not treated as NULL if slot 0 is dual-slot; applies to slots 0, 1 and 2
unc_upi_rxl_flits.prothdr | event=0x3,umask=0x80 | Protocol headers in slots 0, 1, 2 (depending on slot umask bits)
unc_upi_rxl_flits.slot0 | event=0x3,umask=0x1 | Slot 0; other mask bits determine which header types are counted
unc_upi_rxl_flits.slot1 | event=0x3,umask=0x2 | Slot 1; other mask bits determine which header types are counted
unc_upi_rxl_flits.slot2 | event=0x3,umask=0x4 | Slot 2; other mask bits determine which header types are counted

--- UPI LL: Rx flit buffer ---
unc_upi_rxl_inserts.slot0 | event=0x30,umask=0x1 | RxQ flit buffer allocations, slot 0. Data normally bypasses the RxQ straight to the ring interface; if the ring backs up, flits allocate here, increasing latency. Use with the occupancy event to compute the average flit-buffer lifetime.
unc_upi_rxl_inserts.slot1 | event=0x30,umask=0x2 | RxQ flit buffer allocations, slot 1 (same behavior as slot 0)
unc_upi_rxl_inserts.slot2 | event=0x30,umask=0x4 | RxQ flit buffer allocations, slot 2 (same behavior as slot 0)
unc_upi_rxl_occupancy.slot0 | event=0x32,umask=0x1 | RxQ occupancy, all packets, slot 0
unc_upi_rxl_occupancy.slot1 | event=0x32,umask=0x2 | RxQ occupancy, all packets, slot 1
unc_upi_rxl_occupancy.slot2 | event=0x32,umask=0x4 | RxQ occupancy, all packets, slot 2

--- UPI LL: Tx header matches and flits ---
unc_upi_txl_basic_hdr_match.ncb | event=0x4,umask=0xe | Transmit path: Non-Coherent Bypass
unc_upi_txl_basic_hdr_match.ncb_opc | event=0x4,umask=0x10e | Transmit path: Non-Coherent Bypass, match opcode
unc_upi_txl_basic_hdr_match.ncs | event=0x4,umask=0xf | Transmit path: Non-Coherent Standard
unc_upi_txl_basic_hdr_match.ncs_opc | event=0x4,umask=0x10f | Transmit path: Non-Coherent Standard, match opcode
unc_upi_txl_basic_hdr_match.req | event=0x4,umask=0x8 | Transmit path: Request
unc_upi_txl_basic_hdr_match.req_opc | event=0x4,umask=0x108 | Transmit path: Request, match opcode
unc_upi_txl_basic_hdr_match.rspcnflt | event=0x4,umask=0x1aa | Transmit path: Response - Conflict
unc_upi_txl_basic_hdr_match.rspi | event=0x4,umask=0x12a | Transmit path: Response - Invalid
unc_upi_txl_basic_hdr_match.rsp_data | event=0x4,umask=0xc | Transmit path: Response - Data
unc_upi_txl_basic_hdr_match.rsp_data_opc | event=0x4,umask=0x10c | Transmit path: Response - Data, match opcode
unc_upi_txl_basic_hdr_match.rsp_nodata | event=0x4,umask=0xa | Transmit path: Response - No Data
unc_upi_txl_basic_hdr_match.rsp_nodata_opc | event=0x4,umask=0x10a | Transmit path: Response - No Data, match opcode
unc_upi_txl_basic_hdr_match.snp | event=0x4,umask=0x9 | Transmit path: Snoop
unc_upi_txl_basic_hdr_match.snp_opc | event=0x4,umask=0x109 | Transmit path: Snoop, match opcode
unc_upi_txl_basic_hdr_match.wb | event=0x4,umask=0xd | Transmit path: Writeback
unc_upi_txl_basic_hdr_match.wb_opc | event=0x4,umask=0x10d | Transmit path: Writeback, match opcode
unc_upi_txl_flits.all_data | event=0x2,umask=0xf | All data flits sent across this UPI link
unc_upi_txl_flits.all_null | event=0x2,umask=0x27 | All Null flits sent
unc_upi_txl_flits.data | event=0x2,umask=0x8 | Data flits (which consume all slots); count is 0-3 depending on the Slot0-2 mask
unc_upi_txl_flits.idle | event=0x2,umask=0x47 | Idle
unc_upi_txl_flits.llcrd | event=0x2,umask=0x10 | LLCRD not empty (non-zero payload); slot 2 only
unc_upi_txl_flits.llctrl | event=0x2,umask=0x40 | LLCTRL, equivalent to an idle packet; counts slot 0 LLCTRL messages
unc_upi_txl_flits.non_data | event=0x2,umask=0x97 | All non-data flits; Null FLITs transmitted to any slot
unc_upi_txl_flits.null | event=0x2,umask=0x20 | Slot NULL or LLCRD empty (same rules as the Rx variant)
unc_upi_txl_flits.prothdr | event=0x2,umask=0x80 | Protocol headers in slots 0, 1, 2 (depending on slot umask bits)
unc_upi_txl_flits.slot0 | event=0x2,umask=0x1 | Slot 0; other mask bits determine which header types are counted
unc_upi_txl_flits.slot1 | event=0x2,umask=0x2 | Slot 1; other mask bits determine which header types are counted
unc_upi_txl_flits.slot2 | event=0x2,umask=0x4 | Slot 2; other mask bits determine which header types are counted
unc_upi_txl_inserts | event=0x40 | Tx flit buffer allocations. Data normally bypasses the TxQ straight to the link, but the TxQ is used with L0p and when LLR occurs, increasing latency. Use with the occupancy event to compute the average flit-buffer lifetime.
unc_upi_txl_occupancy | event=0x42 | Tx flit buffer occupancy; accumulates the number of flits in the TxQ. Use with the cycles-not-empty event for average occupancy, or with the allocations event for average lifetime in the TxQ.
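To turn the flit counters above into a bandwidth figure, the usual convention is that nine data flits carry one 64-byte cache line, so bytes is approximately flits * 64/9. A minimal sketch; the 64/9 constant is my assumption, not stated in the table:

    def upi_tx_bytes(all_data_flits: int) -> float:
        """Convert a unc_upi_txl_flits.all_data count into bytes,
        assuming nine data flits per 64-byte cache line."""
        return all_data_flits * 64 / 9

    # e.g. 9,000,000 data flits observed over one second ~= 64 MB/s TX data
    print(f"{upi_tx_bytes(9_000_000) / 1e6:.0f} MB/s")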
--- IIO (uncore io) ---
unc_iio_data_req_by_cpu.peer_read.all_parts | event=0xc0,ch_mask=0xff,fc_mask=0x7,umask=0x70ff008 | Data requested by the CPU: another card (different IIO stack) reading from this card
unc_iio_data_req_by_cpu.peer_write.all_parts | event=0xc0,ch_mask=0xff,fc_mask=0x7,umask=0x70ff002 | Data requested by the CPU: another card (different IIO stack) writing to this card
unc_iio_data_req_of_cpu.mem_read.all_parts | event=0x83,ch_mask=0xff,fc_mask=0x7,umask=0x70ff004 | Counts once for every 4 bytes read from this card to memory; includes reads to IO
unc_iio_data_req_of_cpu.mem_write.all_parts | event=0x83,ch_mask=0xff,fc_mask=0x7,umask=0x70ff001 | Counts once for every 4 bytes written from this card to memory; includes writes to IO
unc_iio_data_req_of_cpu.peer_read.part0 | event=0x83,ch_mask=0x1,fc_mask=0x7,umask=0x7001008 | Data requested of the CPU: card reading from another card (same or different stack)
unc_iio_data_req_of_cpu.peer_read.part1 | event=0x83,ch_mask=0x2,fc_mask=0x7,umask=0x7002008 | (as part0)
unc_iio_data_req_of_cpu.peer_read.part2 | event=0x83,ch_mask=0x4,fc_mask=0x7,umask=0x7004008 | (as part0)
unc_iio_data_req_of_cpu.peer_read.part3 | event=0x83,ch_mask=0x8,fc_mask=0x7,umask=0x7008008 | (as part0)
unc_iio_data_req_of_cpu.peer_read.part4 | event=0x83,ch_mask=0x10,fc_mask=0x7,umask=0x7010008 | (as part0)
unc_iio_data_req_of_cpu.peer_read.part5 | event=0x83,ch_mask=0x20,fc_mask=0x7,umask=0x7020008 | (as part0)
unc_iio_data_req_of_cpu.peer_read.part6 | event=0x83,ch_mask=0x40,fc_mask=0x7,umask=0x7040008 | (as part0)
unc_iio_data_req_of_cpu.peer_read.part7 | event=0x83,ch_mask=0x80,fc_mask=0x7,umask=0x7080008 | (as part0)
unc_iio_data_req_of_cpu.peer_write.all_parts | event=0x83,ch_mask=0xff,fc_mask=0x7,umask=0x70ff002 | Counts once for every 4 bytes written from this card to a peer device's IO space
unc_iio_iommu0.all_lookups | event=0x40,umask=0x2 | IOTLB lookups, all
unc_iio_iommu1.num_mem_accesses_high | event=0x41,umask=0x80 | IOMMU high-priority memory accesses
unc_iio_iommu1.num_mem_accesses_low | event=0x41,umask=0x40 | IOMMU low-priority memory accesses
unc_iio_iommu1.slpwc_2m_hits | event=0x41,umask=0x2 | Second-level page walk cache hits to a 2M page
unc_iio_iommu1.slpwc_cache_fills | event=0x41,umask=0x20 | Second-level page walk cache fills
unc_iio_iommu1.slpwc_cache_lookups | event=0x41,umask=0x1 | Second-level page walk cache lookups
unc_iio_iommu3.cyc_pwt_full | event=0x43,umask=0x2 | Cycles the PWT is full
unc_iio_iommu3.int_cache_hits | event=0x43,umask=0x80 | Interrupt entry cache hits
unc_iio_iommu3.int_cache_lookups | event=0x43,umask=0x40 | Interrupt entry cache lookups
unc_iio_iommu3.num_inval_ctxt_cache | event=0x43,umask=0x8 | Context cache invalidation events
unc_iio_iommu3.num_inval_int_cache | event=0x43,umask=0x20 | Interrupt entry cache invalidation events
unc_iio_iommu3.num_inval_iotlb | event=0x43,umask=0x4 | IOTLB invalidation events
unc_iio_iommu3.num_inval_pasid_cache | event=0x43,umask=0x10 | PASID cache invalidation events
unc_iio_num_oustanding_req_from_cpu.to_io | event=0xc5,ch_mask=0xff,fc_mask=0x7,umask=0x70ff008 | Outbound request queue occupancy, to device: outbound requests/completions IIO is currently processing
unc_iio_num_outstanding_req_of_cpu.data | event=0x88,ch_mask=0xff,fc_mask=0x7,umask=0x700f020 | Passing data to be written
unc_iio_num_outstanding_req_of_cpu.final_rd_wr | event=0x88,ch_mask=0xff,fc_mask=0x7,umask=0x700f008 | Issuing final read or write of line
unc_iio_num_outstanding_req_of_cpu.iommu_hit | event=0x88,ch_mask=0xff,fc_mask=0x7,umask=0x700f002 | Processing response from IOMMU
unc_iio_num_outstanding_req_of_cpu.iommu_req | event=0x88,ch_mask=0xff,fc_mask=0x7,umask=0x700f001 | Issuing to IOMMU
unc_iio_num_outstanding_req_of_cpu.req_own | event=0x88,ch_mask=0xff,fc_mask=0x7,umask=0x700f004 | Request ownership
unc_iio_num_outstanding_req_of_cpu.wr | event=0x88,ch_mask=0xff,fc_mask=0x7,umask=0x700f010 | Writing line
unc_iio_num_req_of_cpu_by_tgt.rem_p2p | event=0x8e,ch_mask=0xff,fc_mask=0x7,umask=0x70ff010 | (no description)
unc_iio_txn_req_by_cpu.mem_read.all_parts | event=0xc1,ch_mask=0xff,fc_mask=0x7,umask=0x70ff004 | Transactions requested by the CPU: core reading from the card's MMIO space
unc_iio_txn_req_by_cpu.mem_write.all_parts | event=0xc1,ch_mask=0xff,fc_mask=0x7,umask=0x70ff001 | Transactions requested by the CPU: core writing to the card's MMIO space
unc_iio_txn_req_by_cpu.peer_read.all_parts | event=0xc1,ch_mask=0xff,fc_mask=0x7,umask=0x70ff008 | Transactions requested by the CPU: another card (different IIO stack) reading from this card
unc_iio_txn_req_by_cpu.peer_write.all_parts | event=0xc1,ch_mask=0xff,fc_mask=0x7,umask=0x70ff002 | Transactions requested by the CPU: another card (different IIO stack) writing to this card
unc_iio_txn_req_of_cpu.peer_read.part0 | event=0x84,ch_mask=0x1,fc_mask=0x7,umask=0x7001008 | Transactions requested of the CPU: card reading from another card (same or different stack)
unc_iio_txn_req_of_cpu.peer_read.part1 | event=0x84,ch_mask=0x2,fc_mask=0x7,umask=0x7002008 | (as part0)
unc_iio_txn_req_of_cpu.peer_read.part2 | event=0x84,ch_mask=0x4,fc_mask=0x7,umask=0x7004008 | (as part0)
unc_iio_txn_req_of_cpu.peer_read.part3 | event=0x84,ch_mask=0x8,fc_mask=0x7,umask=0x7008008 | (as part0)
unc_iio_txn_req_of_cpu.peer_read.part4 | event=0x84,ch_mask=0x10,fc_mask=0x7,umask=0x7010008 | (as part0)
unc_iio_txn_req_of_cpu.peer_read.part5 | event=0x84,ch_mask=0x20,fc_mask=0x7,umask=0x7020008 | (as part0)
unc_iio_txn_req_of_cpu.peer_read.part6 | event=0x84,ch_mask=0x40,fc_mask=0x7,umask=0x7040008 | (as part0)
unc_iio_txn_req_of_cpu.peer_read.part7 | event=0x84,ch_mask=0x80,fc_mask=0x7,umask=0x7080008 | (as part0)
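The mem_read/mem_write descriptions above state that each count covers 4 bytes moved between the card and memory, so device DMA bandwidth is just 4 times the count over the measurement interval. A tiny sketch of that conversion; the function name and the sample counts are mine:

    def iio_dma_bytes(mem_read_count: int, mem_write_count: int) -> tuple[int, int]:
        """Convert unc_iio_data_req_of_cpu.mem_read/.mem_write.all_parts
        counts to bytes: per the table, one count = 4 bytes."""
        return mem_read_count * 4, mem_write_count * 4

    rd, wr = iio_dma_bytes(250_000_000, 500_000_000)
    print(f"device reads {rd/1e9:.1f} GB, device writes {wr/1e9:.1f} GB")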
--- Core events: cache (Haswell; "Spec update" notes list applicable errata) ---
l1d.replacement | event=0x51,umask=0x1,period=2000003 | L1D data line replacements; counts new data lines brought into the L1 data cache, which cause other lines to be evicted
l1d_pend_miss.pending | event=0x48,umask=0x1,period=2000003 | L1D miss outstanding duration in cycles; increments by the number of outstanding L1D misses every cycle (set cmask=1 and edge=1 to count occurrences)
l1d_pend_miss.pending_cycles | event=0x48,umask=0x1,cmask=1,period=2000003 | Cycles with L1D load misses outstanding
l1d_pend_miss.request_fb_full | event=0x48,umask=0x2,period=2000003 | Times a request needed a fill-buffer entry but none was available, i.e. FB unavailability was the dominant reason for blocking the request. A request includes cacheable/uncacheable demand loads, stores and SW prefetches; HW prefetches are excluded
l2_demand_rqsts.wb_hit | event=0x27,umask=0x50,period=200003 | Not-rejected writebacks that hit L2
l2_lines_in.all | event=0xf1,umask=0x7,period=100003 | L2 cache lines filling L2; lines are filled into L2 when there was an L2 miss
l2_lines_in.e | event=0xf1,umask=0x4,period=100003 | L2 cache lines in E state filling L2
l2_lines_in.i | event=0xf1,umask=0x1,period=100003 | L2 cache lines in I state filling L2
l2_lines_in.s | event=0xf1,umask=0x2,period=100003 | L2 cache lines in S state filling L2
l2_lines_out.demand_clean | event=0xf2,umask=0x5,period=100003 | Clean L2 cache lines evicted by demand
l2_lines_out.demand_dirty | event=0xf2,umask=0x6,period=100003 | Dirty L2 cache lines evicted by demand
l2_rqsts.all_code_rd | event=0x24,umask=0xe4,period=200003 | All L2 code requests
l2_rqsts.all_demand_data_rd | event=0x24,umask=0xe1,period=200003 | Any demand and L1 HW prefetch data load requests to L2 (Spec update: HSD78, HSM80)
l2_rqsts.all_demand_miss | event=0x24,umask=0x27,period=200003 | Demand requests that miss L2 (Spec update: HSD78, HSM80)
l2_rqsts.all_demand_references | event=0x24,umask=0xe7,period=200003 | Demand requests to L2 (Spec update: HSD78, HSM80)
l2_rqsts.all_pf | event=0x24,umask=0xf8,period=200003 | All L2 HW prefetcher requests
l2_rqsts.all_rfo | event=0x24,umask=0xe2,period=200003 | All L2 store RFO requests
l2_rqsts.code_rd_hit | event=0x24,umask=0xc4,period=200003 | Instruction fetches that hit L2
l2_rqsts.code_rd_miss | event=0x24,umask=0x24,period=200003 | Instruction fetches that missed L2
l2_rqsts.demand_data_rd_hit | event=0x24,umask=0xc1,period=200003 | Demand data reads, initiated by load instructions, that hit L2 (Spec update: HSD78, HSM80)
l2_rqsts.demand_data_rd_miss | event=0x24,umask=0x21,period=200003 | Demand data reads that missed L2, no rejects (Spec update: HSD78, HSM80)
l2_rqsts.l2_pf_hit | event=0x24,umask=0xd0,period=200003 | L2 HW prefetcher requests that hit L2
l2_rqsts.l2_pf_miss | event=0x24,umask=0x30,period=200003 | L2 HW prefetcher requests that missed L2
l2_rqsts.miss | event=0x24,umask=0x3f,period=200003 | All requests that miss L2 (Spec update: HSD78, HSM80)
l2_rqsts.references | event=0x24,umask=0xff,period=200003 | All requests to L2 (Spec update: HSD78, HSM80)
l2_rqsts.rfo_hit | event=0x24,umask=0xc2,period=200003 | Store RFO requests that hit L2
l2_rqsts.rfo_miss | event=0x24,umask=0x22,period=200003 | Store RFO requests that miss L2
l2_trans.all_pf | event=0xf0,umask=0x8,period=200003 | Any MLC or L3 HW prefetch accessing L2, including rejects
l2_trans.all_requests | event=0xf0,umask=0x80,period=200003 | Transactions accessing the L2 pipe
l2_trans.code_rd | event=0xf0,umask=0x4,period=200003 | L2 accesses when fetching instructions
l2_trans.demand_data_rd | event=0xf0,umask=0x1,period=200003 | Demand data reads accessing L2
l2_trans.l1d_wb | event=0xf0,umask=0x10,period=200003 | L1D writebacks accessing L2
l2_trans.l2_fill | event=0xf0,umask=0x20,period=200003 | L2 fill requests accessing L2
l2_trans.l2_wb | event=0xf0,umask=0x40,period=200003 | L2 writebacks accessing L2
l2_trans.rfo | event=0xf0,umask=0x2,period=200003 | RFO requests accessing L2
lock_cycles.cache_lock_duration | event=0x63,umask=0x2,period=2000003 | Cycles in which the L1D is locked
longest_lat_cache.miss | event=0x2e,umask=0x41,period=100003 | Core-originated cacheable demand requests that missed the last level cache
longest_lat_cache.reference | event=0x2e,umask=0x4f,period=100003 | Core-originated cacheable demand requests that reference a line in the last level cache
mem_load_uops_l3_hit_retired.xsnp_hit | event=0xd2,umask=0x2,period=20011 | Retired load uops sourced from L3 with a cross-core snoop hit in an on-pkg core cache (precise; supports address; Spec update: HSD29, HSD25, HSM26, HSM30)
mem_load_uops_l3_hit_retired.xsnp_hitm | event=0xd2,umask=0x4,period=20011 | Retired load uops whose data source was a HitM response from shared L3 (precise; supports address; Spec update: HSD29, HSD25, HSM26, HSM30)
mem_load_uops_l3_hit_retired.xsnp_miss | event=0xd2,umask=0x1,period=20011 | Retired load uops: L3 hit, cross-core snoop missed in on-pkg core cache (precise; supports address; Spec update: HSD29, HSD25, HSM26, HSM30)
mem_load_uops_l3_hit_retired.xsnp_none | event=0xd2,umask=0x8,period=100003 | Retired load uops that hit L3 with no snoop required (precise; supports address; Spec update: HSD74, HSD29, HSD25, HSM26, HSM30)
mem_load_uops_l3_miss_retired.local_dram | event=0xd3,umask=0x1,period=100003 | Retired load uops whose data came from local DRAM, snoop not needed or snoop miss (RspI); excludes HW prefetches (precise; supports address; Spec update: HSD74, HSD29, HSD25, HSM30)
mem_load_uops_retired.hit_lfb | event=0xd1,umask=0x40,period=100003 | Retired load uops that missed L1 but hit an in-flight fill buffer for the same cache line, data not yet ready (precise; supports address; Spec update: HSM30)
mem_load_uops_retired.l1_hit | event=0xd1,umask=0x1,period=2000003 | Retired load uops with L1 hits as data source (precise; supports address; Spec update: HSD29, HSM30)
mem_load_uops_retired.l1_miss | event=0xd1,umask=0x8,period=100003 | Retired load uops that missed L1 (precise; supports address; Spec update: HSM30)
mem_load_uops_retired.l2_hit | event=0xd1,umask=0x2,period=100003 | Retired load uops with L2 hits as data source (precise; supports address; Spec update: HSD76, HSD29, HSM30)
mem_load_uops_retired.l2_miss | event=0xd1,umask=0x10,period=50021 | Retired load uops that missed L2; unknown data source excluded (precise; supports address; Spec update: HSD29, HSM30)
mem_load_uops_retired.l3_hit | event=0xd1,umask=0x4,period=50021 | Retired load uops that hit L3 with no snoop required (precise; supports address; Spec update: HSD74, HSD29, HSD25, HSM26, HSM30)
mem_load_uops_retired.l3_miss | event=0xd1,umask=0x20,period=100003 | Retired load uops that missed L3; unknown data source excluded (precise; supports address; Spec update: HSD74, HSD29, HSD25, HSM26, HSM30)
mem_uops_retired.all_loads | event=0xd0,umask=0x81,period=2000003 | All retired load uops, including SW prefetch uops (PREFETCHNTA, PREFETCHT0/1/2, PREFETCHW) (precise; supports address; Spec update: HSD29, HSM30)
mem_uops_retired.all_stores | event=0xd0,umask=0x82,period=2000003 | All retired store uops (precise; supports address; Spec update: HSD29, HSM30)
mem_uops_retired.lock_loads | event=0xd0,umask=0x21,period=100003 | Retired load uops with locked access (precise; supports address; Spec update: HSD76, HSD29, HSM30)
mem_uops_retired.split_loads | event=0xd0,umask=0x41,period=100003 | Retired load uops that split a cacheline boundary (precise; supports address; Spec update: HSD29, HSM30)
mem_uops_retired.split_stores | event=0xd0,umask=0x42,period=100003 | Retired store uops that split a cacheline boundary (precise; supports address; Spec update: HSD29, HSM30)
mem_uops_retired.stlb_miss_loads | event=0xd0,umask=0x11,period=100003 | Retired load uops that miss the STLB (precise; supports address; Spec update: HSD29, HSM30)
mem_uops_retired.stlb_miss_stores | event=0xd0,umask=0x12,period=100003 | Retired store uops that miss the STLB (precise; supports address; Spec update: HSD29, HSM30)
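The l2_rqsts encodings above can be used as raw perf events, which is handy when the named aliases are not available. A minimal sketch computing an L2 miss ratio from l2_rqsts.miss (umask=0x3f) over l2_rqsts.references (umask=0xff); the workload and helper name are mine:

    import subprocess

    MISS = "cpu/event=0x24,umask=0x3f/"   # l2_rqsts.miss, from the table
    REFS = "cpu/event=0x24,umask=0xff/"   # l2_rqsts.references

    def l2_miss_ratio(cmd: list[str]) -> float:
        res = subprocess.run(
            ["perf", "stat", "-x,", "-e", MISS, "-e", REFS, "--"] + cmd,
            capture_output=True, text=True, check=True)
        # first CSV field of each event line is the count
        counts = [int(f.split(",")[0]) for f in res.stderr.splitlines()
                  if f.split(",")[0].isdigit()]
        misses, refs = counts[0], counts[1]
        return misses / refs if refs else 0.0

    print(f"L2 miss ratio: {l2_miss_ratio(['sleep', '1']):.2%}")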
--- Core events: offcore requests and responses ---
offcore_requests.all_data_rd | event=0xb0,umask=0x8,period=100003 | Demand and prefetch data reads sent to uncore
offcore_requests.demand_code_rd | event=0xb0,umask=0x2,period=100003 | Cacheable and non-cacheable demand code reads sent to uncore
offcore_requests.demand_data_rd | event=0xb0,umask=0x1,period=100003 | Demand data reads sent to uncore (Spec update: HSD78, HSM80)
offcore_requests.demand_rfo | event=0xb0,umask=0x4,period=100003 | Demand RFOs sent to uncore, including regular RFOs, locks, ItoM
offcore_requests_buffer.sq_full | event=0xb2,umask=0x1,period=2000003 | Offcore requests buffer cannot take more entries for this thread core
offcore_requests_outstanding.all_data_rd | event=0x60,umask=0x8,period=2000003 | Outstanding cacheable core data read transactions in the SuperQueue (SQ) to uncore; set cmask=1 to count cycles (Spec update: HSD62, HSD61, HSM63)
offcore_requests_outstanding.cycles_with_data_rd | event=0x60,umask=0x8,cmask=1,period=2000003 | Cycles with outstanding cacheable core data reads in the SQ (Spec update: HSD62, HSD61, HSM63)
offcore_requests_outstanding.cycles_with_demand_data_rd | event=0x60,umask=0x1,cmask=1,period=2000003 | Cycles with outstanding demand data reads in the SQ (Spec update: HSD78, HSD62, HSD61, HSM63, HSM80)
offcore_requests_outstanding.cycles_with_demand_rfo | event=0x60,umask=0x4,cmask=1,period=2000003 | Cycles with outstanding demand RFO reads in the SQ (Spec update: HSD62, HSD61, HSM63)
offcore_requests_outstanding.demand_code_rd | event=0x60,umask=0x2,period=2000003 | Outstanding demand code reads in the SQ; set cmask=1 to count cycles (Spec update: HSD62, HSD61, HSM63)
offcore_requests_outstanding.demand_data_rd | event=0x60,umask=0x1,period=2000003 | Outstanding demand data reads in the SQ; set cmask=1 to count cycles (Spec update: HSD78, HSD62, HSD61, HSM63, HSM80)
offcore_requests_outstanding.demand_data_rd_ge_6 | event=0x60,umask=0x1,cmask=6,period=2000003 | Cycles with at least 6 outstanding demand data reads in the uncore queue (Spec update: HSD78, HSD62, HSD61, HSM63, HSM80)
offcore_requests_outstanding.demand_rfo | event=0x60,umask=0x4,period=2000003 | Outstanding RFO store transactions in the SQ; set cmask=1 to count cycles (Spec update: HSD62, HSD61, HSM63)

OFFCORE_RESPONSE, L3-hit filters (all rows: event=0xb7,umask=0x1,period=100003, plus the offcore_rsp value shown):
offcore_response.all_code_rd.l3_hit.hit_other_core_no_fwd | offcore_rsp=0x4003C0244 | All demand & prefetch code reads that hit L3; snoops to sibling cores hit in E/S state, line not forwarded
offcore_response.all_data_rd.l3_hit.hitm_other_core | offcore_rsp=0x10003C0091 | All demand & prefetch data reads that hit L3; a sibling-core snoop hits the line in M state and the line is forwarded
offcore_response.all_data_rd.l3_hit.hit_other_core_no_fwd | offcore_rsp=0x4003C0091 | All demand & prefetch data reads that hit L3; sibling snoops hit in E/S, line not forwarded
offcore_response.all_reads.l3_hit.hitm_other_core | offcore_rsp=0x10003C07F7 | All reads that hit L3; sibling-core snoop HitM, line forwarded
offcore_response.all_reads.l3_hit.hit_other_core_no_fwd | offcore_rsp=0x4003C07F7 | All reads that hit L3; sibling snoops hit in E/S, line not forwarded
offcore_response.all_requests.l3_hit.any_response | offcore_rsp=0x3F803C8FFF | All requests that hit L3
offcore_response.all_rfo.l3_hit.hitm_other_core | offcore_rsp=0x10003C0122 | All demand & prefetch RFOs that hit L3; sibling-core snoop HitM, line forwarded
offcore_response.all_rfo.l3_hit.hit_other_core_no_fwd | offcore_rsp=0x4003C0122 | All demand & prefetch RFOs that hit L3; sibling snoops hit in E/S, line not forwarded
offcore_response.demand_code_rd.l3_hit.hitm_other_core | offcore_rsp=0x10003C0004 | Demand code reads that hit L3; sibling-core snoop HitM, line forwarded
offcore_response.demand_code_rd.l3_hit.hit_other_core_no_fwd | offcore_rsp=0x4003C0004 | Demand code reads that hit L3; sibling snoops hit in E/S, line not forwarded
offcore_response.demand_data_rd.l3_hit.hitm_other_core | offcore_rsp=0x10003C0001 | Demand data reads that hit L3; sibling-core snoop HitM, line forwarded
offcore_response.demand_data_rd.l3_hit.hit_other_core_no_fwd | offcore_rsp=0x4003C0001 | Demand data reads that hit L3; sibling snoops hit in E/S, line not forwarded
offcore_response.demand_rfo.l3_hit.hitm_other_core | offcore_rsp=0x10003C0002 | Demand data writes (RFOs) that hit L3; sibling-core snoop HitM, line forwarded
offcore_response.demand_rfo.l3_hit.hit_other_core_no_fwd | offcore_rsp=0x4003C0002 | Demand data writes (RFOs) that hit L3; sibling snoops hit in E/S, line not forwarded
offcore_response.pf_l2_code_rd.l3_hit.any_response | offcore_rsp=0x3F803C0040 | Prefetch code reads that hit L3
offcore_response.pf_l2_data_rd.l3_hit.any_response | offcore_rsp=0x3F803C0010 | Prefetch (bringing data to L2) data reads that hit L3
offcore_response.pf_l2_rfo.l3_hit.any_response | offcore_rsp=0x3F803C0020 | Prefetch (bringing data to L2) RFOs that hit L3
offcore_response.pf_l3_code_rd.l3_hit.any_response | offcore_rsp=0x3F803C0200 | Prefetch (bringing data to LLC only) code reads that hit L3
offcore_response.pf_l3_data_rd.l3_hit.any_response | offcore_rsp=0x3F803C0080 | Prefetch (bringing data to LLC only) data reads that hit L3
offcore_response.pf_l3_rfo.l3_hit.any_response | offcore_rsp=0x3F803C0100 | Prefetch (bringing data to LLC only) RFOs that hit L3
sq_misc.split_lock | event=0xf4,umask=0x10,period=100003 | Split locks in the SQ

--- Core events: floating point ---
avx_insts.all | event=0xc6,umask=0x7,period=2000003 | Approximate count of AVX & AVX2 256-bit instructions, including non-arithmetic instructions, loads and stores. May count non-AVX instructions employing 256-bit operations, including (but not limited to) rep string instructions using 256-bit loads/stores, XSAVE*/XRSTOR*, and operations transitioning x87 FPU data registers between x87 and MMX. A whole rep string counts only once
fp_assist.any | event=0xca,umask=0x1e,cmask=1,period=100003 | Cycles with any input/output SSE* or FP assist
fp_assist.simd_input | event=0xca,umask=0x10,period=100003 | SIMD FP assists due to input values
fp_assist.simd_output | event=0xca,umask=0x8,period=100003 | SIMD FP assists due to output values
fp_assist.x87_input | event=0xca,umask=0x4,period=100003 | x87 FP assists due to input values
fp_assist.x87_output | event=0xca,umask=0x2,period=100003 | x87 FP assists due to output values
move_elimination.simd_eliminated | event=0x58,umask=0x2,period=1000003 | SIMD move-elimination candidate uops that were eliminated
move_elimination.simd_not_eliminated | event=0x58,umask=0x8,period=1000003 | SIMD move-elimination candidate uops that were not eliminated
other_assists.avx_to_sse | event=0xc1,umask=0x8,period=100003 | Transitions from AVX-256 to legacy SSE when the penalty applies (Spec update: HSD56, HSM57)
other_assists.sse_to_avx | event=0xc1,umask=0x10,period=100003 | Transitions from SSE to AVX-256 when the penalty applies (Spec update: HSD56, HSM57)
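The OFFCORE_RESPONSE rows above all share event 0xb7 and umask 0x1; only the offcore_rsp MSR value differs, and perf exposes that as a raw term. A small sketch building such an event string; the name= label is just my own tag, and availability of the offcore_rsp format term on a given kernel/CPU is an assumption:

    def offcore_event(label: str, rsp: int) -> str:
        """Build a raw perf event string for an OFFCORE_RESPONSE encoding
        from the table (event 0xb7, umask 0x1, offcore_rsp as given)."""
        return f"cpu/event=0xb7,umask=0x1,offcore_rsp={rsp:#x},name={label}/"

    # demand_data_rd.l3_hit.hitm_other_core, straight from the table:
    print(offcore_event("ddrd_l3_hit_hitm", 0x10003C0001))
    # -> cpu/event=0xb7,umask=0x1,offcore_rsp=0x10003c0001,name=ddrd_l3_hit_hitm/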
--- Core events: frontend ---
baclears.any | event=0xe6,umask=0x1f,period=100003 | Front-end re-steers, mainly when the BPU cannot provide a correct prediction and other branch-handling mechanisms at the front end correct it
dsb2mite_switches.penalty_cycles | event=0xab,umask=0x2,period=2000003 | Decode Stream Buffer (DSB)-to-MITE switch true penalty cycles
icache.hit | event=0x80,umask=0x1,period=2000003 | Instruction cache, streaming buffer and victim cache reads, both cacheable and non-cacheable, including UC fetches
icache.ifdata_stall | event=0x80,umask=0x4,period=2000003 | Cycles a code fetch is stalled due to an L1 instruction-cache miss
icache.ifetch_stall | event=0x80,umask=0x4,period=2000003 | Cycles a code fetch is stalled due to an L1 instruction-cache miss
icache.misses | event=0x80,umask=0x2,period=200003 | Instruction cache, streaming buffer and victim cache misses, including uncacheable accesses
idq.all_dsb_cycles_4_uops | event=0x79,umask=0x18,cmask=4,period=2000003 | Cycles the DSB delivers four uops
idq.all_dsb_cycles_any_uops | event=0x79,umask=0x18,cmask=1,period=2000003 | Cycles the DSB delivers at least one uop
idq.all_mite_cycles_4_uops | event=0x79,umask=0x24,cmask=4,period=2000003 | Cycles MITE delivers four uops
idq.all_mite_cycles_any_uops | event=0x79,umask=0x24,cmask=1,period=2000003 | Cycles MITE delivers at least one uop
idq.dsb_cycles | event=0x79,umask=0x8,cmask=1,period=2000003 | Cycles uops are delivered to the IDQ from the DSB path
idq.dsb_uops | event=0x79,umask=0x8,period=2000003 | Uops delivered to the IDQ from the DSB path; increments each cycle (set cmask=1 to count cycles)
idq.empty | event=0x79,umask=0x2,period=2000003 | Cycles the IDQ is empty (Spec update: HSD135)
idq.mite_all_uops | event=0x79,umask=0x3c,period=2000003 | Uops delivered to the IDQ from the MITE path (any path)
idq.mite_cycles | event=0x79,umask=0x4,cmask=1,period=2000003 | Cycles uops are delivered to the IDQ from the MITE path
idq.mite_uops | event=0x79,umask=0x4,period=2000003 | Uops delivered to the IDQ from the MITE path; increments each cycle (set cmask=1 to count cycles)
idq.ms_cycles | event=0x79,umask=0x30,cmask=1,period=2000003 | Cycles uops are delivered to the IDQ while the Microcode Sequencer (MS) is busy. Microcode assists handle complex instructions the standard decoder cannot; using other instructions, if possible, usually improves performance
idq.ms_dsb_cycles | event=0x79,umask=0x10,cmask=1,period=2000003 | Cycles DSB-initiated uops are delivered to the IDQ while the MS is busy
idq.ms_dsb_occur | event=0x79,umask=0x10,cmask=1,edge=1,period=2000003 | Deliveries to the IDQ initiated by the DSB while the MS is busy
idq.ms_dsb_uops | event=0x79,umask=0x10,period=2000003 | DSB-initiated uops delivered to the IDQ while the MS is busy (set cmask=1 to count cycles; add edge=1 to count deliveries)
idq.ms_mite_uops | event=0x79,umask=0x20,period=2000003 | MITE-initiated uops delivered to the IDQ while the MS is busy (set cmask=1 to count cycles)
idq.ms_uops | event=0x79,umask=0x30,period=2000003 | Uops delivered to the IDQ while the MS is busy; counts uops delivered by the front end with the assistance of the microcode sequencer
idq_uops_not_delivered.core | event=0x9c,umask=0x1,period=2000003 | Uops not delivered (unallocated) to the Resource Allocation Table (RAT) per thread while the back-end is not stalled. The front-end can allocate up to 4 uops per cycle, so this can increment 0-4 times per cycle; counted per core (Spec update: HSD135)
idq_uops_not_delivered.cycles_0_uops_deliv.core | event=0x9c,umask=0x1,cmask=4,period=2000003 | Cycles per thread with 4 or more undelivered uops while the back-end is not stalled, i.e. cycles the front-end allocated exactly zero uops to the RAT; counted per core (Spec update: HSD135)
idq_uops_not_delivered.cycles_fe_was_ok | event=0x9c,umask=0x1,cmask=1,inv=1,period=2000003 | Cycles the FE delivered 4 uops or the RAT was stalling the FE (Spec update: HSD135)
idq_uops_not_delivered.cycles_le_1_uop_deliv.core | event=0x9c,umask=0x1,cmask=3,period=2000003 | Cycles per thread with 3 or more undelivered uops while the back-end is not stalled (Spec update: HSD135)
idq_uops_not_delivered.cycles_le_2_uop_deliv.core | event=0x9c,umask=0x1,cmask=2,period=2000003 | Cycles with fewer than 2 uops delivered by the front end (Spec update: HSD135)
idq_uops_not_delivered.cycles_le_3_uop_deliv.core | event=0x9c,umask=0x1,cmask=1,period=2000003 | Cycles with fewer than 3 uops delivered by the front end (Spec update: HSD135)
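idq_uops_not_delivered.core is the numerator of the top-down "frontend bound" fraction: undelivered uop slots over all issue slots. The 4-wide allocation width comes from the event's own description; the function and the sample numbers below are mine:

    def frontend_bound(idq_uops_not_delivered_core: int, cycles: int,
                       width: int = 4) -> float:
        """Top-down frontend-bound fraction, assuming a 4-wide allocation
        width as stated in the idq_uops_not_delivered.core description."""
        return idq_uops_not_delivered_core / (width * cycles) if cycles else 0.0

    print(f"{frontend_bound(1_200_000_000, 1_000_000_000):.1%} frontend bound")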
Spec update: HSD76, HSD25, HSM26 (Must be precise)event=0xcd,period=100003,umask=1,ldlat=0x2000mem_trans_retired.load_latency_gt_4memoryRandomly selected loads with latency value being above 4  Supports address when precise.  Spec update: HSD76, HSD25, HSM26 (Must be precise)event=0xcd,period=100003,umask=1,ldlat=0x400mem_trans_retired.load_latency_gt_512memoryRandomly selected loads with latency value being above 512  Supports address when precise.  Spec update: HSD76, HSD25, HSM26 (Must be precise)event=0xcd,period=101,umask=1,ldlat=0x20000mem_trans_retired.load_latency_gt_64memoryRandomly selected loads with latency value being above 64  Supports address when precise.  Spec update: HSD76, HSD25, HSM26 (Must be precise)event=0xcd,period=2003,umask=1,ldlat=0x4000mem_trans_retired.load_latency_gt_8memoryRandomly selected loads with latency value being above 8  Supports address when precise.  Spec update: HSD76, HSD25, HSM26 (Must be precise)event=0xcd,period=50021,umask=1,ldlat=0x800misalign_mem_ref.loadsmemorySpeculative cache line split load uops dispatched to L1 cacheevent=5,period=2000003,umask=100Speculative cache-line split load uops dispatched to L1Dmisalign_mem_ref.storesmemorySpeculative cache line split STA uops dispatched to L1 cacheevent=5,period=2000003,umask=200Speculative cache-line split store-address uops dispatched to L1Doffcore_response.all_code_rd.l3_miss.any_responsememoryCounts all demand & prefetch code reads miss in the L3event=0xb7,period=100003,umask=1,offcore_rsp=0x3FFFC0024400offcore_response.all_code_rd.l3_miss.local_drammemoryCounts all demand & prefetch code reads miss the L3 and the data is returned from local dramevent=0xb7,period=100003,umask=1,offcore_rsp=0x10040024400offcore_response.all_data_rd.l3_miss.any_responsememoryCounts all demand & prefetch data reads miss in the L3event=0xb7,period=100003,umask=1,offcore_rsp=0x3FFFC0009100offcore_response.all_data_rd.l3_miss.local_drammemoryCounts all demand & prefetch data reads miss the L3 and the data is returned from local dramevent=0xb7,period=100003,umask=1,offcore_rsp=0x10040009100offcore_response.all_reads.l3_miss.any_responsememorymiss in the L3event=0xb7,period=100003,umask=1,offcore_rsp=0x3FFFC007F700offcore_response.all_reads.l3_miss.local_drammemorymiss the L3 and the data is returned from local dramevent=0xb7,period=100003,umask=1,offcore_rsp=0x1004007F700offcore_response.all_requests.l3_miss.any_responsememoryCounts all requests miss in the L3event=0xb7,period=100003,umask=1,offcore_rsp=0x3FFFC08FFF00offcore_response.all_rfo.l3_miss.any_responsememoryCounts all demand & prefetch RFOs miss in the L3event=0xb7,period=100003,umask=1,offcore_rsp=0x3FFFC0012200offcore_response.all_rfo.l3_miss.local_drammemoryCounts all demand & prefetch RFOs miss the L3 and the data is returned from local dramevent=0xb7,period=100003,umask=1,offcore_rsp=0x10040012200offcore_response.demand_code_rd.l3_miss.any_responsememoryCounts all demand code reads miss in the L3event=0xb7,period=100003,umask=1,offcore_rsp=0x3FFFC0000400offcore_response.demand_code_rd.l3_miss.local_drammemoryCounts all demand code reads miss the L3 and the data is returned from local dramevent=0xb7,period=100003,umask=1,offcore_rsp=0x10040000400offcore_response.demand_data_rd.l3_miss.any_responsememoryCounts demand data reads miss in the L3event=0xb7,period=100003,umask=1,offcore_rsp=0x3FFFC0000100offcore_response.demand_data_rd.l3_miss.local_drammemoryCounts demand data reads miss the L3 and the data is returned from local 
rtm_retired.* (memory): RTM transaction outcomes; event=0xc9, period=2000003.
  aborted: RTM executions aborted for any reason; multiple categories may count as one (Must be precise). [umask=4]
  aborted_misc1: aborts due to various memory events (e.g. read/write capacity and conflicts). [umask=8]
  aborted_misc2: aborts due to various memory events (e.g. read/write capacity and conflicts). [umask=0x10]
  aborted_misc3: aborts due to HLE-unfriendly instructions. [umask=0x20]
  aborted_misc4: aborts due to incompatible memory type. Spec update: HSD65. [umask=0x40]
  aborted_misc5: aborts due to none of the previous four categories (e.g. interrupt). [umask=0x80]
  commit: RTM executions that committed successfully. [umask=2]
  start: RTM executions started. [umask=1]

tx_exec.* (memory): transactional-execution conditions; event=0x5d, period=2000003.
  misc2: executions, inside a transactional region, of a class of instructions that may cause a transactional abort (e.g. vzeroupper). [umask=2]
  misc3: instruction executions that exceeded the supported transactional nest count. [umask=4]
  misc4: XBEGIN instructions executed inside an HLE transactional region. [umask=8]

tx_mem.* (memory): transactional-memory abort causes; event=0x54, period=2000003.
  abort_capacity_write: aborts signaled due to a data capacity limitation for transactional writes. [umask=2]
  abort_conflict: aborts signaled due to a data conflict on a transactionally accessed address. [umask=1]
  abort_hle_elision_buffer_mismatch: HLE aborts due to an XRELEASE lock not satisfying the address and value requirements in the elision buffer. [umask=0x10]
  abort_hle_elision_buffer_not_empty: HLE aborts due to NoAllocatedElisionBuffer being non-zero. [umask=8]
  abort_hle_elision_buffer_unsupported_alignment: HLE aborts due to an unsupported read alignment from the elision buffer. [umask=0x20]
  abort_hle_store_to_elided_lock: HLE aborts due to a non-XRELEASE-prefixed instruction writing to an elided lock in the elision buffer. [umask=4]
  hle_elision_buffer_full: HLE locks that could not be elided because ElisionBufferAvailable was zero. [umask=0x40]

cpl_cycles.ring0 (other): unhalted core cycles when the thread is in ring 0. [event=0x5c,period=2000003,umask=1]
cpl_cycles.ring0_trans (other): number of intervals between processor halts while the thread is in ring 0. [event=0x5c,cmask=1,edge=1,period=100003,umask=1]
cpl_cycles.ring123 (other): unhalted core cycles when the thread is in ring 1, 2, or 3 (i.e. not in ring 0). [event=0x5c,period=2000003,umask=2]
lock_cycles.split_lock_uc_lock_duration (other): cycles in which the L1D and L2 are locked due to a UC lock or split lock. [event=0x63,period=2000003,umask=1]
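The rtm_retired.start/commit/aborted trio above is enough to estimate how often transactions fail under a workload. A hedged sketch; the function name and the CSV parsing are mine, relying only on perf stat's documented behaviour of writing field-separated counts to stderr under -x:

import subprocess

# rtm_retired.start (event=0xc9,umask=0x1) and rtm_retired.aborted
# (event=0xc9,umask=0x4), straight from the encodings above.
RTM_EVENTS = "cpu/event=0xc9,umask=0x1/,cpu/event=0xc9,umask=0x4/"

def rtm_abort_ratio(argv: list[str]) -> float:
    """Fraction of started RTM transactions that aborted while running argv (sketch)."""
    proc = subprocess.run(
        ["perf", "stat", "-x", ",", "-e", RTM_EVENTS, "--", *argv],
        capture_output=True, text=True, check=True,
    )
    counts = [
        int(line.split(",")[0])
        for line in proc.stderr.splitlines()
        if line and line.split(",")[0].isdigit()
    ]
    started, aborted = counts[0], counts[1]
    return aborted / started if started else 0.0

if __name__ == "__main__":
    print(rtm_abort_ratio(["sleep", "1"]))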
arith.divider_uops (pipeline): any uop executed by the divider; includes all divide uops, sqrt, etc. [event=0x14,period=2000003,umask=2]

br_inst_exec.* (pipeline): speculative and retired executed branches; event=0x88, period=200003.
  all_branches: all near executed branches (not necessarily retired). [umask=0xff]
  all_conditional: speculative and retired macro-conditional branches. [umask=0xc1]
  all_direct_jmp: speculative and retired macro-unconditional branches, excluding calls and indirects. [umask=0xc2]
  all_direct_near_call: speculative and retired direct near calls. [umask=0xd0]
  all_indirect_jump_non_call_ret: speculative and retired indirect branches, excluding calls and returns. [umask=0xc4]
  all_indirect_near_return: speculative and retired indirect return branches. [umask=0xc8]
  nontaken_conditional: not-taken macro-conditional branches. [umask=0x41]
  taken_conditional: taken speculative and retired macro-conditional branches. [umask=0x81]
  taken_direct_jump: taken speculative and retired macro-conditional branch instructions, excluding calls and indirects. [umask=0x82]
  taken_direct_near_call: taken speculative and retired direct near calls. [umask=0x90]
  taken_indirect_jump_non_call_ret: taken speculative and retired indirect branches, excluding calls and returns. [umask=0x84]
  taken_indirect_near_call: taken speculative and retired indirect calls. [umask=0xa0]
  taken_indirect_near_return: taken speculative and retired indirect branches with return mnemonic. [umask=0x88]

br_inst_retired.* (pipeline): branch instructions at retirement; event=0xc4.
  all_branches: all (macro) branch instructions retired. [period=400009]
  all_branches_pebs: all (macro) branch instructions retired (Must be precise). [period=400009,umask=4]
  conditional: conditional branch instructions retired (Precise event). [period=400009,umask=1]
  far_branch: far branches retired. [period=100003,umask=0x40]
  near_call: direct and indirect near call instructions retired (Precise event). [period=100003,umask=2]
  near_call_r3: direct and indirect macro near call instructions retired, captured in ring 3 (Precise event). [period=100003,umask=2]
  near_return: near return instructions retired (Precise event). [period=100003,umask=8]
  near_taken: taken branch instructions retired (Precise event). [period=400009,umask=0x20]
  not_taken: not-taken branch instructions retired. [period=400009,umask=0x10]

br_misp_exec.* (pipeline): mispredicted executed branches; event=0x89, period=200003.
  all_branches: all near executed branches, mispredicted (not necessarily retired). [umask=0xff]
  all_conditional: speculative and retired mispredicted macro-conditional branches. [umask=0xc1]
  all_indirect_jump_non_call_ret: mispredicted indirect branches, excluding calls and returns. [umask=0xc4]
  nontaken_conditional: not-taken speculative and retired mispredicted macro-conditional branches. [umask=0x41]
  taken_conditional: taken speculative and retired mispredicted macro-conditional branches. [umask=0x81]
  taken_indirect_jump_non_call_ret: taken speculative and retired mispredicted indirect branches, excluding calls and returns. [umask=0x84]
  taken_return_near: taken speculative and retired mispredicted indirect branches with return mnemonic. [umask=0x88]

br_misp_retired.* (pipeline): mispredicted branch instructions at retirement; event=0xc5, period=400009.
  all_branches: all mispredicted macro branch instructions retired. [no umask]
  all_branches_pebs: all mispredicted branch instructions retired (Must be precise). [umask=4]
  conditional: mispredicted conditional branch instructions retired (Precise event). [umask=1]
  near_taken: near branch instructions retired that were taken but mispredicted (Precise event). [umask=0x20]
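The br_inst_exec/br_misp_exec umasks above compose by OR-ing bit fields: the low bits select the branch type, 0x80 adds the taken qualifier and 0x40 the not-taken qualifier. This is a reading of the listed values, not an official decode; a few spot checks in Python:

# Branch-type bits as they appear in the table above.
COND, DIRECT_JMP, INDIRECT, RET, DIRECT_CALL, INDIRECT_CALL = (
    0x01, 0x02, 0x04, 0x08, 0x10, 0x20)
TAKEN, NOT_TAKEN = 0x80, 0x40

assert TAKEN | COND == 0x81              # taken_conditional
assert NOT_TAKEN | COND == 0x41          # nontaken_conditional
assert TAKEN | NOT_TAKEN | COND == 0xc1  # all_conditional
assert TAKEN | DIRECT_CALL == 0x90       # taken_direct_near_call
assert TAKEN | NOT_TAKEN | 0x3f == 0xff  # all_branches (every type, both qualifiers)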
cpu_clk_thread_unhalted.ref_xclk (pipeline): reference cycles when the thread is unhalted; increments at the XCLK (100 MHz) rate. [event=0x3c,period=100003,umask=1]
cpu_clk_thread_unhalted.ref_xclk_any (pipeline): reference cycles when at least one thread on the physical core is unhalted (counts at the 100 MHz rate). [event=0x3c,any=1,period=100003,umask=1]
cpu_clk_unhalted.ref_tsc (pipeline): reference cycles when the core is not in a halt state (entered by the HLT or MWAIT instruction). Not affected by core-frequency changes (e.g. P-states, TM2 transitions); increments at the time-stamp-counter frequency, so it can approximate elapsed time while the core was not halted. [event=0,period=2000003,umask=3]
cpu_clk_unhalted.ref_xclk / ref_xclk_any (pipeline): same semantics as the cpu_clk_thread_unhalted variants above. [event=0x3c,period=100003,umask=1; add any=1 for the _any form]
cpu_clk_unhalted.thread (pipeline): core cycles when the thread is not in a halt state (entered by the HLT instruction); the core frequency may change over time due to power or thermal throttling. [event=0x3c,period=2000003]
cpu_clk_unhalted.thread_p (pipeline): thread cycles when the thread is not in a halt state; same caveats as above. [event=0x3c,period=2000003]

cycle_activity.* (pipeline): event=0xa3, period=2000003, with matching cmask and umask.
  cycles_l1d_pending: cycles with pending L1 data-cache miss loads. [cmask=8,umask=8]
  cycles_l2_pending: cycles with pending L2 miss loads. Spec update: HSD78, HSM63, HSM80. [cmask=1,umask=1]
  cycles_ldm_pending: cycles with pending memory loads. [cmask=2,umask=2]
  cycles_no_execute: increments by one for every cycle in which no instructions were executed in the execution stage of the pipeline for this thread. [cmask=4,umask=4]
  stalls_l1d_pending: execution stalls due to L1 data-cache miss loads. [cmask=12,umask=0xc]
  stalls_l2_pending: execution stalls due to L2 cache misses (loads that missed the L2). Spec update: HSM63, HSM80. [cmask=5,umask=5]
  stalls_ldm_pending: execution stalls due to the memory subsystem; cycles with no instructions executed while memory instructions were pending (waiting for data). [cmask=6,umask=6]

ild_stall.iq_full (pipeline): stall cycles because the instruction queue (IQ) is full. [event=0x87,period=2000003,umask=4]
ild_stall.lcp (pipeline): stalls caused by a length-changing prefix (LCP); counts cycles the decoder is stalled on an instruction with an LCP. [event=0x87,period=2000003,umask=1]

inst_retired.any (pipeline): instructions retired from execution. For multi-uop instructions, counts the retirement of the last uop; counting continues during hardware interrupts, traps, and inside interrupt handlers. Counted by a designated fixed counter, leaving the programmable counters free for other events. Faulting executions of GETSEC/VM entry/VM exit/MWAIT do not count as retired instructions. Spec update: HSD140, HSD143. [event=0xc0,period=2000003]
inst_retired.any_p (pipeline): number of instructions at retirement, general-counter (architectural) version. Spec update: HSD11, HSD140. [event=0xc0,period=2000003]
inst_retired.prec_dist (pipeline): precise instruction-retired event with hardware support to reduce the effect of the PEBS shadow in the IP distribution. Spec update: HSD140 (Must be precise). [event=0xc0,period=2000003,umask=1]
inst_retired.x87 (pipeline): X87 FP operations retired with no exceptions; also counts flows containing several X87 uops or using X87 uops in exception handling (non-PEBS version). [event=0xc0,period=2000003,umask=2]

int_misc.recovery_cycles (pipeline): core cycles the allocator was stalled waiting for recovery from an earlier clear event for this thread (e.g. misprediction, memory nuke, JEClear, assist, HLE/RTM abort). [event=0xd,cmask=1,period=2000003,umask=3]
int_misc.recovery_cycles_any (pipeline): the same, for any thread running on the physical core. [event=0xd,any=1,cmask=1,period=2000003,umask=3]

ld_blocks.store_forward (pipeline): loads blocked by an overlapping store-buffer entry that cannot be forwarded. Counts loads that followed a store to the same address where the data could not be forwarded inside the pipeline, most commonly because the load's address range overlaps a preceding smaller uncompleted store; the load must then wait for the store to write its value to the cache before it can issue. [event=3,period=100003,umask=2]
ld_blocks_partial.address_alias (pipeline): false dependencies in the MOB due to partial address comparison. 4K aliasing occurs when a load issues after a store and their addresses are offset by 4K; counts loads that aliased with a preceding store, causing an extended address check in the pipeline. [event=7,period=100003,umask=1]

load_hit_pre.hw_pf (pipeline): non-software-prefetch load dispatches that hit a fill buffer allocated for a hardware prefetch. [event=0x4c,period=100003,umask=2]
load_hit_pre.sw_pf (pipeline): non-software-prefetch load dispatches that hit a fill buffer allocated for a software prefetch. [event=0x4c,period=100003,umask=1]
lsd.uops (pipeline): number of uops delivered by the loop stream detector (LSD). [event=0xa8,period=2000003,umask=1]
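Several entries above combine event/umask with cmask, inv, and edge modifiers. These fields pack into one register in the layout Intel's SDM gives for IA32_PERFEVTSELx, which is also what perf's raw rXXXX notation expects for the core PMU. A small packer with checks against encodings from this table:

def perfevtsel(event: int, umask: int, cmask: int = 0, inv: bool = False,
               edge: bool = False, any_thread: bool = False) -> int:
    """Pack fields in the IA32_PERFEVTSELx layout (Intel SDM): bits 0-7
    event select, 8-15 umask, 18 edge detect, 21 any-thread, 23 invert,
    24-31 counter mask."""
    return (event
            | (umask << 8)
            | (int(edge) << 18)
            | (int(any_thread) << 21)
            | (int(inv) << 23)
            | (cmask << 24))

# cycle_activity.stalls_ldm_pending above: event=0xa3,cmask=6,umask=6
assert perfevtsel(0xa3, 0x6, cmask=6) == 0x060006a3
# cpl_cycles.ring0_trans earlier: event=0x5c,cmask=1,edge=1,umask=1
assert perfevtsel(0x5c, 0x1, cmask=1, edge=True) == 0x0104015c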
machine_clears.cycles (pipeline): cycles in which there was a nuke; accounts for both thread-specific and all-thread nukes. [event=0xc3,period=2000003,umask=1]
machine_clears.maskmov (pipeline): executed Intel AVX masked load operations that refer to an illegal address range with all mask bits set to 0. [event=0xc3,period=100003,umask=0x20]
machine_clears.smc (pipeline): self-modifying code (SMC) detected; SMC causes a machine clear, which can have a significant performance impact if frequent. [event=0xc3,period=100003,umask=4]
move_elimination.int_eliminated / int_not_eliminated (pipeline): integer move-elimination candidate uops that were / were not eliminated. [event=0x58,period=1000003,umask=1 / umask=4]
other_assists.any_wb_assist (pipeline): number of times any microcode assist is invoked by hardware upon uop writeback. [event=0xc1,period=100003,umask=0x40]
resource_stalls.any (pipeline): resource-related allocation-stall cycles. Spec update: HSD135. [event=0xa2,period=2000003,umask=1]
resource_stalls.rob (pipeline): cycles stalled because the re-order buffer is full. [event=0xa2,period=2000003,umask=0x10]
resource_stalls.rs (pipeline): cycles stalled because no eligible RS entry is available. [event=0xa2,period=2000003,umask=4]
resource_stalls.sb (pipeline): cycles stalled because no store buffers (SB) are available (not including draining from sync). [event=0xa2,period=2000003,umask=8]
rob_misc_events.lbr_inserts (pipeline): cases of hardware saving new LBR records. [event=0xcc,period=2000003,umask=0x20]
rs_events.empty_cycles (pipeline): cycles the reservation station (RS), which buffers allocated uops from the front end, is empty for this thread; many such cycles may indicate an underflow of instructions delivered from the front end. [event=0x5e,period=2000003,umask=1]

uops_dispatched_port.port_0 .. port_7 (pipeline): cycles per thread when uops are executed in the given port. [event=0xa1,period=2000003; umask=1,2,4,8,0x10,0x20,0x40,0x80 for ports 0-7]

uops_executed.* (pipeline): event=0xb1, period=2000003. Spec update: HSD30, HSM31 (the per-thread cycles_ge_* and stall_cycles variants additionally cite HSD144).
  core: total number of uops executed per core each cycle. [umask=2]
  core_cycles_ge_1 .. core_cycles_ge_4: cycles in which at least 1/2/3/4 uops are executed from any thread on the physical core. [cmask=1..4,umask=2]
  core_cycles_none: cycles with no uops executed from any thread on the physical core. [inv=1,umask=2]
  cycles_ge_1_uop_exec .. cycles_ge_4_uops_exec: cycles in which at least 1/2/3/4 uops were executed, counted per thread. [cmask=1..4,umask=1]
  stall_cycles: cycles in which no uops were dispatched for execution on this thread. [cmask=1,inv=1,umask=1]

uops_executed_port.port_0 .. port_7 (pipeline): cycles in which a uop is dispatched on the given port in this thread. [event=0xa1,period=2000003; umask=1,2,4,8,0x10,0x20,0x40,0x80 for ports 0-7]

uops_issued.any (pipeline): uops issued by the resource allocation table (RAT) to the reservation station (RS); counted at the allocation stage, so both retired and non-retired uops are included. [event=0xe,period=2000003,umask=1]
uops_issued.core_stall_cycles (pipeline): cycles when the RAT issues no uops to the RS, for all threads. [event=0xe,any=1,cmask=1,inv=1,period=2000003,umask=1]
uops_issued.flags_merge (pipeline): flags-merge uops allocated; such uops add delay and are considered performance-sensitive. [event=0xe,period=2000003,umask=0x10]
uops_issued.single_mul (pipeline): multiply packed/scalar single-precision uops allocated. [event=0xe,period=2000003,umask=0x40]
uops_issued.slow_lea (pipeline): slow-LEA (or similar) uops allocated: uops with three sources (e.g. two sources plus an immediate), whether or not they result from an LEA instruction. [event=0xe,period=2000003,umask=0x20]
uops_issued.stall_cycles (pipeline): cycles when the RAT issues no uops to the RS, for this thread. [event=0xe,cmask=1,inv=1,period=2000003,umask=1]

uops_retired.all (pipeline): uops actually retired; use cmask=1 with invert to count active or stalled cycles (Precise event). [event=0xc2,period=2000003,umask=1]
uops_retired.core_stall_cycles (pipeline): cycles without actually retired uops, for any thread. [event=0xc2,any=1,cmask=1,inv=1,period=2000003,umask=1]
uops_retired.retire_slots (pipeline): retirement slots used each cycle; up to 4 slots (4 uops or 4 instructions) can retire per cycle (Precise event). [event=0xc2,period=2000003,umask=2]
uops_retired.stall_cycles (pipeline): cycles without actually retired uops. [event=0xc2,cmask=1,inv=1,period=2000003,umask=1]
uops_retired.total_cycles (pipeline): cycles with fewer than 10H (16) actually retired uops. [event=0xc2,cmask=16,inv=1,period=2000003,umask=1]
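The uops_retired.retire_slots description above states that up to four slots can retire per cycle, which gives a simple retirement-bandwidth metric. A sketch of the arithmetic; the function name and example counts are illustrative:

def retire_slot_utilization(retire_slots: int, core_cycles: int,
                            width: int = 4) -> float:
    """Fraction of retirement bandwidth actually used.

    retire_slots would come from uops_retired.retire_slots
    (event=0xc2,umask=0x2) and core_cycles from cpu_clk_unhalted.thread
    (event=0x3c); `width` is the 4 slots per cycle noted above.
    """
    return retire_slots / (width * core_cycles) if core_cycles else 0.0

print(retire_slot_utilization(2_800_000, 1_000_000))  # 0.7 -> 70% of slots used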
unc_cbo_cache_lookup.* (uncore cache): L3 lookups that access the cache and find the line in the given state; event=0x34.
  any_es [umask=0x86], any_i [umask=0x88], any_m [umask=0x81], any_mesi [umask=0x8f]: any request; line found in E/S, I, M, or any MESI state.
  extsnp_es [umask=0x46], extsnp_i [umask=0x48], extsnp_m [umask=0x41], extsnp_mesi [umask=0x4f]: external snoop request; line found in E/S, I, M, or any MESI state.
  read_es [umask=0x16], read_i [umask=0x18], read_m [umask=0x11], read_mesi [umask=0x1f]: read request; line found in E/S, I, M, or any MESI state.
  write_es [umask=0x26], write_i [umask=0x28], write_m [umask=0x21], write_mesi [umask=0x2f]: write request; line found in E/S, I, M, or any MESI state.

unc_cbo_xsnp_response.* (uncore cache): cross-snoop responses; event=0x22.
  hitm_eviction: a cross-core snoop resulting from an L3 eviction hits a modified line in some processor core. [umask=0x88]
  hitm_external: an external snoop hits a modified line in some processor core. [umask=0x28]
  hit_eviction: a cross-core snoop resulting from an L3 eviction hits a non-modified line in some processor core. [umask=0x84]
  hit_external: an external snoop hits a non-modified line in some processor core. [umask=0x24]
  miss_external: an external snoop misses in some processor core. [umask=0x21]

unc_clock.socket (uncore cache): 48-bit fixed counter counting UCLK cycles. [event=0xff]

unc_arb_coh_trk_occupancy.all (uncore interconnect): each cycle, counts the number of valid entries in the coherency-tracker queue from allocation to deallocation. Aperture requests (snoops) appear as NC, are decoded internally, and become coherent (snoop L3, access memory). [event=0x83,umask=1]
unc_arb_trk_occupancy.cycles_with_any_request (uncore interconnect): cycles with at least one outstanding request waiting for data return from the memory controller; accounts for coherent and non-coherent requests initiated by IA cores, the processor graphics unit, or the LLC. [event=0x80,cmask=1,umask=1]

dtlb_load_misses.* (virtual memory): event=8.
  miss_causes_a_walk: load misses in all DTLB levels that cause a page walk of any page size. [period=100003,umask=1]
  pde_cache_miss: demand load misses with the low part of the linear-to-physical address translation missed. [period=100003,umask=0x80]
  stlb_hit: loads that miss the first DTLB level but hit the second (STLB) and cause no page walk. [period=2000003,umask=0x60]
  stlb_hit_2m / stlb_hit_4k: the same, for loads from 2M / 4K pages. [period=2000003,umask=0x40 / 0x20]
  walk_completed: completed page walks of any page size due to demand load misses. [period=100003,umask=0xe]
  walk_completed_1g / _2m_4m / _4k: completed page walks due to demand load misses on 1G / 2M-4M / 4K pages. [period=2000003,umask=8 / 4 / 2]
  walk_duration: cycles the page-miss handler (PMH) is busy servicing page walks caused by DTLB load misses. [period=2000003,umask=0x10]

dtlb_store_misses.* (virtual memory): the store-side equivalents of the above; event=0x49, period=100003 throughout.
  miss_causes_a_walk [umask=1], pde_cache_miss [umask=0x80], stlb_hit [umask=0x60], stlb_hit_2m [umask=0x40], stlb_hit_4k [umask=0x20], walk_completed [umask=0xe], walk_completed_1g [umask=8], walk_completed_2m_4m [umask=4], walk_completed_4k [umask=2], walk_duration [umask=0x10].

ept.walk_cycles (virtual memory): cycle count for extended-page-table walks. [event=0x4f,period=2000003,umask=0x10]
itlb.itlb_flush (virtual memory): flushes of the instruction TLB (ITLB), including 4K/2M/4M pages. [event=0xae,period=100003,umask=1]

itlb_misses.* (virtual memory): event=0x85, period=100003.
  miss_causes_a_walk: ITLB misses of any page size that cause a page walk. [umask=1]
  stlb_hit: code fetches that miss the first ITLB level but hit the STLB; no page walk. [umask=0x60]
  stlb_hit_2m / stlb_hit_4k: ITLB misses that hit the STLB, for 2M / 4K pages. [umask=0x40 / 0x20]
  walk_completed: completed page walks in the ITLB of any page size. [umask=0xe]
  walk_completed_1g / _2m_4m / _4k: completed page walks due to ITLB misses on 1G / 2M-4M / 4K page entries. [umask=8 / 4 / 2]
  walk_duration: cycles the PMH is busy servicing page walks caused by ITLB misses. [umask=0x10]

page_walker_loads.* (virtual memory): where page-walker loads hit; event=0xbc, period=2000003. Entries marked * carry Spec update: HSD25.
  dtlb_l1 [umask=0x11], dtlb_l2 [umask=0x12], dtlb_l3* [umask=0x14], dtlb_memory* [umask=0x18]: DTLB page-walker loads that hit in the L1+FB, L2, L3 (+XSNP), or memory.
  ept_dtlb_l1 [umask=0x41], ept_dtlb_l2 [umask=0x42], ept_dtlb_l3 [umask=0x44], ept_dtlb_memory [umask=0x48]: extended-page-table walks from the DTLB that hit in the L1+FB, L2, L3, or memory.
  ept_itlb_l1 [umask=0x81], ept_itlb_l2 [umask=0x82], ept_itlb_l3 [umask=0x84], ept_itlb_memory [umask=0x88]: extended-page-table walks from the ITLB that hit in the L1+FB, L2, L3, or memory.
  itlb_l1 [umask=0x21], itlb_l2 [umask=0x22], itlb_l3* [umask=0x24], itlb_memory* [umask=0x28]: ITLB page-walker loads that hit in the L1+FB, L2, L3 (+XSNP), or memory.

tlb_flush.dtlb_thread (virtual memory): DTLB flush attempts of the thread-specific entries. [event=0xbd,period=100003,umask=1]
tlb_flush.stlb_any (virtual memory): STLB flush attempts. [event=0xbd,period=100003,umask=0x20]
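The three walk_duration counters above make it easy to express TLB-walk cost as a share of run time. A hedged sketch of the arithmetic (function name and sample numbers are mine):

def page_walk_overhead(walk_duration_cycles: int, core_cycles: int) -> float:
    """Share of core cycles the page-miss handler spent walking page tables.

    walk_duration_cycles would be the sum of dtlb_load_misses.walk_duration,
    dtlb_store_misses.walk_duration and itlb_misses.walk_duration (the
    event=8 / 0x49 / 0x85, umask=0x10 encodings above); core_cycles from
    cpu_clk_unhalted.thread.
    """
    return walk_duration_cycles / core_cycles if core_cycles else 0.0

# e.g. 45M walk cycles out of 1.5G core cycles -> 3.0% TLB-walk overhead
print(f"{page_walk_overhead(45_000_000, 1_500_000_000):.1%}")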
mem_load_uops_l3_miss_retired.* (cache): retired load uops by remote data source; event=0xd3, period=100003. All support address when precise (Precise event).
  remote_dram: data source was remote DRAM, either snoop not needed or snoop miss (RspI). Spec update: HSD29, HSM30. [umask=4]
  remote_fwd: data forwarded from a remote cache. Spec update: HSM30. [umask=0x20]
  remote_hitm: remote cache HITM. Spec update: HSM30. [umask=0x10]

offcore_response.*.llc_hit.* (cache): event=0xb7,period=100003,umask=1 plus the offcore_rsp value shown.
  demand_code_rd.llc_hit.hitm_other_core: demand code reads that hit in the L3, where the snoop to a sibling core hits the line in M state and the line is forwarded. [offcore_rsp=0x10003C0004]
  demand_code_rd.llc_hit.hit_other_core_no_fwd: demand code reads that hit in the L3, where sibling-core snoops hit in E/S state and the line is not forwarded. [offcore_rsp=0x4003C0004]
  demand_data_rd.llc_hit.hitm_other_core: demand data reads; M-state hit in a sibling core, line forwarded. [offcore_rsp=0x10003C0001]
  demand_data_rd.llc_hit.hit_other_core_no_fwd: demand data reads; E/S hit in a sibling core, line not forwarded. [offcore_rsp=0x4003C0001]
  demand_rfo.llc_hit.hit_other_core_no_fwd: demand data writes (RFOs); E/S hit in a sibling core, line not forwarded. [offcore_rsp=0x4003C0002]
  pf_l2_code_rd.llc_hit.any_response: prefetch (LLC-only) code reads that hit in the L3. [offcore_rsp=0x3F803C0040]
  pf_l2_data_rd.llc_hit.any_response: prefetch (to L2) data reads that hit in the L3. [offcore_rsp=0x3F803C0010]
  pf_l2_rfo.llc_hit.any_response: prefetch (to L2) RFOs that hit in the L3. [offcore_rsp=0x3F803C0020]
  pf_llc_data_rd.llc_hit.any_response: prefetch (LLC-only) data reads that hit in the L3. [offcore_rsp=0x3F803C0080]

offcore_response.*.llc_miss.* (memory): event=0xb7,period=100003,umask=1 plus the offcore_rsp value shown.
  all_code_rd.llc_miss.local_dram: all demand & prefetch code reads that miss the L3, data returned from local DRAM. [offcore_rsp=0x600400244]
  all_data_rd.llc_miss.local_dram: all demand & prefetch data reads that miss the L3, data from local DRAM. [offcore_rsp=0x600400091]
  all_data_rd.llc_miss.remote_dram: the same, data from remote DRAM. [offcore_rsp=0x63F800091]
  all_data_rd.llc_miss.remote_hit_forward: the same, clean or shared data transferred from a remote cache. [offcore_rsp=0x83FC00091]
  all_reads.llc_miss.local_dram: all data/code/RFO reads (demand & prefetch) that miss the L3, data from local DRAM. [offcore_rsp=0x6004007F7]
  all_reads.llc_miss.remote_dram: the same, data from remote DRAM. [offcore_rsp=0x63F8007F7]
  all_reads.llc_miss.remote_hit_forward: the same, clean or shared data transferred from a remote cache. [offcore_rsp=0x83FC007F7]
  all_rfo.llc_miss.local_dram: all demand & prefetch RFOs that miss the L3, data from local DRAM. [offcore_rsp=0x600400122]
  demand_code_rd.llc_miss.any_response: all demand code reads that miss in the L3. [offcore_rsp=0x3FBFC00004]
  demand_code_rd.llc_miss.local_dram: all demand code reads that miss the L3, data from local DRAM. [offcore_rsp=0x600400004]
  demand_data_rd.llc_miss.any_response: demand data reads that miss in the L3. [offcore_rsp=0x3FBFC00001]
  demand_data_rd.llc_miss.local_dram: demand data reads that miss the L3, data from local DRAM. [offcore_rsp=0x600400001]
  demand_rfo.llc_miss.local_dram: all demand data writes (RFOs) that miss the L3, data from local DRAM. [offcore_rsp=0x600400002]
  pf_l2_code_rd.llc_miss.any_response: all prefetch (LLC-only) code reads that miss in the L3. [offcore_rsp=0x3FBFC00040]
  pf_l2_data_rd.llc_miss.any_response: prefetch (to L2) data reads that miss in the L3. [offcore_rsp=0x3FBFC00010]
  pf_l2_rfo.llc_miss.any_response: all prefetch (to L2) RFOs that miss in the L3. [offcore_rsp=0x3FBFC00020]
  pf_llc_data_rd.llc_miss.any_response: all prefetch (LLC-only) data reads that miss in the L3. [offcore_rsp=0x3FBFC00080]

rtm_retired.aborted (memory): RTM executions aborted for any reason; multiple categories may count as one (Precise event). [event=0xc9,period=2000003,umask=4]
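The local_dram/remote_dram pairs above are the raw material for a NUMA locality check. A hedged sketch, assuming a perf CLI whose core PMU exposes the offcore_rsp and name format terms (both are standard on this PMU family, but worth verifying under /sys/bus/event_source/devices/cpu/format); the helper is illustrative:

import subprocess

# From the llc_miss entries above: all_data_rd that miss the L3 and are
# filled from local vs. remote DRAM.
LOCAL = "cpu/event=0xb7,umask=0x1,offcore_rsp=0x600400091,name=local_dram/"
REMOTE = "cpu/event=0xb7,umask=0x1,offcore_rsp=0x63F800091,name=remote_dram/"

def remote_dram_fraction(argv: list[str]) -> float:
    """Fraction of L3-miss data reads served by remote DRAM (sketch)."""
    proc = subprocess.run(
        ["perf", "stat", "-x", ",", "-e", f"{LOCAL},{REMOTE}", "--", *argv],
        capture_output=True, text=True, check=True,
    )
    counts = [int(l.split(",")[0]) for l in proc.stderr.splitlines()
              if l and l.split(",")[0].isdigit()]
    local, remote = counts[0], counts[1]
    total = local + remote
    return remote / total if total else 0.0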
In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ringunc_c_ring_ad_used.downuncore cacheAD Ring In Use; Downevent=0x1b,umask=0xc01Counts the number of cycles that the AD ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.  We really have two rings in HSX -- a clockwise ring and a counter-clockwise ring.  On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring.  On the right side of the ring, this is reversed.  The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring.  In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ringunc_c_ring_ad_used.down_evenuncore cacheAD Ring In Use; Down and Evenevent=0x1b,umask=401Counts the number of cycles that the AD ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.  We really have two rings in HSX -- a clockwise ring and a counter-clockwise ring.  On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring.  On the right side of the ring, this is reversed.  The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring.  In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.; Filters for the Down and Even ring polarityunc_c_ring_ad_used.down_odduncore cacheAD Ring In Use; Down and Oddevent=0x1b,umask=801Counts the number of cycles that the AD ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.  We really have two rings in HSX -- a clockwise ring and a counter-clockwise ring.  On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring.  On the right side of the ring, this is reversed.  The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring.  In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.; Filters for the Down and Odd ring polarityunc_c_ring_ad_used.upuncore cacheAD Ring In Use; Upevent=0x1b,umask=301Counts the number of cycles that the AD ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.  We really have two rings in HSX -- a clockwise ring and a counter-clockwise ring.  On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring.  On the right side of the ring, this is reversed.  The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring.  
In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ringunc_c_ring_ad_used.up_evenuncore cacheAD Ring In Use; Up and Evenevent=0x1b,umask=101Counts the number of cycles that the AD ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.  We really have two rings in HSX -- a clockwise ring and a counter-clockwise ring.  On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring.  On the right side of the ring, this is reversed.  The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring.  In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.; Filters for the Up and Even ring polarityunc_c_ring_ad_used.up_odduncore cacheAD Ring In Use; Up and Oddevent=0x1b,umask=201Counts the number of cycles that the AD ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.  We really have two rings in HSX -- a clockwise ring and a counter-clockwise ring.  On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring.  On the right side of the ring, this is reversed.  The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring.  In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.; Filters for the Up and Odd ring polarityunc_c_ring_ak_used.alluncore cacheAK Ring In Use; Allevent=0x1c,umask=0xf01Counts the number of cycles that the AK ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings in HSX -- a clockwise ring and a counter-clockwise ring.  On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring.  On the right side of the ring, this is reversed.  The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring.  In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ringunc_c_ring_ak_used.down_evenuncore cacheAK Ring In Use; Down and Evenevent=0x1c,umask=401Counts the number of cycles that the AK ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings in HSX -- a clockwise ring and a counter-clockwise ring.  On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring.  On the right side of the ring, this is reversed.  The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring.  
In other words (for example), in a 4c part, CBo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring. Filters for the Down and Even ring polarity.

Ring-topology note (applies to every ring-usage event below): we really have two rings in HSX -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring; on the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring and the second half are on the right side, so in a 4c part, CBo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.

CBo ring-usage note: each UNC_C_RING_*_USED event counts the number of cycles that the given ring is being used at this ring stop. This includes cycles when packets are passing by and when packets are being sunk, but does not include cycles when packets are being sent from the ring stop.

unc_c_ring_ak_used.down_odd [uncore cache] AK Ring In Use; Down and Odd (event=0x1c,umask=8) -- filters for the Down and Odd ring polarity.
unc_c_ring_ak_used.up_even [uncore cache] AK Ring In Use; Up and Even (event=0x1c,umask=1) -- filters for the Up and Even ring polarity.
unc_c_ring_ak_used.up_odd [uncore cache] AK Ring In Use; Up and Odd (event=0x1c,umask=2) -- filters for the Up and Odd ring polarity.
unc_c_ring_bl_used.all [uncore cache] BL Ring in Use; All (event=0x1d,umask=0xf)
unc_c_ring_bl_used.down_even [uncore cache] BL Ring in Use; Down and Even (event=0x1d,umask=4) -- filters for the Down and Even ring polarity.
unc_c_ring_bl_used.down_odd [uncore cache] BL Ring in Use; Down and Odd (event=0x1d,umask=8) -- filters for the Down and Odd ring polarity.
unc_c_ring_bl_used.up_even [uncore cache] BL Ring in Use; Up and Even (event=0x1d,umask=1) -- filters for the Up and Even ring polarity.
unc_c_ring_bl_used.up_odd [uncore cache] BL Ring in Use; Up and Odd (event=0x1d,umask=2) -- filters for the Up and Odd ring polarity.

IV ring note: there is only 1 IV ring in HSX. Therefore, to monitor the Even ring, select both UP_EVEN and DN_EVEN; to monitor the Odd ring, select both UP_ODD and DN_ODD. The IV ring-usage events count the same way as the other CBo ring-usage events above.

unc_c_ring_iv_used.any [uncore cache] IV Ring in Use; Any (event=0x1e,umask=0xf) -- filters any polarity.
unc_c_ring_iv_used.dn [uncore cache] IV Ring in Use; Down (event=0x1e,umask=0xc)
unc_c_ring_iv_used.down [uncore cache] IV Ring in Use; Down (event=0x1e,umask=0xcc) -- filters for Down polarity.
unc_c_ring_iv_used.up [uncore cache] IV Ring in Use; Up (event=0x1e,umask=3)

unc_c_ring_sink_starved.ad [uncore cache] UNC_C_RING_SINK_STARVED.AD (event=6,umask=1)
unc_c_ring_sink_starved.ak [uncore cache] UNC_C_RING_SINK_STARVED.AK (event=6,umask=2)
unc_c_ring_sink_starved.bl [uncore cache] UNC_C_RING_SINK_STARVED.BL (event=6,umask=4)
unc_c_ring_sink_starved.iv [uncore cache] UNC_C_RING_SINK_STARVED.IV (event=6,umask=8)

unc_c_tor_inserts.local_opcode [uncore cache] TOR Inserts; Local Memory - Opcode Matched (event=0x35,umask=0x21) -- counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent: all transactions, satisfied by an opcode, inserted into the TOR that are satisfied by locally HOMed memory. There are a number of subevent 'filters', but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select MISS_OPC_MATCH and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182).
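The even/odd polarity selection above composes by OR-ing umask bits. A minimal sketch in plain Python of how the listed umask values combine (the bit values are transcribed from the events above; the variable names are mine):

# Polarity bits, taken from the umask values of the unc_c_ring_*_used events.
UP_EVEN, UP_ODD, DN_EVEN, DN_ODD = 0x1, 0x2, 0x4, 0x8

# There is only one IV ring, so per the description the Even ring is monitored
# by selecting both UP_EVEN and DN_EVEN, and the Odd ring by UP_ODD and DN_ODD.
IV_EVEN = UP_EVEN | DN_EVEN                     # 0x5
IV_ODD  = UP_ODD  | DN_ODD                      # 0xA
ALL     = UP_EVEN | UP_ODD | DN_EVEN | DN_ODD   # 0xf, matches unc_c_ring_bl_used.all

assert IV_EVEN == 0x5 and IV_ODD == 0xA and ALL == 0xF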
unc_h_snoop_resp.rspsfwd [uncore cache] Shared line forwarded from remote cache (event=0x21,umask=8; 64 Bytes) -- counts the total number of RspI snoop responses received. Whenever snoops are issued, one or more snoop responses will be returned depending on the topology of the system. In systems larger than 2s, when multiple snoops are returned, this counts all the snoops that are received. For example, if 3 snoops were issued and returned RspI, RspS, and RspSFwd, then each of these sub-events would increment by 1. Filters for a snoop response of RspSFwd, which is returned when a remote caching agent forwards data but holds on to its current copy; this is common for data and code reads that hit in a remote socket in E or F state.

unc_h_snp_resp_recv_local.rspsfwd [uncore cache] Snoop Responses Received Local; RspSFwd (event=0x60,umask=8) -- number of snoop responses received for a Local request. Filters for a snoop response of RspSFwd, as above.

unc_i_transactions.writes [uncore interconnect] Inbound Transaction Count; Writes (event=0x16,umask=2) -- counts the number of inbound transactions from the IRP to the Uncore. This can be filtered based on request type in addition to the source queue. Note the special filtering equation: we do OR-reduction on the request type, and if the SOURCE bit is set, we also do AND qualification based on the source portID. Tracks only write requests. Each write request should have a prefetch, so there is no need to explicitly track these requests. For writes that are tickled and have to retry, the counter will be incremented for each retry.

unc_q_clockticks [uncore interconnect] Number of qfclks (event=0x14) -- counts the number of clocks in the QPI LL. This clock runs at 1/4th the GT/s speed of the QPI link; for example, a 4GT/s link will have a qfclk of 1GHz. HSX does not support dynamic link speeds, so this frequency is fixed.

unc_q_rxl_crc_errors.normal_op [uncore interconnect] CRC Errors Detected; Normal Operations (event=3,umask=2) -- number of CRC errors detected in the QPI Agent during normal operation. Each QPI flit incorporates 8 bits of CRC for error detection; this counts the number of flits where the CRC was able to detect an error. After an error has been detected, the QPI agent will send a request to the transmitting socket to resend the flit (as well as any flits that came after it).

S-box ring-usage note: each UNC_S_RING_*_USED event counts the number of cycles that the given ring is being used at this ring stop. This includes cycles when packets are passing by and when packets are being sent, but does not include cycles when packets are being sunk into the ring stop. The ring-topology note above applies here as well.

unc_s_ring_ad_used.down_even [uncore interconnect] AD Ring In Use; Down and Even (event=0x1b,umask=4) -- filters for the Down and Even ring polarity.
unc_s_ring_ad_used.down_odd [uncore interconnect] AD Ring In Use; Down and Odd (event=0x1b,umask=8) -- filters for the Down and Odd ring polarity.
unc_s_ring_ad_used.up_even [uncore interconnect] AD Ring In Use; Up and Even (event=0x1b,umask=1) -- filters for the Up and Even ring polarity.
unc_s_ring_ad_used.up_odd [uncore interconnect] AD Ring In Use; Up and Odd (event=0x1b,umask=2) -- filters for the Up and Odd ring polarity.
unc_s_ring_ak_used.down_even / down_odd / up_even / up_odd [uncore interconnect] AK Ring In Use (event=0x1c, umask=4/8/1/2 respectively) -- same polarity filters as the AD ring events.
unc_s_ring_bl_used.down_even / down_odd / up_even / up_odd [uncore interconnect] BL Ring in Use (event=0x1d, umask=4/8/1/2 respectively) -- same polarity filters as the AD ring events.

unc_u_clockticks [uncore interconnect] UNC_U_CLOCKTICKS (event=0)
unc_m_clockticks [uncore memory] DRAM Clockticks (event=0)
unc_p_clockticks [uncore power] pclk Cycles (event=0) -- the PCU runs off a fixed 800 MHz clock. This event counts the number of pclk cycles measured while the counter was enabled. The pclk, like the Memory Controller's dclk, counts at a constant rate, making it a good measure of actual wall time.

unc_p_freq_band0_cycles / band1 / band2 / band3 [uncore power] Frequency Residency (event=0xb/0xc/0xd/0xe respectively) -- counts the number of cycles that the uncore was running at a frequency greater than or equal to the frequency that is configured in the filter. One can use all four counters with this event, so it is possible to track up to 4 configurable bands. One can use edge detect in conjunction with this event to track the number of times that we transitioned into a frequency greater than or equal to the configurable frequency.
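The two fixed clock ratios stated above (qfclk at 1/4th the link GT/s, pclk at a fixed 800 MHz) turn raw counts into time. A minimal sketch of that arithmetic (function names are mine):

def qfclk_hz(link_gts: float) -> float:
    # The QPI LL clock runs at 1/4th the GT/s speed of the link,
    # e.g. a 4 GT/s link has a 1 GHz qfclk.
    return link_gts * 1e9 / 4

def pclk_seconds(pclk_cycles: int) -> float:
    # The PCU pclk is a fixed 800 MHz clock, so a pclk cycle count
    # converts directly to wall time.
    return pclk_cycles / 800e6

assert qfclk_hz(4.0) == 1e9   # the 4GT/s example from the description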
One can also use inversion to track cycles when the uncore was below the configured frequency.

unc_p_ufs_transitions_no_change [uncore power] UNC_P_UFS_TRANSITIONS_NO_CHANGE (event=0x79) -- Ring GV with the same final and initial frequency.

l1d_pend_miss.l2_stall [cache] Number of cycles a demand request has waited in the L1D due to lack of L2 resources (event=0x48,period=1000003,umask=4) -- demand requests include cacheable/uncacheable demand load, store, lock or SW prefetch accesses.

l2_lines_out.non_silent [cache] Modified cache lines that are evicted by the L2 cache when triggered by an L2 cache fill (event=0xf2,period=200003,umask=2) -- those lines are in Modified state; modified lines are written back to L3.

l2_lines_out.silent [cache] Non-modified cache lines that are silently dropped by the L2 cache when triggered by an L2 cache fill (event=0xf2,period=200003,umask=1) -- these lines are typically in Shared or Exclusive state. A non-threaded event.

l2_lines_out.useless_hwpf [cache] Cache lines that have been L2 hardware prefetched but not used by demand accesses (event=0xf2,period=200003,umask=4) -- counts cache lines prefetched by the L2 hardware prefetcher but not used by a demand access when evicted from the L2 cache.

l2_rqsts.miss [cache] This event is deprecated (event=0x24,period=200003,umask=0x3f)
l2_rqsts.references [cache] This event is deprecated (event=0x24,period=200003,umask=0xff)

mem_load_misc_retired.uc [cache] Retired instructions with at least 1 uncacheable load or Bus Lock; supports address when precise (Precise event) (event=0xd4,period=100007,umask=4) -- retired instructions with at least one load to uncacheable memory-type, or at least one cache-line split locked access (Bus Lock).

OCR L3-hit events (all: event=0xb7,period=100003,umask=1): each ocr.<request>.l3_hit.<snoop> event counts requests of the given type that hit a cacheline in the L3, qualified by snoop outcome. Request types: demand_code_rd = demand instruction fetches and L1 instruction cache prefetches; demand_data_rd = demand data reads; demand_rfo = demand reads for ownership (RFO) and software prefetches for exclusive ownership (PREFETCHW); hwpf_l1d_and_swpf = L1 data cache prefetch requests and software prefetches (except PREFETCHW); hwpf_l2_data_rd = hardware prefetch data reads (which bring data to L2); hwpf_l2_rfo = hardware prefetch RFOs (which bring data to L2); hwpf_l3 = hardware prefetches to the L3 only; other = miscellaneous requests, such as I/O and un-cacheable accesses; streaming_wr = streaming stores. Snoop outcomes: any = a snoop was sent or not; snoop_hitm = a snoop hit in another core's caches and data forwarding is required, as the data is modified; snoop_hit_no_fwd = a snoop hit in another core but data forwarding is not required; snoop_miss = a snoop was sent but no other cores had the data; snoop_not_needed = a snoop was not needed to satisfy the request; snoop_sent = a snoop was sent. The offcore_rsp values as listed:

ocr.demand_code_rd.l3_hit: any=0x3FC03C0004, snoop_hitm=0x10003C0004, snoop_hit_no_fwd=0x4003C0004, snoop_miss=0x2003C0004, snoop_not_needed=0x1003C0004, snoop_sent=0x1E003C0004
ocr.demand_data_rd.l3_hit: any=0x3FC03C0001, snoop_hitm=0x10003C0001, snoop_hit_no_fwd=0x4003C0001, snoop_miss=0x2003C0001, snoop_not_needed=0x1003C0001, snoop_sent=0x1E003C0001
ocr.demand_rfo.l3_hit: any=0x3FC03C0002, snoop_hitm=0x10003C0002, snoop_hit_no_fwd=0x4003C0002, snoop_miss=0x2003C0002, snoop_not_needed=0x1003C0002, snoop_sent=0x1E003C0002
ocr.hwpf_l1d_and_swpf.l3_hit: any=0x3FC03C0400, snoop_miss=0x2003C0400, snoop_not_needed=0x1003C0400
ocr.hwpf_l2_data_rd.l3_hit: any=0x3FC03C0010, snoop_hitm=0x10003C0010, snoop_hit_no_fwd=0x4003C0010, snoop_miss=0x2003C0010, snoop_not_needed=0x1003C0010, snoop_sent=0x1E003C0010
ocr.hwpf_l2_rfo.l3_hit: any=0x3FC03C0020, snoop_hitm=0x10003C0020, snoop_hit_no_fwd=0x4003C0020, snoop_miss=0x2003C0020, snoop_not_needed=0x1003C0020, snoop_sent=0x1E003C0020
ocr.hwpf_l3.l3_hit: any=0x3FC03C2380
ocr.other.l3_hit: snoop_hit_no_fwd=0x4003C8000, snoop_miss=0x2003C8000, snoop_not_needed=0x1003C8000, snoop_sent=0x1E003C8000
ocr.streaming_wr.l3_hit: any=0x3FC03C0800

offcore_requests.all_requests [cache] Counts memory transactions sent to the uncore (event=0xb0,period=100003,umask=0x80) -- includes requests initiated by the core, all L3 prefetches, reads resulting from page walks, and snoop responses.
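The offcore_rsp values in the table above decompose into a request-type selector in the low bits OR-ed with a snoop-outcome selector in the high bits. A sketch of that composition (the selector values are transcribed from the listing; the dictionary and function names are mine):

# Request-type bits (low bits of offcore_rsp), transcribed from the events above.
REQUEST = {
    "demand_data_rd": 0x0001, "demand_rfo": 0x0002, "demand_code_rd": 0x0004,
    "hwpf_l2_data_rd": 0x0010, "hwpf_l2_rfo": 0x0020, "hwpf_l1d_and_swpf": 0x0400,
    "streaming_wr": 0x0800, "other": 0x8000,
}
# Snoop-outcome selectors (high bits), also transcribed from the listing.
L3_HIT = {
    "any": 0x3FC03C0000, "snoop_hitm": 0x10003C0000, "snoop_hit_no_fwd": 0x4003C0000,
    "snoop_miss": 0x2003C0000, "snoop_not_needed": 0x1003C0000, "snoop_sent": 0x1E003C0000,
}

def offcore_rsp(request: str, outcome: str) -> int:
    # An OCR event's offcore_rsp is the OR of the two selectors.
    return L3_HIT[outcome] | REQUEST[request]

# Reproduces the value listed for ocr.demand_data_rd.l3_hit.snoop_hitm.
assert offcore_rsp("demand_data_rd", "snoop_hitm") == 0x10003C0001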
offcore_requests_outstanding.all_data_rd [cache] For every cycle, increments by the number of outstanding data read requests pending (event=0x60,period=1000003,umask=8) -- data read requests include cacheable demand reads and L2 prefetches, but do not include RFOs, code reads or prefetches to the L3. Reads due to page walks resulting from any request type will also be counted. Requests are considered outstanding from the time they miss the core's L2 cache until the transaction completion message is sent to the requestor.

offcore_requests_outstanding.cycles_with_data_rd [cache] Cycles where at least 1 outstanding data read request is pending (event=0x60,cmask=1,period=1000003,umask=8) -- same definition of data read requests and of outstanding as above.

offcore_requests_outstanding.cycles_with_demand_rfo [cache] Cycles where at least 1 outstanding Demand RFO request is pending (event=0x60,cmask=1,period=1000003,umask=4) -- RFOs are initiated by a core as part of a data store operation. Demand RFO requests include RFOs, locks, and ItoM transactions. Requests are considered outstanding from the time they miss the core's L2 cache until the transaction completion message is sent to the requestor.

offcore_requests_outstanding.demand_data_rd [cache] For every cycle, increments by the number of outstanding demand data read requests pending (event=0x60,period=1000003,umask=1) -- requests are considered outstanding from the time they miss the core's L2 cache until the transaction completion message is sent to the requestor.

offcore_requests_outstanding.demand_rfo [cache] Store Read transactions pending for off-core; highly correlated (event=0x60,period=1000003,umask=4) -- counts the number of off-core outstanding read-for-ownership (RFO) store transactions every cycle. An RFO transaction is considered to be in the off-core outstanding state between L2 cache miss and transaction completion.

sq_misc.bus_lock [cache] Counts bus locks; accounts for cache line split locks and UC locks (event=0xf4,period=100003,umask=0x10) -- counts the more expensive bus lock needed to enforce cache coherency for certain memory accesses that need to be done atomically. Can be created by issuing an atomic instruction (via the LOCK prefix) which causes a cache line split or accesses uncacheable memory.

sq_misc.sq_full [cache] Cycles the queue waiting for offcore responses is full (event=0xf4,period=100003,umask=4) -- counts the cycles for which the thread is active and the queue waiting for responses from the uncore cannot take any more entries.

sw_prefetch_access.any [cache] Counts the number of PREFETCHNTA, PREFETCHW, PREFETCHT0, PREFETCHT1 or PREFETCHT2 instructions executed (event=0x32,period=100003,umask=0xf)
sw_prefetch_access.nta / .t0 / .t1_t2 / .prefetchw [cache] (event=0x32,period=100003, umask=1/2/4/8 respectively) -- count the number of PREFETCHNTA, PREFETCHT0, PREFETCHT1-or-PREFETCHT2, and PREFETCHW instructions executed, respectively.

fp_arith_inst_retired.scalar [floating point] Number of SSE/AVX computational scalar floating-point instructions retired (event=0xc7,period=1000003,umask=3) -- each count represents 1 computational operation. Applies to SSE* and AVX* scalar, double and single precision floating-point: ADD SUB MUL DIV MIN MAX RCP14 RSQRT14 SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice, as they perform multiple calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.

fp_arith_inst_retired.vector [floating point] Number of any Vector retired FP arithmetic instructions (event=0xc7,period=1000003,umask=0xfc)

decode.lcp [frontend] Stalls caused by changing prefix length of the instruction [alias to ILD_STALL.LCP] (event=0x87,period=500009,umask=1) -- counts cycles that Instruction Length Decoder (ILD) stalls occurred due to dynamically changing prefix length of the decoded instruction (by operand size prefix instruction 0x66, address size prefix instruction 0x67, or REX.W for Intel64). Count is proportional to the number of prefixes in a 16B-line. This may result in a three-cycle penalty for each LCP (Length Changing Prefix) in a 16-byte chunk.

dsb2mite_switches.count [frontend] Decode Stream Buffer (DSB)-to-MITE transitions count (event=0xab,cmask=1,edge=1,period=100003,umask=2) -- counts the number of Decode Stream Buffer (DSB, a.k.a. Uop Cache)-to-MITE speculative transitions.

dsb2mite_switches.penalty_cycles [frontend] DSB-to-MITE switch true penalty cycles (event=0xab,period=100003,umask=2) -- the Decode Stream Buffer (DSB) is a uop cache that holds translations of previously fetched instructions that were decoded by the legacy x86 decode pipeline (MITE). This event counts fetch penalty cycles when a transition occurs from DSB to MITE.

frontend_retired.latency_ge_N [frontend] (Precise events; event=0xc6,period=100007,umask=1) -- retired instructions that are fetched after an interval where the front-end delivered no uops for a period of at least N cycles which was not interrupted by a back-end stall. The threshold N is encoded in the frontend field: ge_1 frontend=0x500106, ge_2 0x500206, ge_4 0x500406, ge_8 0x500806, ge_16 0x501006, ge_32 0x502006, ge_64 0x504006, ge_128 0x508006, ge_256 0x510006, ge_512 0x520006.

icache_16b.ifdata_stall [frontend] Cycles where a code fetch is stalled due to L1 instruction cache miss [alias to ICACHE_DATA.STALLS] (event=0x80,period=500009,umask=4) -- counts cycles where a code line fetch is stalled due to an L1 instruction cache miss. The legacy decode pipeline works at a 16 Byte granularity.
icache_data.stalls [frontend] Same event, alias of ICACHE_16B.IFDATA_STALL (event=0x80,period=500009,umask=4)

icache_64b.iftag_hit [frontend] Instruction fetch tag lookups that hit in the instruction cache (L1I); counts at 64-byte cache-line granularity (event=0x83,period=200003,umask=1) -- accounts for both cacheable and uncacheable accesses.
icache_64b.iftag_miss [frontend] Instruction fetch tag lookups that miss in the instruction cache (L1I); counts at 64-byte cache-line granularity (event=0x83,period=200003,umask=2) -- accounts for both cacheable and uncacheable accesses.
icache_64b.iftag_stall [frontend] Cycles where a code fetch is stalled due to L1 instruction cache tag miss [alias to ICACHE_TAG.STALLS] (event=0x83,period=200003,umask=4)
icache_tag.stalls [frontend] Same event, alias of ICACHE_64B.IFTAG_STALL (event=0x83,period=200003,umask=4)

idq.dsb_cycles_ok [frontend] Cycles DSB is delivering optimal number of Uops (event=0x79,cmask=5,period=2000003,umask=8) -- counts the number of cycles where the optimal number of uops was delivered to the Instruction Decode Queue (IDQ) from the DSB (Decode Stream Buffer) path. Count includes uops that may 'bypass' the IDQ.

idq.mite_cycles_ok [frontend] Cycles MITE is delivering optimal number of Uops (event=0x79,cmask=5,period=2000003,umask=4) -- counts the number of cycles where the optimal number of uops was delivered to the IDQ from the MITE (legacy decode pipeline) path. During these cycles uops are not being delivered from the Decode Stream Buffer (DSB).

idq.ms_cycles_any [frontend] Cycles when uops are being delivered to IDQ while MS is busy (event=0x79,cmask=1,period=2000003,umask=0x30) -- counts cycles during which uops are being delivered to the Instruction Decode Queue (IDQ) while the Microcode Sequencer (MS) is busy. Uops may be initiated by the Decode Stream Buffer (DSB) or MITE.

idq.ms_switches [frontend] Number of switches from DSB or MITE to the MS (event=0x79,cmask=1,edge=1,period=100003,umask=0x30) -- number of switches from DSB (Decode Stream Buffer) or MITE (legacy decode pipeline) to the Microcode Sequencer.

idq.ms_uops [frontend] Uops delivered to IDQ while MS is busy (event=0x79,period=100003,umask=0x30) -- counts the total number of uops delivered by the Microcode Sequencer (MS). Any instruction over 4 uops will be delivered by the MS. Some instructions, such as transcendentals, may additionally generate uops from the MS.

idq_uops_not_delivered.cycles_0_uops_deliv.core [frontend] Cycles when no uops are delivered by the IDQ while the backend of the machine is not stalled (event=0x9c,cmask=5,period=1000003,umask=1) -- counts the number of cycles when no uops were delivered by the Instruction Decode Queue (IDQ) to the back-end of the pipeline when there were no back-end stalls. This event counts for one SMT thread in a given cycle.

idq_uops_not_delivered.cycles_fe_was_ok [frontend] Cycles when the optimal number of uops was delivered to the back-end while the back-end is not stalled (event=0x9c,cmask=1,inv=1,period=1000003,umask=1) -- counts the number of cycles when the optimal number of uops was delivered by the Instruction Decode Queue (IDQ) to the back-end of the pipeline when there were no back-end stalls. This event counts for one SMT thread in a given cycle.
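The frontend_retired.latency_ge_N events above differ only in their frontend field, and the transcribed values follow a simple pattern: the latency threshold shifted left by 8 bits OR-ed into a common base. A sketch checking that observation against the listed values (this is a pattern observed in the listing, not a specification):

def frontend_field(threshold: int) -> int:
    # Observed encoding of the latency threshold in the frontend field.
    return 0x500006 | (threshold << 8)

listed = {1: 0x500106, 2: 0x500206, 4: 0x500406, 8: 0x500806, 16: 0x501006,
          32: 0x502006, 64: 0x504006, 128: 0x508006, 256: 0x510006, 512: 0x520006}
for n, value in listed.items():
    assert frontend_field(n) == value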
hle_retired.aborted [memory] Number of times an HLE execution aborted due to any reason (multiple categories may count as one) (event=0xc8,period=100003,umask=4)
hle_retired.aborted_events [memory] Number of times an HLE execution aborted due to unfriendly events (such as interrupts) (event=0xc8,period=100003,umask=0x80)
hle_retired.aborted_mem [memory] Number of times an HLE execution aborted due to various memory events (e.g., read/write capacity and conflicts) (event=0xc8,period=100003,umask=8)
hle_retired.aborted_unfriendly [memory] Number of times an HLE execution aborted due to HLE-unfriendly instructions and certain unfriendly events (such as AD assists) (event=0xc8,period=100003,umask=0x20)
hle_retired.commit [memory] Number of times an HLE execution successfully committed (event=0xc8,period=100003,umask=2)
hle_retired.start [memory] Number of times an HLE execution started (event=0xc8,period=100003,umask=1) -- counts the number of times we entered an HLE region. Does not count nested transactions.

OCR L3-miss events (all: event=0xb7,period=100003,umask=1): each ocr.<request>.l3_miss event counts requests of the given type that were not supplied by the L3 cache. The request types are the same as in the L3-hit table above; the offcore_rsp values as listed: demand_code_rd=0x3FFFC00004, demand_data_rd=0x3FFFC00001, demand_rfo=0x3FFFC00002, hwpf_l1d_and_swpf=0x3FFFC00400, hwpf_l2_data_rd=0x3FFFC00010, hwpf_l2_rfo=0x3FFFC00020, other=0x3FFFC08000, streaming_wr=0x3FFFC00800.

offcore_requests.l3_miss_demand_data_rd [memory] Counts demand data read requests that miss the L3 cache (event=0xb0,period=100003,umask=0x10)

offcore_requests_outstanding.cycles_with_l3_miss_demand_data_rd [memory] Cycles where at least one demand data read request known to have missed the L3 cache is pending (event=0x60,cmask=1,period=1000003,umask=0x10) -- note that this does not capture all elapsed cycles while requests are outstanding, only cycles from when the requests were known to have missed the L3 cache.

tx_exec.misc2 [memory] Counts the number of times a class of instructions that may cause a transactional abort was executed inside a transactional region (event=0x5d,period=100003,umask=2) -- counts unfriendly TSX aborts triggered by a vzeroupper instruction.

tx_exec.misc3 [memory] Number of times an instruction execution caused the supported transactional nest count to be exceeded (event=0x5d,period=100003,umask=4) -- counts unfriendly TSX aborts triggered by a nest count that is too deep.

tx_mem.abort_hle_elision_buffer_mismatch [memory] Number of times an HLE transactional execution aborted due to an XRELEASE lock not satisfying the address and value requirements in the elision buffer (event=0x54,period=100003,umask=0x10) -- counts TSX aborts triggered on release/commit because data and address did not match.

tx_mem.abort_hle_elision_buffer_not_empty [memory] Number of times an HLE transactional execution aborted due to NoAllocatedElisionBuffer being non-zero (event=0x54,period=100003,umask=8) -- counts TSX aborts triggered on commit because the Lock Buffer was not empty.

tx_mem.abort_hle_elision_buffer_unsupported_alignment [memory] Number of times an HLE transactional execution aborted due to an unsupported read alignment from the elision buffer (event=0x54,period=100003,umask=0x20) -- counts TSX aborts triggered by attempting an unsupported alignment from the Lock Buffer.

tx_mem.abort_hle_store_to_elided_lock [memory] Number of times an HLE transactional region aborted due to a non-XRELEASE-prefixed instruction writing to an elided lock in the elision buffer (event=0x54,period=100003,umask=4) -- counts TSX aborts triggered by a non-release/commit store to a lock.

tx_mem.hle_elision_buffer_full [memory] Number of times an HLE lock could not be elided due to ElisionBufferAvailable being zero (event=0x54,period=100003,umask=0x40) -- counts the number of times we could not allocate the Lock Buffer.

core_power.lvl0_turbo_license [other] Core cycles where the core was running in a manner where Turbo may be clipped to the Non-AVX turbo schedule (event=0x28,period=200003,umask=7) -- counts core cycles where the core was running with power-delivery for baseline license level 0. This includes non-AVX code, SSE, AVX 128-bit, and low-current AVX 256-bit code.

core_power.lvl1_turbo_license [other] Core cycles where the core was running in a manner where Turbo may be clipped to the AVX2 turbo schedule (event=0x28,period=200003,umask=0x18) -- counts core cycles where the core was running with power-delivery for license level 1. This includes high-current AVX 256-bit instructions as well as low-current AVX 512-bit instructions.

OCR DRAM and any-response events (all: event=0xb7,period=100003,umask=1; unit "other"): for each request type, ocr.<request>.dram and ocr.<request>.local_dram both count requests for which DRAM supplied the data and carry identical offcore_rsp values, while ocr.<request>.any_response counts requests with any type of response. The offcore_rsp values as listed: demand_code_rd dram=local_dram=0x184000004; demand_data_rd dram=local_dram=0x184000001; demand_rfo any_response=0x10002, dram=local_dram=0x184000002; hwpf_l1d_and_swpf any_response=0x10400, dram=local_dram=0x184000400; hwpf_l2_data_rd any_response=0x10010, dram=local_dram=0x184000010; hwpf_l2_rfo any_response=0x10020, dram=local_dram=0x184000020; other any_response=0x18000, dram=local_dram=0x184008000; streaming_wr dram=local_dram=0x184000800.
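Since these are the event names that perf itself exposes (this table is compiled into the perf binary), they can be passed directly to perf stat's -e option. A minimal sketch of driving a few of the DRAM-sourced events from Python; the workload path "./a.out" is a placeholder:

import subprocess

# Event names transcribed from the listing above.
events = ["ocr.demand_data_rd.dram", "ocr.demand_rfo.dram", "ocr.streaming_wr.dram"]
cmd = ["perf", "stat", "-e", ",".join(events), "./a.out"]
# subprocess.run(cmd, check=True)   # uncomment to actually run under perf
print(" ".join(cmd))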
accesses that DRAM supplied the requestevent=0xb7,period=100003,umask=1,offcore_rsp=0x18400800000ocr.streaming_wr.dramotherCounts streaming stores that DRAM supplied the requestevent=0xb7,period=100003,umask=1,offcore_rsp=0x18400080000ocr.streaming_wr.local_dramotherCounts streaming stores that DRAM supplied the requestevent=0xb7,period=100003,umask=1,offcore_rsp=0x18400080000arith.divider_activepipelineCycles when divide unit is busy executing divide or square root operationsevent=0x14,cmask=1,period=1000003,umask=900Counts cycles when divide unit is busy executing divide or square root operations. Accounts for integer and floating-point operationsassists.anypipelineNumber of occurrences where a microcode assist is invoked by hardwareevent=0xc1,period=100003,umask=700Counts the number of occurrences where a microcode assist is invoked by hardware Examples include AD (page Access Dirty), FP and AVX related assistsbr_misp_retired.all_branchespipelineAll mispredicted branch instructions retired (Precise event)event=0xc5,period=5002100Counts all the retired branch instructions that were mispredicted by the processor. A branch misprediction occurs when the processor incorrectly predicts the destination of the branch.  When the misprediction is discovered at execution, all the instructions executed in the wrong (speculative) path must be discarded, and the processor must start fetching from the correct path (Precise event)br_misp_retired.condpipelineMispredicted conditional branch instructions retired (Precise event)event=0xc5,period=50021,umask=0x1100Counts mispredicted conditional branch instructions retired (Precise event)br_misp_retired.cond_ntakenpipelineMispredicted non-taken conditional branch instructions retired (Precise event)event=0xc5,period=50021,umask=0x1000Counts the number of conditional branch instructions retired that were mispredicted and the branch direction was not taken (Precise event)br_misp_retired.cond_takenpipelinenumber of branch instructions retired that were mispredicted and taken (Precise event)event=0xc5,period=50021,umask=100Counts taken conditional mispredicted branch instructions retired (Precise event)br_misp_retired.indirectpipelineAll miss-predicted indirect branch instructions retired (excluding RETs. TSX aborts is considered indirect branch) (Precise event)event=0xc5,period=50021,umask=0x8000Counts all miss-predicted indirect branch instructions retired (excluding RETs. TSX aborts is considered indirect branch) (Precise event)br_misp_retired.indirect_callpipelineMispredicted indirect CALL instructions retired (Precise event)event=0xc5,period=50021,umask=200Counts retired mispredicted indirect (near taken) CALL instructions, including both register and memory indirect (Precise event)br_misp_retired.near_takenpipelineNumber of near branch instructions retired that were mispredicted and taken (Precise event)event=0xc5,period=50021,umask=0x2000Counts number of near branch instructions retired that were mispredicted and taken (Precise event)br_misp_retired.retpipelineThis event counts the number of mispredicted ret instructions retired. Non PEBS (Precise event)event=0xc5,period=50021,umask=800This is a non-precise version (that is, does not use PEBS) of the event that counts mispredicted return instructions retired (Precise event)cpu_clk_unhalted.ref_tscpipelineReference cycles when the core is not in halt stateevent=0,period=2000003,umask=300Counts the number of reference cycles when the core is not in a halt state. 
cpu_clk_unhalted.ref_xclk  (event=0x3c,period=25003,umask=1)
    Counts core crystal clock cycles when the thread is unhalted.
cycle_activity.stalls_mem_any  (event=0xa3,cmask=20,period=1000003,umask=0x14)
    Execution stalls while the memory subsystem has an outstanding load.
ild_stall.lcp  (event=0x87,period=500009,umask=1)
    Stalls caused by a changing prefix length of the instruction. Counts cycles of Instruction Length Decoder (ILD) stalls due to a dynamically changing prefix length of the decoded instruction (operand-size prefix 0x66, address-size prefix 0x67, or REX.W for Intel 64). The count is proportional to the number of prefixes in a 16-byte line; each length-changing prefix (LCP) in a 16-byte chunk may incur a three-cycle penalty. This event is an alias of DECODE.LCP.
inst_retired.any  (event=0xc0,period=2000003; Precise event)
    Counts the number of instructions retired, an architectural PerfMon event. Counting continues during hardware interrupts, traps, and inside interrupt handlers. INST_RETIRED.ANY is counted by a designated fixed counter, freeing programmable counters for other events; INST_RETIRED.ANY_P is counted by a programmable counter.
inst_retired.any_p  (event=0xc0,period=2000003; Precise event)
    The programmable-counter version of the event above.
inst_retired.nop  (event=0xc0,period=2000003,umask=2; Precise event)
    Counts all retired NOP instructions.
inst_retired.prec_dist  (event=0,period=2000003,umask=1; Precise event)
    A version of INST_RETIRED that allows a more unbiased distribution of samples across retired instructions, using the Precise Distribution of Instructions Retired (PDIR) feature to mitigate sampling bias. Use on Fixed Counter 0.
inst_retired.stall_cycles  (event=0xc0,cmask=1,inv=1,period=1000003,umask=1)
    Counts cycles without actually retired instructions.
int_misc.all_recovery_cycles  (event=0xd,cmask=1,period=2000003,umask=3)
    Counts cycles the back-end cluster is recovering after a misspeculation or a store buffer or load buffer drain stall.
int_misc.clears_count  (event=0xd,cmask=1,edge=1,period=500009,umask=1)
    Counts the number of speculative clears due to any type of branch misprediction or machine clear.
int_misc.clear_resteer_cycles  (event=0xd,period=500009,umask=0x80)
    Counts cycles after recovery from a branch misprediction or machine clear until the first uop is issued from the resteered path.
int_misc.recovery_cycles  (event=0xd,period=500009,umask=1)
    Counts core cycles the resource allocator was stalled due to recovery from an earlier branch misprediction or machine clear.
int_misc.uop_dropping  (event=0xd,period=1000003,umask=0x10)
    Estimated number of Top-down Microarchitecture Analysis (TMA) slots that got dropped due to non-front-end reasons.
ld_blocks.no_sr  (event=3,period=100003,umask=8)
    Counts the number of times split load operations are temporarily blocked because all resources for handling the split accesses are in use.
ld_blocks_partial.address_alias  (event=7,period=100003,umask=1)
    Counts the number of times a load was blocked by a false dependency due to a partial compare on address.
lsd.cycles_ok  (event=0xa8,cmask=5,period=2000003,umask=1)
    Counts cycles when the optimal number of uops is delivered by the LSD (Loop Stream Detector) rather than the decoder.
misc_retired.lbr_inserts  (event=0xcc,period=100003,umask=0x20)
    Increments whenever there is an update to the Last Branch Record (LBR) array: when an entry is added, or removed in the case of RETURNs in call-stack mode. Requires LBR to be enabled properly.
misc_retired.pause_inst  (event=0xcc,period=100003,umask=0x40)
    Counts the number of retired PAUSE instructions. Not supported on the first SKL and KBL products.
rs_events.empty_cycles  (event=0x5e,period=1000003,umask=1)
    Counts cycles during which the Reservation Station (RS) is empty for this logical processor, usually caused by front-end starvation periods (e.g. branch mispredictions or i-cache misses).
rs_events.empty_end  (event=0x5e,cmask=1,edge=1,inv=1,period=100003,umask=1)
    Counts the ends of periods where the RS was empty. Useful for closely sampling front-end latency issues (see the FRONTEND_RETIRED designated precise events).
topdown.backend_bound_slots  (event=0xa4,period=10000003,umask=2)
    Counts TMA slots where no micro-operations were issued from front-end to back-end due to a lack of back-end resources.
uops_decoded.dec0  (event=0x56,period=1000003,umask=1)
    Counts uops decoded from instructions exclusively fetched by decoder 0.
uops_dispatched.port_0  (event=0xa1,period=2000003,umask=1)
    Counts, per thread, cycles during which at least one uop is dispatched from the Reservation Station (RS) to port 0.
uops_dispatched.port_1  (event=0xa1,period=2000003,umask=2)
    As above, for port 1.
uops_dispatched.port_2_3  (event=0xa1,period=2000003,umask=4)
    As above, for ports 2 and 3.
uops_dispatched.port_4_9  (event=0xa1,period=2000003,umask=0x10)
    As above, for ports 4 and 9.
uops_dispatched.port_5  (event=0xa1,period=2000003,umask=0x20)
    As above, for port 5.
uops_dispatched.port_6  (event=0xa1,period=2000003,umask=0x40)
    As above, for port 6.
uops_dispatched.port_7_8  (event=0xa1,period=2000003,umask=0x80)
    As above, for ports 7 and 8.
uops_issued.any  (event=0xe,period=2000003,umask=1)
    Counts the number of uops that the Resource Allocation Table (RAT) issues to the Reservation Station (RS).
uops_issued.stall_cycles  (event=0xe,cmask=1,inv=1,period=1000003,umask=1)
    Counts cycles during which the RAT does not issue any uops to the RS for the current thread.
uops_issued.vector_width_mismatch  (event=0xe,period=100003,umask=2)
    Counts Blend uops inserted at the issue stage by the RAT in order to preserve the upper bits of vector registers. Starting with the Skylake microarchitecture, these Blend uops are needed because every Intel SSE instruction executed in Dirty Upper State must preserve bits 128-255 of the destination register. For more information, refer to the "Mixing Intel AVX and Intel SSE Code" section of the Optimization Guide.
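As a hedged usage sketch (our own, not from the vendor table): the entry for inst_retired.any_p above can be counted directly via perf_event_open(2) on x86_64 Linux. The struct layout below is the first 64 bytes (PERF_ATTR_SIZE_VER0) of the kernel's perf_event_attr, and the syscall number 298 is x86_64-specific; run under a perf_event_paranoid setting that permits user-space counting.

    # Hedged sketch: count inst_retired.any_p (event=0xc0) for this process.
    import ctypes, os, struct

    PERF_TYPE_RAW = 4
    SYS_perf_event_open = 298  # x86_64

    class PerfEventAttr(ctypes.Structure):
        # First 64 bytes of struct perf_event_attr (PERF_ATTR_SIZE_VER0).
        _fields_ = [("type", ctypes.c_uint32), ("size", ctypes.c_uint32),
                    ("config", ctypes.c_uint64), ("sample_period", ctypes.c_uint64),
                    ("sample_type", ctypes.c_uint64), ("read_format", ctypes.c_uint64),
                    ("flags", ctypes.c_uint64), ("wakeup_events", ctypes.c_uint32),
                    ("bp_type", ctypes.c_uint32), ("config1", ctypes.c_uint64)]

    libc = ctypes.CDLL(None, use_errno=True)
    attr = PerfEventAttr()
    attr.type = PERF_TYPE_RAW
    attr.size = ctypes.sizeof(attr)   # 64
    attr.config = 0xC0                # inst_retired.any_p, umask=0
    attr.flags = 1 << 5               # exclude_kernel, for unprivileged use

    # perf_event_open(attr, pid=0 (self), cpu=-1 (any), group_fd=-1, flags=0)
    fd = libc.syscall(SYS_perf_event_open, ctypes.byref(attr), 0, -1, -1, 0)
    if fd < 0:
        raise OSError(ctypes.get_errno(), os.strerror(ctypes.get_errno()))

    sum(range(1_000_000))             # some work to measure
    count = struct.unpack("Q", os.read(fd, 8))[0]
    os.close(fd)
    print(f"retired ~{count:,} instructions")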
uops_retired.stall_cycles  (event=0xc2,cmask=1,inv=1,period=1000003,umask=2)
    Counts cycles without actually retired uops.
uops_retired.total_cycles  (event=0xc2,cmask=10,inv=1,period=1000003,umask=2)
    Counts cycles with fewer than 10 actually retired uops, using an always-true condition (uops_ret < 16) applied to the non-PEBS uops-retired event.

Uncore interconnect events (per-package):

unc_arb_coh_trk_requests.all  (event=0x84,umask=1)
    Number of entries allocated; accounts for any type, e.g. snoops.
unc_arb_dat_occupancy.all  (event=0x85,umask=1)
    Deprecated; refer to UNC_ARB_IFA_OCCUPANCY.ALL.
unc_arb_dat_occupancy.rd  (event=0x85,umask=2)
    Deprecated.
unc_arb_req_trk_occupancy.drd  (event=0x80,umask=2)
    Deprecated; refer to UNC_ARB_TRK_OCCUPANCY.RD.
unc_arb_req_trk_request.drd  (event=0x81,umask=2)
    Number of all coherent data-read entries; does not include prefetches.
unc_arb_trk_occupancy.all  (event=0x80,umask=1)
    Deprecated.
unc_arb_trk_occupancy.rd  (event=0x80,umask=2)
    Deprecated; refer to UNC_ARB_REQ_TRK_OCCUPANCY.DRD.
unc_arb_trk_requests.all  (event=0x81,umask=1)
    Total number of all outgoing entries allocated; accounts for coherent and non-coherent traffic.
unc_arb_trk_requests.rd  (event=0x81,umask=2)
    Deprecated; refer to UNC_ARB_REQ_TRK_REQUEST.DRD.
unc_clock.socket  (event=0xff; uncore other)
    UNC_CLOCK.SOCKET.

Virtual memory events, topic "virtual memory":

dtlb_load_misses.stlb_hit  (event=8,period=100003,umask=0x20)
    Counts loads that miss the DTLB (Data TLB) and hit the STLB (second-level TLB).
dtlb_load_misses.walk_active  (event=8,cmask=1,period=100003,umask=0x10)
    Counts cycles when at least one PMH (Page Miss Handler) is busy with a page walk for a demand load.
dtlb_load_misses.walk_completed_2m_4m  (event=8,period=100003,umask=4)
    Counts completed page walks (2M/4M sizes) caused by demand data loads. This implies the address translation missed in the DTLB and further TLB levels; the walk can end with or without a fault.
dtlb_load_misses.walk_completed_4k  (event=8,period=100003,umask=2)
    As above, for 4K pages.
dtlb_load_misses.walk_pending  (event=8,period=100003,umask=0x10)
    Counts the number of page walks outstanding for a demand load in the PMH each cycle.
dtlb_store_misses.stlb_hit  (event=0x49,period=100003,umask=0x20)
    Counts stores that miss the DTLB and hit the STLB.
dtlb_store_misses.walk_active  (event=0x49,cmask=1,period=100003,umask=0x10)
    Counts cycles when at least one PMH is busy with a page walk for a store.
dtlb_store_misses.walk_completed_2m_4m  (event=0x49,period=100003,umask=4)
    Counts completed page walks (2M/4M sizes) caused by demand data stores; the walk can end with or without a fault.
dtlb_store_misses.walk_completed_4k  (event=0x49,period=100003,umask=2)
    As above, for 4K pages.
dtlb_store_misses.walk_pending  (event=0x49,period=100003,umask=0x10)
    Counts the number of page walks outstanding for a store in the PMH each cycle.
itlb_misses.stlb_hit  (event=0x85,period=100003,umask=0x20)
    Counts instruction fetch requests that miss the ITLB (Instruction TLB) and hit the STLB.
itlb_misses.walk_active  (event=0x85,cmask=1,period=100003,umask=0x10)
    Counts cycles when at least one PMH is busy with a page walk for a code (instruction fetch) request.
itlb_misses.walk_pending  (event=0x85,period=100003,umask=0x10)
    Counts the number of page walks outstanding for an outstanding code (instruction fetch) request in the PMH each cycle.

Cache events, topic "cache":

l2_lines_out.non_silent  (event=0xf2,period=200003,umask=2)
    Counts lines evicted from the L2 cache when triggered by an L2 cache fill. Evicted lines are delivered to the L3, which may or may not cache them, according to system load and priorities.
mem_load_l3_hit_retired.xsnp_hit  (event=0xd2,period=20011,umask=2; Precise event, supports address when precise)
    Deprecated; refer to MEM_LOAD_L3_HIT_RETIRED.XSNP_NO_FWD.
mem_load_l3_hit_retired.xsnp_hitm  (event=0xd2,period=20011,umask=4; Precise event, supports address when precise)
    Deprecated; refer to MEM_LOAD_L3_HIT_RETIRED.XSNP_FWD.
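The walk_active/walk_pending pairs above support simple duty-cycle style metrics. A hedged sketch under our own assumptions (the counts are made-up placeholders, not measurements):

    # Derive DTLB page-walk metrics from raw counts of the events above.
    counts = {
        "dtlb_load_misses.walk_pending": 1_200_000,  # walks in flight, summed per cycle
        "dtlb_load_misses.walk_active":    900_000,  # cycles with >= 1 walk in flight
        "cycles":                       50_000_000,  # unhalted core cycles
    }

    # Fraction of cycles with at least one demand-load walk in flight:
    walk_duty = counts["dtlb_load_misses.walk_active"] / counts["cycles"]
    # Average concurrent walks while any walk is active:
    avg_walks = (counts["dtlb_load_misses.walk_pending"]
                 / counts["dtlb_load_misses.walk_active"])
    print(f"walk duty cycle {walk_duty:.1%}, avg concurrent walks {avg_walks:.2f}")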
OCR L3-hit and snoop events, topic "cache". All entries use event=0xb7,period=100003,umask=1; only offcore_rsp differs. Request classes:

    demand_code_rd    : demand instruction fetches and L1 instruction cache prefetches
    demand_data_rd    : demand data reads
    demand_rfo        : demand reads for ownership (RFO) and software prefetches for exclusive ownership (PREFETCHW)
    hwpf_l1d_and_swpf : L1 data cache prefetch requests and software prefetches (except PREFETCHW)
    hwpf_l3           : hardware prefetches to the L3 only
    prefetches        : hardware and software prefetches to all cache levels
    reads_to_core     : all (cacheable) data read, code read, and RFO requests, demand and prefetch, to the core caches (L1 or L2)
    streaming_wr      : streaming stores

Response suffixes:

    .l3_hit                         : hit in the L3 or snooped from another core's caches on the same socket
    .l3_hit.snoop_hitm              : a snoop hit a modified line in another core's caches, and the data was forwarded
    .l3_hit.snoop_hit_no_fwd        : a snoop hit in another core, which did not forward the data
    .l3_hit.snoop_hit_with_fwd      : a snoop hit in another core's caches, which forwarded the unmodified data to the requesting core
    .remote_cache.snoop_hitm        : supplied by a cache on a remote socket where a snoop hit a modified line, which was forwarded
    .remote_cache.snoop_hit_with_fwd: supplied by a cache on a remote socket where a snoop hit forwarded unmodified data
    .remote_cache.snoop_fwd         : supplied by a cache on a remote socket where a snoop was sent and data was returned (modified or not modified)
    .snc_cache.hitm                 : hit a modified line in a distant L3, or snooped from a distant core's L1/L2 on this socket, in SNC (sub-NUMA cluster) mode
    .snc_cache.hit_with_fwd         : hit a non-modified line in a distant L3, or snooped from a distant core's L1/L2 on this socket, in SNC mode

    ocr.demand_code_rd.l3_hit                          offcore_rsp=0x3F803C0004
    ocr.demand_code_rd.l3_hit.snoop_hitm               offcore_rsp=0x10003C0004
    ocr.demand_code_rd.snc_cache.hitm                  offcore_rsp=0x1008000004
    ocr.demand_code_rd.snc_cache.hit_with_fwd          offcore_rsp=0x808000004
    ocr.demand_data_rd.l3_hit                          offcore_rsp=0x3F803C0001
    ocr.demand_data_rd.l3_hit.snoop_hitm               offcore_rsp=0x10003C0001
    ocr.demand_data_rd.l3_hit.snoop_hit_no_fwd         offcore_rsp=0x4003C0001
    ocr.demand_data_rd.l3_hit.snoop_hit_with_fwd       offcore_rsp=0x8003C0001
    ocr.demand_data_rd.remote_cache.snoop_hitm         offcore_rsp=0x1030000001
    ocr.demand_data_rd.remote_cache.snoop_hit_with_fwd offcore_rsp=0x830000001
    ocr.demand_data_rd.snc_cache.hitm                  offcore_rsp=0x1008000001
    ocr.demand_data_rd.snc_cache.hit_with_fwd          offcore_rsp=0x808000001
    ocr.demand_rfo.l3_hit                              offcore_rsp=0x3F803C0002
    ocr.demand_rfo.l3_hit.snoop_hitm                   offcore_rsp=0x10003C0002
    ocr.demand_rfo.snc_cache.hitm                      offcore_rsp=0x1008000002
    ocr.demand_rfo.snc_cache.hit_with_fwd              offcore_rsp=0x808000002
    ocr.hwpf_l1d_and_swpf.l3_hit                       offcore_rsp=0x3F803C0400
    ocr.hwpf_l3.l3_hit                                 offcore_rsp=0x80082380
    ocr.prefetches.l3_hit                              offcore_rsp=0x3F803C27F0
    ocr.reads_to_core.l3_hit                           offcore_rsp=0x3F003C0477
    ocr.reads_to_core.l3_hit.snoop_hitm                offcore_rsp=0x10003C0477
    ocr.reads_to_core.l3_hit.snoop_hit_no_fwd          offcore_rsp=0x4003C0477
    ocr.reads_to_core.l3_hit.snoop_hit_with_fwd        offcore_rsp=0x8003C0477
    ocr.reads_to_core.remote_cache.snoop_fwd           offcore_rsp=0x1830000477
    ocr.reads_to_core.remote_cache.snoop_hitm          offcore_rsp=0x1030000477
    ocr.reads_to_core.remote_cache.snoop_hit_with_fwd  offcore_rsp=0x830000477
    ocr.reads_to_core.snc_cache.hitm                   offcore_rsp=0x1008000477
    ocr.reads_to_core.snc_cache.hit_with_fwd           offcore_rsp=0x808000477
    ocr.streaming_wr.l3_hit                            offcore_rsp=0x80080800

offcore_requests.demand_code_rd  (event=0xb0,period=100003,umask=2)
    Counts both cacheable and non-cacheable code reads to the core.
offcore_requests_outstanding.cycles_with_demand_code_rd  (event=0x60,cmask=1,period=1000003,umask=2)
    Cycles with outstanding code read requests pending. Code reads include both cacheable and non-cacheable reads; requests are considered outstanding from the time they miss the core's L2 cache until the transaction completion message is sent to the requestor.
offcore_requests_outstanding.demand_code_rd  (event=0x60,period=1000003,umask=2)
    For every cycle, increments by the number of outstanding code read requests pending, over the same outstanding window as above.
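The occupancy/request pair above permits a Little's-law latency estimate; this is our own derivation, not a vendor-defined metric, and the counts below are placeholders:

    # Approximate average code-read miss latency in core cycles:
    # occupancy-sum / request-count (Little's law), using the two events above.
    outstanding_sum = 4_800_000   # offcore_requests_outstanding.demand_code_rd
    requests        =    60_000   # offcore_requests.demand_code_rd
    print(f"approx. avg latency: {outstanding_sum / requests:.0f} cycles")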
OCR L3-miss events, topic "memory". Same base encoding (event=0xb7,period=100003,umask=1); request classes as defined above, plus itom (full cache-line writes, ItoM). Suffixes: .l3_miss means not supplied by the local socket's L1, L2, or L3 caches; .l3_miss_local additionally means the cache line is homed locally (for demand_rfo and reads_to_core, the description reads "and were supplied by the local socket"); .l3_miss_local_socket (reads_to_core only) means the request missed the L3 and was supplied by the local socket (DRAM or PMM), whether or not in sub-NUMA cluster (SNC) mode, counting accesses controlled by the close or distant SNC cluster.

    ocr.demand_code_rd.l3_miss               offcore_rsp=0x3FBFC00004
    ocr.demand_code_rd.l3_miss_local         offcore_rsp=0x3F84400004
    ocr.demand_data_rd.l3_miss               offcore_rsp=0x3FBFC00001
    ocr.demand_data_rd.l3_miss_local         offcore_rsp=0x3F84400001
    ocr.demand_rfo.l3_miss                   offcore_rsp=0x3F3FC00002
    ocr.demand_rfo.l3_miss_local             offcore_rsp=0x3F04400002
    ocr.hwpf_l1d_and_swpf.l3_miss            offcore_rsp=0x3FBFC00400
    ocr.hwpf_l1d_and_swpf.l3_miss_local      offcore_rsp=0x3F84400400
    ocr.hwpf_l3.l3_miss                      offcore_rsp=0x94002380
    ocr.hwpf_l3.l3_miss_local                offcore_rsp=0x84002380
    ocr.itom.l3_miss_local                   offcore_rsp=0x84000002
    ocr.other.l3_miss                        offcore_rsp=0x3FBFC08000
    ocr.other.l3_miss_local                  offcore_rsp=0x3F84408000
    ocr.prefetches.l3_miss_local             offcore_rsp=0x3F844027F0
    ocr.reads_to_core.l3_miss                offcore_rsp=0x3F3FC00477
    ocr.reads_to_core.l3_miss_local          offcore_rsp=0x3F04400477
    ocr.reads_to_core.l3_miss_local_socket   offcore_rsp=0x70CC00477
    ocr.streaming_wr.l3_miss                 offcore_rsp=0x94000800
    ocr.streaming_wr.l3_miss_local           offcore_rsp=0x84000800

offcore_requests_outstanding.l3_miss_demand_data_rd  (event=0x60,period=2000003,umask=0x10)
    Deprecated.
offcore_requests_outstanding.l3_miss_demand_data_rd_ge_6  (event=0x60,cmask=6,period=2000003,umask=0x10)
    Cycles where the core is waiting on at least 6 outstanding demand data read requests known to have missed the L3 cache. Note that this event does not capture all elapsed cycles while the requests are outstanding, only cycles from when the requests were known to have missed the L3.

Core snoop response events, topic "other". A single snoop response from the core counts on all hyperthreads of the core:

core_snoop_response.i_fwd_fe  (event=0xef,period=1000003,umask=0x20)
    Hit snoop reply with data, line invalidated: the line is removed from this core's caches after the data is forwarded back to the requestor; the data was found unmodified, in the Forward or Exclusive (FE) state, in this core's caches.
core_snoop_response.i_fwd_m  (event=0xef,period=1000003,umask=0x10)
    HitM snoop reply with data, line invalidated: the line is removed from this core's caches after the data is forwarded back to the requestor; the data was found Modified (M) in this core's caches (aka HitM response).
core_snoop_response.i_hit_fse  (event=0xef,period=1000003,umask=2)
    Hit snoop reply without sending data, line invalidated: the line is invalidated in this core's caches without being forwarded back to the requestor; it was in the Forward, Shared, or Exclusive (FSE) state.
core_snoop_response.miss  (event=0xef,period=1000003,umask=1)
    Line-not-found snoop reply (IHitI): the data was not found in this core's caches.
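The snoop-response breakdown above supports a simple forwarding-rate metric. A hedged sketch under our own assumptions (placeholder counts, and "hits" taken as all non-miss responses):

    # Fraction of core snoop hits that forwarded data, from core_snoop_response.*
    resp = {"i_fwd_fe": 120, "i_fwd_m": 80, "s_fwd_fe": 200, "s_fwd_m": 40,
            "i_hit_fse": 300, "s_hit_fse": 500, "miss": 9000}
    forwarded = resp["i_fwd_fe"] + resp["i_fwd_m"] + resp["s_fwd_fe"] + resp["s_fwd_m"]
    total_hits = forwarded + resp["i_hit_fse"] + resp["s_hit_fse"]
    print(f"{forwarded / total_hits:.1%} of snoop hits forwarded data")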
core_snoop_response.s_fwd_fe  (event=0xef,period=1000003,umask=0x40)
    Hit snoop reply with data, line kept in Shared state: the line may be kept in this core in the Shared (S) state after the data is forwarded back to the requestor; the data was initially in the Forward or Shared (FS) state.
core_snoop_response.s_fwd_m  (event=0xef,period=1000003,umask=8)
    HitM snoop reply with data, line kept in Shared state: as above, but the data was initially in the Modified (M) state.
core_snoop_response.s_hit_fse  (event=0xef,period=1000003,umask=4)
    Hit snoop reply without sending data, line kept in Shared state: the data was found unmodified, in the Forward, Shared, or Exclusive (FSE) state, and was not forwarded back to the requestor.

OCR supplier-breakdown events, topic "other". A second supplier table follows; several names repeat from the earlier block with different offcore_rsp encodings and DRAM/PMM/SNC distinctions. Same base encoding (event=0xb7,period=100003,umask=1); request classes as defined above. Supplier suffixes:

    .any_response      : any type of response
    .dram              : supplied by DRAM
    .local_dram        : supplied by DRAM attached to this socket, unless in sub-NUMA cluster (SNC) mode; in SNC mode, only DRAM accesses controlled by the close SNC cluster
    .local_socket_dram : supplied by DRAM attached to this socket, whether or not in SNC mode (close or distant SNC cluster)
    .remote_dram       : supplied by DRAM attached to another socket
    .snc_dram          : supplied by DRAM on a distant memory controller of this socket, in SNC mode
    .pmm, .local_pmm, .local_socket_pmm, .remote_pmm, .snc_pmm : the same breakdown for PMM (persistent memory)
    .remote            : not supplied by the local socket's L1/L2/L3 caches and supplied by a remote socket
    .remote_memory     : supplied by DRAM or PMM attached to another socket
    (for hwpf_l3 and itom, .remote means the line was homed in a remote socket)

    ocr.demand_code_rd.dram                offcore_rsp=0x73C000004
    ocr.demand_code_rd.local_dram          offcore_rsp=0x104000004
    ocr.demand_code_rd.snc_dram            offcore_rsp=0x708000004
    ocr.demand_data_rd.dram                offcore_rsp=0x73C000001
    ocr.demand_data_rd.local_dram          offcore_rsp=0x104000001
    ocr.demand_data_rd.local_pmm           offcore_rsp=0x100400001
    ocr.demand_data_rd.pmm                 offcore_rsp=0x703C00001
    ocr.demand_data_rd.remote_dram         offcore_rsp=0x730000001
    ocr.demand_data_rd.remote_pmm          offcore_rsp=0x703000001
    ocr.demand_data_rd.snc_dram            offcore_rsp=0x708000001
    ocr.demand_data_rd.snc_pmm             offcore_rsp=0x700800001
    ocr.demand_rfo.any_response            offcore_rsp=0x3F3FFC0002
    ocr.demand_rfo.dram                    offcore_rsp=0x73C000002
    ocr.demand_rfo.local_dram              offcore_rsp=0x104000002
    ocr.demand_rfo.local_pmm               offcore_rsp=0x100400002
    ocr.demand_rfo.pmm                     offcore_rsp=0x703C00002
    ocr.demand_rfo.remote_pmm              offcore_rsp=0x703000002
    ocr.demand_rfo.snc_dram                offcore_rsp=0x708000002
    ocr.demand_rfo.snc_pmm                 offcore_rsp=0x700800002
    ocr.hwpf_l1d_and_swpf.dram             offcore_rsp=0x73C000400
    ocr.hwpf_l1d_and_swpf.local_dram       offcore_rsp=0x104000400
    ocr.hwpf_l2.any_response               offcore_rsp=0x10070      (hardware prefetches that bring data to L2)
    ocr.hwpf_l3.any_response               offcore_rsp=0x12380
    ocr.hwpf_l3.remote                     offcore_rsp=0x90002380
    ocr.itom.remote                        offcore_rsp=0x90000002   (full cache-line writes, ItoM)
    ocr.reads_to_core.any_response         offcore_rsp=0x3F3FFC0477
    ocr.reads_to_core.dram                 offcore_rsp=0x73C000477
    ocr.reads_to_core.local_dram           offcore_rsp=0x104000477
    ocr.reads_to_core.local_pmm            offcore_rsp=0x100400477
    ocr.reads_to_core.local_socket_dram    offcore_rsp=0x70C000477
    ocr.reads_to_core.local_socket_pmm     offcore_rsp=0x700C00477
    ocr.reads_to_core.remote               offcore_rsp=0x3F33000477
    ocr.reads_to_core.remote_dram          offcore_rsp=0x730000477
    ocr.reads_to_core.remote_memory        offcore_rsp=0x731800477
    ocr.reads_to_core.remote_pmm           offcore_rsp=0x703000477
    ocr.reads_to_core.snc_dram             offcore_rsp=0x708000477
    ocr.reads_to_core.snc_pmm              offcore_rsp=0x700800477

ocr.write_estimate.memory  (event=0xb7,period=100003,umask=1,offcore_rsp=0xFBFF80822)
    Counts demand RFOs, ItoMs, PREFETCHWs, hardware RFO prefetches to the L1/L2, and streaming stores that likely resulted in a store to memory (DRAM or PMM).

br_misp_retired.indirect_call  (pipeline; event=0xc5,period=50021,umask=2; Precise event)
    Counts retired mispredicted indirect (near taken) calls, including both register and memory indirect.
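The reads_to_core supplier breakdown above supports a DRAM locality metric. A hedged sketch under our own assumptions (placeholder counts; SNC-distant DRAM is on-socket but not "close"):

    # DRAM locality of core reads, from the ocr.reads_to_core.* events above.
    local  = 800_000   # ocr.reads_to_core.local_dram   (close SNC cluster)
    snc    =  50_000   # ocr.reads_to_core.snc_dram     (distant, same socket)
    remote = 150_000   # ocr.reads_to_core.remote_dram  (other socket)
    total = local + snc + remote
    print(f"close {local/total:.1%}, SNC-distant {snc/total:.1%}, "
          f"remote {remote/total:.1%}")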
Uncore cache (CHA) events, per-package:

unc_cha_2lm_nm_invitox.local  (event=0x65,umask=1)
    Deprecated; refer to UNC_CHA_PMM_MEMMODE_NM_INVITOX.LOCAL.
unc_cha_2lm_nm_invitox.remote  (event=0x65,umask=2)
    Deprecated; refer to UNC_CHA_PMM_MEMMODE_NM_INVITOX.REMOTE.
unc_cha_2lm_nm_invitox.setconflict  (event=0x65,umask=4)
    Deprecated; refer to UNC_CHA_PMM_MEMMODE_NM_INVITOX.SETCONFLICT.
unc_cha_2lm_nm_setconflicts.llc  (event=0x64,umask=2)
    Deprecated; refer to UNC_CHA_PMM_MEMMODE_NM_SETCONFLICTS.LLC.
unc_cha_2lm_nm_setconflicts.sf  (event=0x64,umask=1)
    Deprecated; refer to UNC_CHA_PMM_MEMMODE_NM_SETCONFLICTS.SF.
unc_cha_2lm_nm_setconflicts.tor  (event=0x64,umask=4)
    Deprecated; refer to UNC_CHA_PMM_MEMMODE_NM_SETCONFLICTS.TOR.
unc_cha_2lm_nm_setconflicts2.memwr  (event=0x70,umask=2)
    Deprecated; refer to UNC_CHA_PMM_MEMMODE_NM_SETCONFLICTS2.MEMWR.
unc_cha_2lm_nm_setconflicts2.memwrni  (event=0x70,umask=4)
    Deprecated; refer to UNC_CHA_PMM_MEMMODE_NM_SETCONFLICTS2.MEMWRNI.

CMS credit events. unc_cha_ag<A>_<R>_crd_acquired<G>.tgr<N> counts the number of CMS Agent <A> <R>-ring credits acquired in a given cycle, per transgress <N>; unc_cha_ag<A>_<R>_crd_occupancy<G>.tgr<N> counts the number of such credits in use in a given cycle, per transgress. Register group 0 covers transgress 0 through 7 (umask = 1 << N); group 1 covers transgress 8 through 10 (umask = 1 << (N - 8), i.e. tgr8 umask=1, tgr9 umask=2, tgr10 umask=4). Event codes as listed here:

                        acquired (group 0 / 1)    occupancy (group 0 / 1)
    Agent 0, AD ring        0x80 / 0x81               0x82 / 0x83
    Agent 1, AD ring        0x84 / 0x85               0x86 / 0x87
    Agent 0, BL ring        0x88 / 0x89               0x8a / 0x8b
    Agent 1, BL ring        0x8c / 0x8d               0x8e / ...

(The listing here ends partway through the Agent 1 BL occupancy group 0 entries, at unc_cha_ag1_bl_crd_occupancy0.tgr5, event=0x8e, umask=0x20. Note that the umask encodings are authoritative where the prose labels disagree: the entries for unc_cha_ag1_bl_crd_acquired0.tgr6 and .tgr7 carry umask=0x40 and umask=0x80, i.e. transgress 6 and 7.)
Transgress 5 : Number of CMS Agent 1 BL credits in use in a given cycle, per transgressunc_cha_ag1_bl_crd_occupancy0.tgr6uncore cacheCMS Agent1 BL Credits Occupancy : For Transgress 6event=0x8e,umask=0x4001CMS Agent1 BL Credits Occupancy : For Transgress 6 : Number of CMS Agent 1 BL credits in use in a given cycle, per transgressunc_cha_ag1_bl_crd_occupancy0.tgr7uncore cacheCMS Agent1 BL Credits Occupancy : For Transgress 7event=0x8e,umask=0x8001CMS Agent1 BL Credits Occupancy : For Transgress 7 : Number of CMS Agent 1 BL credits in use in a given cycle, per transgressunc_cha_ag1_bl_crd_occupancy1.tgr10uncore cacheCMS Agent1 BL Credits Occupancy : For Transgress 10event=0x8f,umask=401CMS Agent1 BL Credits Occupancy : For Transgress 10 : Number of CMS Agent 1 BL credits in use in a given cycle, per transgressunc_cha_ag1_bl_crd_occupancy1.tgr8uncore cacheCMS Agent1 BL Credits Occupancy : For Transgress 8event=0x8f,umask=101CMS Agent1 BL Credits Occupancy : For Transgress 8 : Number of CMS Agent 1 BL credits in use in a given cycle, per transgressunc_cha_ag1_bl_crd_occupancy1.tgr9uncore cacheCMS Agent1 BL Credits Occupancy : For Transgress 9event=0x8f,umask=201CMS Agent1 BL Credits Occupancy : For Transgress 9 : Number of CMS Agent 1 BL credits in use in a given cycle, per transgressunc_cha_clockticksuncore cacheClockticks of the uncore caching and home agent (CHA)event=001unc_cha_counter0_occupancyuncore cacheCounter 0 Occupancyevent=0x1f01Counter 0 Occupancy : Since occupancy counts can only be captured in the Cbo's 0 counter, this event allows a user to capture occupancy related information by filtering the Cb0 occupancy count captured in Counter 0.   The filtering available is found in the control register - threshold, invert and edge detect.   E.g. setting threshold to 1 can effectively monitor how many cycles the monitored queue has an entryunc_cha_dir_lookup.no_snpuncore cacheMulti-socket cacheline directory state lookups : Snoop Not Neededevent=0x53,umask=201Multi-socket cacheline directory state lookups : Snoop Not Needed : Counts the number of transactions that looked up the directory.  Can be filtered by requests that had to snoop and those that did not have to. : Filters for transactions that did not have to send any snoops because the directory was cleanunc_cha_dir_lookup.snpuncore cacheMulti-socket cacheline directory state lookups : Snoop Neededevent=0x53,umask=101Multi-socket cacheline directory state lookups : Snoop Needed : Counts the number of transactions that looked up the directory.  Can be filtered by requests that had to snoop and those that did not have to. : Filters for transactions that had to send one or more snoops because the directory was not cleanunc_cha_dir_update.hauncore cacheMulti-socket cacheline directory state updates; memory write due to directory update from the home agent (HA) pipeevent=0x54,umask=101Counts only multi-socket cacheline directory state updates memory writes issued from the home agent (HA) pipe. This does not include memory write requests which are for I (Invalid) or E (Exclusive) cachelinesunc_cha_dir_update.toruncore cacheMulti-socket cacheline directory state updates; memory write due to directory update from (table of requests) TOR pipeevent=0x54,umask=201Counts only multi-socket cacheline directory state updates due to memory writes issued from the table of requests (TOR) pipe which are the result of remote transaction hitting the SF/LLC and returning data Core2Core. 
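Each entry above pairs an event select with a umask on a CHA uncore PMU. On Linux these PMUs appear as uncore_cha_<N> devices, and the authoritative bit layout of the raw config word is published under /sys/bus/event_source/devices/uncore_cha_0/format/ and should be checked rather than assumed. As a minimal sketch -- assuming the conventional Intel uncore layout (event in config[7:0], umask in config[15:8]) and that uncore_cha_0 exists on the machine -- unc_cha_dir_lookup.snp could be counted system-wide like this:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/perf_event.h>

/* Read a dynamic PMU's type id from sysfs, e.g.
 * /sys/bus/event_source/devices/uncore_cha_0/type (assumed to exist). */
static int pmu_type(const char *path)
{
    FILE *f = fopen(path, "r");
    int type = -1;
    if (f) { fscanf(f, "%d", &type); fclose(f); }
    return type;
}

int main(void)
{
    struct perf_event_attr attr;
    memset(&attr, 0, sizeof(attr));
    attr.size = sizeof(attr);
    attr.type = pmu_type("/sys/bus/event_source/devices/uncore_cha_0/type");
    /* Conventional uncore encoding: event in config[7:0], umask in
     * config[15:8] -- unc_cha_dir_lookup.snp = event 0x53, umask 0x1. */
    attr.config = 0x53 | (0x1ULL << 8);

    /* Uncore counters are per package, not per task: pid = -1, one CPU. */
    int fd = syscall(__NR_perf_event_open, &attr, -1, 0, -1, 0);
    if (fd < 0) { perror("perf_event_open"); return 1; }

    sleep(1);                          /* measure for one second */

    long long count = 0;
    read(fd, &count, sizeof(count));
    printf("directory lookups needing a snoop: %lld\n", count);
    close(fd);
    return 0;
}

Running this typically requires root (or a relaxed perf_event_paranoid setting), since uncore events are system-wide by nature.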
Distress signal asserted (uncore cache, event=0xaf) -- counts the number of cycles either the local or incoming distress signals are asserted:
  unc_cha_distress_asserted.vert             umask=0x01   Vertical : if IRQ egress is full, agents will throttle outgoing AD IDI transactions
  unc_cha_distress_asserted.horz             umask=0x02   Horizontal : if TGR egress is full, agents will throttle outgoing AD IDI transactions
  unc_cha_distress_asserted.dpt_local        umask=0x04   DPT Local : Dynamic Prefetch Throttle triggered by this tile
  unc_cha_distress_asserted.dpt_nonlocal     umask=0x08   DPT Remote : Dynamic Prefetch Throttle received by this tile
  unc_cha_distress_asserted.pmm_local        umask=0x10   PMM Local : if the CHA TOR has too many PMM transactions, this signal will throttle outgoing MS2IDI traffic
  unc_cha_distress_asserted.pmm_nonlocal     umask=0x20   PMM Remote : if another CHA TOR has too many PMM transactions, this signal will throttle outgoing MS2IDI traffic
  unc_cha_distress_asserted.dpt_stall_iv     umask=0x40   DPT Stalled - IV : DPT occurred while regular IVs were received, causing DPT to be stalled
  unc_cha_distress_asserted.dpt_stall_nocrd  umask=0x80   DPT Stalled - No Credit : DPT occurred while a credit was not available, causing DPT to be stalled

Counts Number of Hits in HitMe Cache (uncore cache, event=0x5f):
  unc_cha_hitme_hit.shared_ownreq   umask=0x04   Remote socket ownership read requests that hit in S state (shared hit and op is RdInvOwn, RdInv, Inv*)
  unc_cha_hitme_hit.wbmtoe          umask=0x08   Remote socket WbMtoE requests
  unc_cha_hitme_hit.wbmtoi_or_s     umask=0x10   Remote socket writeback to I or S requests (op is WbMtoI, WbPushMtoI, WbFlush, or WbMtoS)

Counts Number of times HitMe Cache is accessed (uncore cache, event=0x5e):
  unc_cha_hitme_lookup.read    umask=0x01   Remote socket read requests (op is RdCode, RdData, RdDataMigratory, RdCur, RdInvOwn, RdInv, Inv*)
  unc_cha_hitme_lookup.write   umask=0x02   Remote socket write (i.e. writeback) requests (op is WbMtoE, WbMtoI, WbPushMtoI, WbFlush, or WbMtoS)

Counts Number of Misses in HitMe Cache (uncore cache, event=0x60):
  unc_cha_hitme_miss.shared_rdinvown     umask=0x20   Remote socket RdInvOwn requests to a shared line (SF/LLC HitS/F and op is RdInvOwn)
  unc_cha_hitme_miss.notshared_rdinvown  umask=0x40   Remote socket RdInvOwn requests not to a shared line (no SF/LLC HitS/F and op is RdInvOwn)
  unc_cha_hitme_miss.read_or_inv         umask=0x80   Remote socket read or invalidate requests (op is RdCode, RdData, RdDataMigratory, RdCur, RdInv, Inv*)
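Because HitMe lookups, hits and misses are separate events, a hit rate has to be derived by counting two of them over the same interval. The sketch below reuses the layout assumptions from the earlier example and opens the two counters as one perf event group so they are read atomically (PERF_FORMAT_GROUP). The combined umasks (0x3 = .read|.write lookups, 0x1c = the three hit flavors) are ORs of the documented bits, not pre-defined aliases in this table -- treat them as an assumption:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/perf_event.h>

static int open_cha_event(unsigned type, unsigned long long config, int group_fd)
{
    struct perf_event_attr attr;
    memset(&attr, 0, sizeof(attr));
    attr.size = sizeof(attr);
    attr.type = type;                      /* uncore_cha_0 PMU type id */
    attr.config = config;                  /* event | (umask << 8), assumed layout */
    attr.read_format = PERF_FORMAT_GROUP;  /* read leader + siblings in one read() */
    return syscall(__NR_perf_event_open, &attr, -1, 0, group_fd, 0);
}

int main(void)
{
    unsigned type = 0;
    FILE *f = fopen("/sys/bus/event_source/devices/uncore_cha_0/type", "r");
    if (!f || fscanf(f, "%u", &type) != 1) return 1;
    fclose(f);

    /* Leader: unc_cha_hitme_lookup.read|write (event 0x5e, umask 0x1|0x2). */
    int lookups = open_cha_event(type, 0x5e | (0x3ULL << 8), -1);
    /* Sibling: all three unc_cha_hitme_hit umasks (event 0x5f, 0x4|0x8|0x10). */
    int hits = open_cha_event(type, 0x5f | (0x1cULL << 8), lookups);
    if (lookups < 0 || hits < 0) return 1;

    sleep(1);

    /* PERF_FORMAT_GROUP layout: buf[0]=nr, buf[1]=leader, buf[2]=sibling. */
    unsigned long long buf[3] = {0};
    if (read(lookups, buf, sizeof(buf)) <= 0) return 1;
    printf("HitMe hit rate: %.2f%%\n",
           buf[1] ? 100.0 * buf[2] / buf[1] : 0.0);
    return 0;
}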
Horizontal AD Ring In Use (uncore cache, event=0xb6):
  unc_cha_horz_ring_ad_in_use.left_even    umask=0x01   Left and Even
  unc_cha_horz_ring_ad_in_use.left_odd     umask=0x02   Left and Odd
  unc_cha_horz_ring_ad_in_use.right_even   umask=0x04   Right and Even
  unc_cha_horz_ring_ad_in_use.right_odd    umask=0x08   Right and Odd

Horizontal AKC Ring In Use (uncore cache, event=0xbb):
  unc_cha_horz_ring_akc_in_use.left_even   umask=0x01   Left and Even
  unc_cha_horz_ring_akc_in_use.left_odd    umask=0x02   Left and Odd
  unc_cha_horz_ring_akc_in_use.right_even  umask=0x04   Right and Even
  unc_cha_horz_ring_akc_in_use.right_odd   umask=0x08   Right and Odd

Horizontal AK Ring In Use (uncore cache, event=0xb7):
  unc_cha_horz_ring_ak_in_use.left_even    umask=0x01   Left and Even
  unc_cha_horz_ring_ak_in_use.left_odd     umask=0x02   Left and Odd
  unc_cha_horz_ring_ak_in_use.right_even   umask=0x04   Right and Even
  unc_cha_horz_ring_ak_in_use.right_odd    umask=0x08   Right and Odd

Horizontal BL Ring in Use (uncore cache, event=0xb8):
  unc_cha_horz_ring_bl_in_use.left_even    umask=0x01   Left and Even
  unc_cha_horz_ring_bl_in_use.left_odd     umask=0x02   Left and Odd
  unc_cha_horz_ring_bl_in_use.right_even   umask=0x04   Right and Even
  unc_cha_horz_ring_bl_in_use.right_odd    umask=0x08   Right and Odd

Each of the AD/AKC/AK/BL events above counts the number of cycles that the corresponding Horizontal ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. There are really two rings: a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring; on the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the second half are on the right side. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as Cbo 2 UP AD, because they are on opposite sides of the ring.

Horizontal IV Ring in Use (uncore cache, event=0xb9) -- counts the number of cycles that the Horizontal IV ring is being used at this ring stop, with the same passing-by/sunk/sent accounting as above. There is only one IV ring; to monitor the Even ring, select both UP_EVEN and DN_EVEN, and to monitor the Odd ring, select both UP_ODD and DN_ODD:
  unc_cha_horz_ring_iv_in_use.left    umask=0x01   Left
  unc_cha_horz_ring_iv_in_use.right   umask=0x04   Right

unc_cha_imc_writes_count.full (uncore cache)   event=0x5b,umask=0x01
  CHA to iMC Full Line Writes Issued : Full Line Non-ISOCH : counts when a normal (non-isochronous) full line write is issued from the CHA to any of the memory controller channels.
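The encodings in this table are literal perf alias strings of the form "event=0x..,umask=0x..". When the umask fits in eight bits they pack into a raw config word in the conventional way shown earlier; the wide cache-lookup masks that follow (e.g. umask=0x1bc1ff) do not, and need the bit ranges advertised in the PMU's sysfs format directory instead. A small hedged parser for the simple eight-bit case, assuming only the string syntax visible in this table:

#include <stdio.h>
#include <inttypes.h>

/* Parse an "event=0x..,umask=0x.." encoding string and pack it using the
 * classic Intel uncore layout: event -> config[7:0], umask -> config[15:8].
 * Wide umasks such as the 0x1bc1ff llc_lookup masks need the real bit
 * ranges from /sys/bus/event_source/devices/uncore_cha_0/format/umask;
 * this sketch deliberately rejects them. */
static int parse_encoding(const char *s, uint64_t *config)
{
    long long event = 0, umask = 0;
    /* %lli accepts the 0x prefix; a bare "event=0x34" (filter events
     * below carry no umask) matches one field and leaves umask at 0. */
    if (sscanf(s, "event=%lli,umask=%lli", &event, &umask) < 1)
        return -1;
    if (umask > 0xff)
        return -1;                 /* needs the sysfs format description */
    *config = (uint64_t)(event & 0xff) | ((uint64_t)umask << 8);
    return 0;
}

int main(void)
{
    uint64_t config;
    /* unc_cha_dir_lookup.no_snp from the table above */
    if (parse_encoding("event=0x53,umask=0x2", &config) == 0)
        printf("raw config: 0x%" PRIx64 "\n", config);   /* prints 0x253 */
    return 0;
}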
Cache Lookups (uncore cache, event=0x34) -- counts the number of times the LLC was accessed; this includes code, data, prefetches and hints coming from L2. The event has numerous filters available; note the non-standard filtering equation. It will count requests that look up the cache multiple times with multiple increments. One must ALWAYS select a state or states to match (umask bit 0 plus state bits; the "_f" filter variants below carry no umask of their own and require the state bits to be supplied in the umask field); otherwise the event will count nothing. CHAFilter0[24:21,17] bits correspond to [FMESI] state.
  unc_cha_llc_lookup.all_remote            umask=0x1e20ff   All transactions from Remote Agents
  unc_cha_llc_lookup.any_f                 (filter)         All Request Filter : any local or remote transaction to the LLC, including prefetch
  unc_cha_llc_lookup.code                  umask=0x1bd0ff   Deprecated
  unc_cha_llc_lookup.code_local            umask=0x19d0ff   Deprecated; use UNC_CHA_LLC_LOOKUP.CODE_READ_LOCAL
  unc_cha_llc_lookup.code_read             umask=0x1bd0ff   Code Reads
  unc_cha_llc_lookup.code_read_f           (filter)         CRd Request Filter : local or remote CRd transactions to the LLC, including CRd prefetch
  unc_cha_llc_lookup.code_read_local       umask=0x19d0ff   CRd Requests that come from the local socket (usually the core)
  unc_cha_llc_lookup.code_read_miss        umask=0x1bd001   Code Read Misses
  unc_cha_llc_lookup.code_read_remote      umask=0x1a10ff   CRd Requests that come from a Remote socket
  unc_cha_llc_lookup.code_remote           umask=0x1a10ff   Deprecated; use UNC_CHA_LLC_LOOKUP.CODE_READ_REMOTE
  unc_cha_llc_lookup.corepref_or_dmnd_local_f  (filter)     Local request Filter : any local transaction to the LLC, including prefetches from the Core
  unc_cha_llc_lookup.data_rd               umask=0x1bc1ff   Deprecated; use UNC_CHA_LLC_LOOKUP.DATA_READ
  unc_cha_llc_lookup.data_read             umask=0x1bc1ff   Data Read Request : read transactions
  unc_cha_llc_lookup.data_read_all         umask=0x1fc1ff   Deprecated
  unc_cha_llc_lookup.data_read_f           (filter)         Data Read Request Filter : read transactions
  unc_cha_llc_lookup.data_read_local       umask=0x19c1ff   Data Read Requests that come from the local socket (usually the core)
  unc_cha_llc_lookup.data_read_miss        umask=0x1bc101   Data Read Misses
  unc_cha_llc_lookup.data_read_remote      umask=0x1a01ff   Data Read Requests that come from a Remote socket
  unc_cha_llc_lookup.dmnd_read_local       umask=0x841ff    Deprecated; use UNC_CHA_LLC_LOOKUP.DATA_READ_LOCAL
  unc_cha_llc_lookup.e                     umask=0x20       E State : hit Exclusive state
  unc_cha_llc_lookup.f                     umask=0x80       F State : hit Forward state
  unc_cha_llc_lookup.flush_inv_local       umask=0x1844ff   Flush or Invalidate Requests that come from the local socket (usually the core)
  unc_cha_llc_lookup.flush_inv_remote      umask=0x1a04ff   Flush or Invalidate requests that come from a Remote socket
  unc_cha_llc_lookup.flush_or_inv_f        (filter)         Flush or Invalidate Filter
  unc_cha_llc_lookup.i                     umask=0x01       I State : miss
  unc_cha_llc_lookup.llcpref_local         umask=0x189dff   Prefetch requests to the LLC that come from the local socket (usually the core)
  unc_cha_llc_lookup.llcpref_local_f       (filter)         Local LLC prefetch requests (from LLC) Filter : any local LLC prefetch to the LLC
  unc_cha_llc_lookup.llc_pf_local          umask=0x189dff   Deprecated; use UNC_CHA_LLC_LOOKUP.LLCPREF_LOCAL
  unc_cha_llc_lookup.locally_homed_address umask=0xbdfff    Deprecated; use UNC_CHA_LLC_LOOKUP.LOC_HOM
  unc_cha_llc_lookup.local_f               (filter)         Transactions homed locally Filter : transaction whose address resides in the local MC
  unc_cha_llc_lookup.loc_hom               umask=0xbdfff    Transactions homed locally : transaction whose address resides in the local MC
  unc_cha_llc_lookup.m                     umask=0x40       M State : hit Modified state
  unc_cha_llc_lookup.miss_all              umask=0x1fe001   All Misses
  unc_cha_llc_lookup.other_req_f           (filter)         Write Request Filter : writeback transactions to the LLC; includes all write transactions, both Cacheable and UC
  unc_cha_llc_lookup.pref_or_dmnd_remote_f (filter)         Remote non-snoop request Filter : non-snoop transactions to the LLC from a remote agent
  unc_cha_llc_lookup.read                  umask=0x1bd9ff   Reads
  unc_cha_llc_lookup.read_local_loc_hom    umask=0x9d9ff    Locally Requested Reads that are Locally HOMed
  unc_cha_llc_lookup.read_local_rem_hom    umask=0x11d9ff   Locally Requested Reads that are Remotely HOMed
  unc_cha_llc_lookup.read_miss             umask=0x1bd901   Read Misses
  unc_cha_llc_lookup.read_miss_loc_hom     umask=0xbd901    Locally HOMed Read Misses
  unc_cha_llc_lookup.read_miss_rem_hom     umask=0x13d901   Remotely HOMed Read Misses
  unc_cha_llc_lookup.read_or_snoop_remote_miss_rem_hom  umask=0x161901  Remotely requested Read or Snoop Misses that are Remotely HOMed
  unc_cha_llc_lookup.read_remote_loc_hom   umask=0xa19ff    Remotely Requested Reads that are Locally HOMed
  unc_cha_llc_lookup.read_sf_hit           umask=0x1bd90e   Reads that Hit the Snoop Filter
  unc_cha_llc_lookup.remotely_homed_address  umask=0x15dfff Deprecated; use UNC_CHA_LLC_LOOKUP.REM_HOM
  unc_cha_llc_lookup.remote_f              (filter)         Transactions homed remotely Filter : transaction whose address resides in a remote MC
  unc_cha_llc_lookup.remote_snoop_f        (filter)         Remote snoop request Filter : snoop transactions to the LLC from a remote agent
  unc_cha_llc_lookup.rem_hom               umask=0x15dfff   Transactions homed remotely : transaction whose address resides in a remote MC
  unc_cha_llc_lookup.rfo_f                 (filter)         RFO Request Filter : local or remote RFO transactions to the LLC, including RFO prefetch
  unc_cha_llc_lookup.rfo_local             umask=0x19c8ff   RFO Requests that come from the local socket (usually the core)
  unc_cha_llc_lookup.rfo_miss              umask=0x1bc801   RFO Misses
  unc_cha_llc_lookup.rfo_pref_local        umask=0x888ff    Deprecated; use UNC_CHA_LLC_LOOKUP.RFO_LOCAL
  unc_cha_llc_lookup.rfo_remote            umask=0x1a08ff   RFO Requests that come from a Remote socket
  unc_cha_llc_lookup.s                     umask=0x10       S State : hit Shared state
  unc_cha_llc_lookup.sf_e                  umask=0x04       SnoopFilter - E State : SF hit Exclusive state
  unc_cha_llc_lookup.sf_h                  umask=0x08       SnoopFilter - H State : SF hit HitMe state
unc_cha_llc_victims.* (uncore cache) -- Lines Victimized, event=0x37.
Shared description: Counts the number of lines that were victimized on a fill. This can be filtered by the state that the line was in.

  unc_cha_llc_victims.all         event=0x37,umask=0xf      All Lines Victimized
  unc_cha_llc_victims.local_all   event=0x37,umask=0x200f   Local - All Lines
  unc_cha_llc_victims.local_e     event=0x37,umask=0x2002   Local - Lines in E State
  unc_cha_llc_victims.local_m     event=0x37,umask=0x2001   Local - Lines in M State
  unc_cha_llc_victims.local_s     event=0x37,umask=0x2004   Local - Lines in S State
  unc_cha_llc_victims.remote_all  event=0x37,umask=0x800f   Remote - All Lines
  unc_cha_llc_victims.remote_e    event=0x37,umask=0x8002   Remote - Lines in E State
  unc_cha_llc_victims.remote_m    event=0x37,umask=0x8001   Remote - Lines in M State
  unc_cha_llc_victims.remote_s    event=0x37,umask=0x8004   Remote - Lines in S State

unc_cha_misc_external.* (uncore cache) -- Miscellaneous Events (mostly from MS2IDI), event=0xe6.

  unc_cha_misc_external.mbe_inst0  event=0xe6,umask=1  Number of cycles MBE is high for MS2IDI0
  unc_cha_misc_external.mbe_inst1  event=0xe6,umask=2  Number of cycles MBE is high for MS2IDI1

unc_cha_pipe_reject.* (uncore cache) -- Pipe Rejects, event=0x42.
Shared description: More miscellaneous events in the Cbo. Only the subevents listed with a umask carry a distinct encoding in this table.

  unc_cha_pipe_reject.adegrcredit                 event=0x42
  unc_cha_pipe_reject.akegrcredit                 event=0x42
  unc_cha_pipe_reject.allrsfways_res              event=0x42
  unc_cha_pipe_reject.blegrcredit                 event=0x42
  unc_cha_pipe_reject.fsf_vicp                    event=0x42
  unc_cha_pipe_reject.gotrack_allowsnp            event=0x42,umask=4
  unc_cha_pipe_reject.gotrack_allwayrsv           event=0x42,umask=0x10
  unc_cha_pipe_reject.gotrack_pamatch             event=0x42,umask=2
  unc_cha_pipe_reject.gotrack_waymatch            event=0x42,umask=8
  unc_cha_pipe_reject.hacredit                    event=0x42
  unc_cha_pipe_reject.idx_inpipe                  event=0x42
  unc_cha_pipe_reject.ipq_setmatch_vicp           event=0x42
  unc_cha_pipe_reject.irq_pmm                     event=0x42,umask=0x20
  unc_cha_pipe_reject.irq_setmatch_vicp           event=0x42
  unc_cha_pipe_reject.ismq_setmatch_vicp          event=0x42
  unc_cha_pipe_reject.ivegrcredit                 event=0x42
  unc_cha_pipe_reject.llc_ways_res                event=0x42
  unc_cha_pipe_reject.notallowsnoop               event=0x42
  unc_cha_pipe_reject.one_fsf_vic                 event=0x42
  unc_cha_pipe_reject.one_rsp_con                 event=0x42
  unc_cha_pipe_reject.pmm_memmode_tormatch_multi  event=0x42
  unc_cha_pipe_reject.pmm_memmode_tor_match       event=0x42
  unc_cha_pipe_reject.prq_pmm                     event=0x42,umask=0x40
  unc_cha_pipe_reject.ptl_inpipe                  event=0x42,umask=0x80
  unc_cha_pipe_reject.rmw_setmatch                event=0x42,umask=1
  unc_cha_pipe_reject.rrq_setmatch_vicp           event=0x42
  unc_cha_pipe_reject.setmatchentrywsct           event=0x42
  unc_cha_pipe_reject.sf_ways_res                 event=0x42
  unc_cha_pipe_reject.topa_match                  event=0x42
  unc_cha_pipe_reject.torid_match_go_p            event=0x42
  unc_cha_pipe_reject.vn_ad_req                   event=0x42
  unc_cha_pipe_reject.vn_ad_rsp                   event=0x42
  unc_cha_pipe_reject.vn_bl_rsp                   event=0x42
  unc_cha_pipe_reject.way_match                   event=0x42
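The unc_cha_llc_victims umasks above visibly compose a line-state nibble with a local/remote select bit. The following sketch merely restates that observed pattern; it is inferred from the listed encodings, not an official field definition:

# Observed composition of the llc_victims umasks (inferred, not documented):
# low bits select the line state, high bits select the homing scope.
STATE = {"m": 0x1, "e": 0x2, "s": 0x4}        # the 0xf in *_all likely adds F state
SCOPE = {"local": 0x2000, "remote": 0x8000}

def victims_umask(scope, states):
    return SCOPE[scope] | sum(STATE[s] for s in states)

assert victims_umask("local", "m") == 0x2001    # unc_cha_llc_victims.local_m
assert victims_umask("remote", "e") == 0x8002   # unc_cha_llc_victims.remote_e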
unc_cha_pmm_memmode_nm_setconflicts.* (uncore cache) -- PMM Memory Mode related events, event=0x64.

  unc_cha_pmm_memmode_nm_setconflicts.llc  event=0x64,umask=2  Counts the number of times CHA saw an NM Set conflict in SF/LLC -- NM evictions due to another read to the same near memory set in the LLC
  unc_cha_pmm_memmode_nm_setconflicts.sf   event=0x64,umask=1  Counts the number of times CHA saw an NM Set conflict in SF/LLC -- NM evictions due to another read to the same near memory set in the SF
  unc_cha_pmm_memmode_nm_setconflicts.tor  event=0x64,umask=4  Counts the number of times CHA saw an NM Set conflict in TOR -- no Reject in the CHA due to a pending read to the same near memory set in the TOR

unc_cha_pmm_qos_occupancy.ddr_slow_fifo (uncore cache)  event=0x67,umask=1  Counts the number of SLOW TOR Requests inserted into ha_pmm_tor_req_fifo.

unc_cha_read_no_credits.* (uncore cache) -- CHA iMC CHNx READ Credits Empty, event=0x58.
Shared description: Counts the number of times when there are no credits available for sending reads from the CHA into the iMC. In order to send reads into the memory controller, the HA must first acquire a credit for the iMC's AD Ingress queue. Each subevent filters for one memory controller only.

  unc_cha_read_no_credits.mc6   event=0x58,umask=0x40  Memory controller 6
  unc_cha_read_no_credits.mc7   event=0x58,umask=0x80  Memory controller 7
  unc_cha_read_no_credits.mc8   event=0x58             Memory controller 8
  unc_cha_read_no_credits.mc9   event=0x58             Memory controller 9
  unc_cha_read_no_credits.mc10  event=0x58             Memory controller 10
  unc_cha_read_no_credits.mc11  event=0x58             Memory controller 11
  unc_cha_read_no_credits.mc12  event=0x58             Memory controller 12
  unc_cha_read_no_credits.mc13  event=0x58             Memory controller 13
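To sample a handful of these encodings, perf's uncore alias expansion (a PMU prefix such as uncore_cha normally expands to every CHA instance) makes a one-liner practical. A hedged sketch driving perf stat from Python with the PMM set-conflict encodings from the table above:

# Sketch: count the three NM set-conflict subevents system-wide for 1s.
# Assumes the perf tool is installed and that "uncore_cha" expands to all
# CHA PMU instances on this machine (the usual behavior for uncore PMUs).
import subprocess

events = ["event=0x64,umask=0x1",   # ...nm_setconflicts.sf
          "event=0x64,umask=0x2",   # ...nm_setconflicts.llc
          "event=0x64,umask=0x4"]   # ...nm_setconflicts.tor
cmd = ["perf", "stat", "-a"]
for enc in events:
    cmd += ["-e", f"uncore_cha/{enc}/"]
subprocess.run(cmd + ["--", "sleep", "1"], check=False)  # counts print to stderr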
unc_cha_requests.* (uncore cache) -- requests handled by the CHA's home agent, event=0x50.

  unc_cha_requests.invitoe         event=0x50,umask=0x30  Local INVITOE requests (exclusive ownership of a cache line without receiving data) that miss the SF/LLC, plus remote INVITOE requests sent to the CHA's home agent
  unc_cha_requests.invitoe_local   event=0x50,umask=0x10  Local INVITOE requests that miss the SF/LLC and are sent to the CHA's home agent
  unc_cha_requests.invitoe_remote  event=0x50,umask=0x20  Remote INVITOE requests sent to the CHA's home agent
  unc_cha_requests.reads           event=0x50,umask=3     Local read requests that miss the SF/LLC, plus remote read requests sent to the CHA's home agent. Reads include all read opcodes, including RFO (the Read for Ownership issued before a write)
  unc_cha_requests.reads_local     event=0x50,umask=1     Local read requests that miss the SF/LLC and are sent to the CHA's home agent
  unc_cha_requests.reads_remote    event=0x50,umask=2     Remote read requests sent to the CHA's home agent
  unc_cha_requests.writes          event=0x50,umask=0xc   Local write requests that miss the SF/LLC, plus remote write requests sent to the CHA's home agent. Writes include streaming, evictions, HitM (reads from another core to a Modified cacheline), etc.
  unc_cha_requests.writes_local    event=0x50,umask=4     Local write requests that miss the SF/LLC and are sent to the CHA's home agent
  unc_cha_requests.writes_remote   event=0x50,umask=8     Remote write requests sent to the CHA's home agent
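Because the read and write subevents split by requesting socket, a NUMA locality check falls straight out of the pairs above; a trivial sketch:

def remote_share(local_count, remote_count):
    """Fraction of CHA-homed requests issued by the other socket, e.g.
    remote_share(reads_local, reads_remote) using the events above."""
    total = local_count + remote_count
    return remote_count / total if total else 0.0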
unc_cha_ring_bounces_horz.* (uncore cache) -- Messages that bounced on the Horizontal Ring, event=0xac.
Shared description: Number of cycles incoming messages from the Horizontal ring were bounced, by ring type.

  unc_cha_ring_bounces_horz.ad  event=0xac,umask=1  AD
  unc_cha_ring_bounces_horz.ak  event=0xac,umask=2  AK
  unc_cha_ring_bounces_horz.bl  event=0xac,umask=4  BL
  unc_cha_ring_bounces_horz.iv  event=0xac,umask=8  IV

unc_cha_ring_bounces_vert.* (uncore cache) -- Messages that bounced on the Vertical Ring, event=0xaa.
Shared description: Number of cycles incoming messages from the Vertical ring were bounced, by ring type.

  unc_cha_ring_bounces_vert.ad   event=0xaa,umask=1     AD
  unc_cha_ring_bounces_vert.ak   event=0xaa,umask=2     AK (Acknowledgements to core)
  unc_cha_ring_bounces_vert.akc  event=0xaa,umask=0x10  AKC
  unc_cha_ring_bounces_vert.bl   event=0xaa,umask=4     BL (Data Responses to core)
  unc_cha_ring_bounces_vert.iv   event=0xaa,umask=8     IV (Snoops of processor's cache)

unc_cha_ring_sink_starved_horz.* (uncore cache) -- Sink Starvation on Horizontal Ring, event=0xad.

  unc_cha_ring_sink_starved_horz.ad      event=0xad,umask=1     AD
  unc_cha_ring_sink_starved_horz.ak      event=0xad,umask=2     AK
  unc_cha_ring_sink_starved_horz.ak_ag1  event=0xad,umask=0x20  Acknowledgements to Agent 1
  unc_cha_ring_sink_starved_horz.bl      event=0xad,umask=4     BL
  unc_cha_ring_sink_starved_horz.iv      event=0xad,umask=8     IV

unc_cha_ring_sink_starved_vert.* (uncore cache) -- Sink Starvation on Vertical Ring, event=0xab.

  unc_cha_ring_sink_starved_vert.ad   event=0xab,umask=1     AD
  unc_cha_ring_sink_starved_vert.ak   event=0xab,umask=2     AK (Acknowledgements to core)
  unc_cha_ring_sink_starved_vert.akc  event=0xab,umask=0x10  AKC
  unc_cha_ring_sink_starved_vert.bl   event=0xab,umask=4     BL (Data Responses to core)
  unc_cha_ring_sink_starved_vert.iv   event=0xab,umask=8     IV (Snoops of processor's cache)

unc_cha_ring_src_thrtl (uncore cache)  event=0xae  Source Throttle.

unc_cha_rxc_occupancy.irq (uncore cache)  event=0x11,umask=1  Ingress (from CMS) Occupancy : IRQ -- counts the number of entries in the specified Ingress queue in each cycle.
unc_cha_rxr_busy_starved.* (uncore cache) -- Transgress Injection Starvation, event=0xe5.
Shared description: Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time; in this case, because a message from the other queue has higher priority. "All" means Credited + Uncredited.

  unc_cha_rxr_busy_starved.ad_all    event=0xe5,umask=0x11  AD - All
  unc_cha_rxr_busy_starved.ad_crd    event=0xe5,umask=0x10  AD - Credited
  unc_cha_rxr_busy_starved.ad_uncrd  event=0xe5,umask=1     AD - Uncredited
  unc_cha_rxr_busy_starved.bl_all    event=0xe5,umask=0x44  BL - All
  unc_cha_rxr_busy_starved.bl_crd    event=0xe5,umask=0x40  BL - Credited
  unc_cha_rxr_busy_starved.bl_uncrd  event=0xe5,umask=4     BL - Uncredited

unc_cha_rxr_bypass.* (uncore cache) -- Transgress Ingress Bypass, event=0xe2.
Shared description: Number of packets bypassing the CMS Ingress. "All" means Credited + Uncredited.

  unc_cha_rxr_bypass.ad_all     event=0xe2,umask=0x11  AD - All
  unc_cha_rxr_bypass.ad_crd     event=0xe2,umask=0x10  AD - Credited
  unc_cha_rxr_bypass.ad_uncrd   event=0xe2,umask=1     AD - Uncredited
  unc_cha_rxr_bypass.ak         event=0xe2,umask=2     AK
  unc_cha_rxr_bypass.akc_uncrd  event=0xe2,umask=0x80  AKC - Uncredited
  unc_cha_rxr_bypass.bl_all     event=0xe2,umask=0x44  BL - All
  unc_cha_rxr_bypass.bl_crd     event=0xe2,umask=0x40  BL - Credited
  unc_cha_rxr_bypass.bl_uncrd   event=0xe2,umask=4     BL - Uncredited
  unc_cha_rxr_bypass.iv         event=0xe2,umask=8     IV

unc_cha_rxr_crd_starved.* (uncore cache) -- Transgress Injection Starvation, event=0xe3.
Shared description: Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time; in this case, the Ingress is unable to forward to the Egress due to a lack of credit. "All" means Credited + Uncredited.

  unc_cha_rxr_crd_starved.ad_all    event=0xe3,umask=0x11  AD - All
  unc_cha_rxr_crd_starved.ad_crd    event=0xe3,umask=0x10  AD - Credited
  unc_cha_rxr_crd_starved.ad_uncrd  event=0xe3,umask=1     AD - Uncredited
  unc_cha_rxr_crd_starved.ak        event=0xe3,umask=2     AK
  unc_cha_rxr_crd_starved.bl_all    event=0xe3,umask=0x44  BL - All
  unc_cha_rxr_crd_starved.bl_crd    event=0xe3,umask=0x40  BL - Credited
  unc_cha_rxr_crd_starved.bl_uncrd  event=0xe3,umask=4     BL - Uncredited
  unc_cha_rxr_crd_starved.ifv       event=0xe3,umask=0x80  IFV - Credited
  unc_cha_rxr_crd_starved.iv        event=0xe3,umask=8     IV

unc_cha_rxr_crd_starved_1 (uncore cache)  event=0xe4  Transgress Injection Starvation -- same starvation condition as above: the Ingress is unable to forward to the Egress due to a lack of credit.
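The "All == Credited + Uncredited" note is literal umask algebra on events 0xe1/0xe2/0xe3/0xe5 in this listing (rxr_occupancy.bl_crd below is the one listed exception), which the following check makes explicit:

# AD/BL subevent umasks: "all" is the OR of the credited and uncredited bits.
AD_CRD, AD_UNCRD, AD_ALL = 0x10, 0x01, 0x11
BL_CRD, BL_UNCRD, BL_ALL = 0x40, 0x04, 0x44
assert AD_ALL == AD_CRD | AD_UNCRD
assert BL_ALL == BL_CRD | BL_UNCRD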
unc_cha_rxr_inserts.* (uncore cache) -- Transgress Ingress Allocations, event=0xe1.
Shared description: Number of allocations into the CMS Ingress. The Ingress is used to queue up requests received from the mesh. "All" means Credited + Uncredited.

  unc_cha_rxr_inserts.ad_all     event=0xe1,umask=0x11  AD - All
  unc_cha_rxr_inserts.ad_crd     event=0xe1,umask=0x10  AD - Credited
  unc_cha_rxr_inserts.ad_uncrd   event=0xe1,umask=1     AD - Uncredited
  unc_cha_rxr_inserts.ak         event=0xe1,umask=2     AK
  unc_cha_rxr_inserts.akc_uncrd  event=0xe1,umask=0x80  AKC - Uncredited
  unc_cha_rxr_inserts.bl_all     event=0xe1,umask=0x44  BL - All
  unc_cha_rxr_inserts.bl_crd     event=0xe1,umask=0x40  BL - Credited
  unc_cha_rxr_inserts.bl_uncrd   event=0xe1,umask=4     BL - Uncredited
  unc_cha_rxr_inserts.iv         event=0xe1,umask=8     IV

unc_cha_rxr_occupancy.* (uncore cache) -- Transgress Ingress Occupancy, event=0xe0.
Shared description: Occupancy event for the Ingress buffers in the CMS. The Ingress is used to queue up requests received from the mesh. "All" means Credited + Uncredited.

  unc_cha_rxr_occupancy.ad_all     event=0xe0,umask=0x11  AD - All
  unc_cha_rxr_occupancy.ad_crd     event=0xe0,umask=0x10  AD - Credited
  unc_cha_rxr_occupancy.ad_uncrd   event=0xe0,umask=1     AD - Uncredited
  unc_cha_rxr_occupancy.ak         event=0xe0,umask=2     AK
  unc_cha_rxr_occupancy.akc_uncrd  event=0xe0,umask=0x80  AKC - Uncredited
  unc_cha_rxr_occupancy.bl_all     event=0xe0,umask=0x44  BL - All
  unc_cha_rxr_occupancy.bl_crd     event=0xe0,umask=0x20  BL - Credited (0x20 as listed, unlike the 0x40 used by the sibling events)
  unc_cha_rxr_occupancy.bl_uncrd   event=0xe0,umask=4     BL - Uncredited
  unc_cha_rxr_occupancy.iv         event=0xe0,umask=8     IV

unc_cha_sf_eviction.* (uncore cache) -- Snoop filter capacity evictions, event=0x3d.
Shared description: Counts snoop filter capacity evictions for entries tracking lines in the cores' caches. Snoop filter capacity evictions occur when the snoop filter is full and evicts an existing entry to track a new entry. Does not count clean evictions such as when a core's cache replaces a tracked cacheline with a new cacheline.

  unc_cha_sf_eviction.e_state  event=0x3d,umask=2  E-state (exclusive) entries
  unc_cha_sf_eviction.m_state  event=0x3d,umask=1  M-state (modified) entries
  unc_cha_sf_eviction.s_state  event=0x3d,umask=4  S-state (shared) entries
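Paired, the inserts and occupancy events support the usual Little's-law reading of a queue: occupancy accumulates entries per cycle while inserts counts allocations, so their ratio approximates average residency. This is a standard analysis idiom, not something the event list itself prescribes; a minimal sketch:

def avg_residency_cycles(occupancy, inserts):
    """Average CMS Ingress residency in cycles via Little's law, e.g.
    unc_cha_rxr_occupancy.ad_all / unc_cha_rxr_inserts.ad_all."""
    return occupancy / inserts if inserts else float("nan")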
unc_cha_snoops_sent.* (uncore cache) -- Snoops Sent, event=0x51.
Shared description: Counts the number of snoops issued by the HA.

  unc_cha_snoops_sent.bcst_local     event=0x51,umask=0x10  Broadcast snoops issued by the HA responding to local requests
  unc_cha_snoops_sent.bcst_remote    event=0x51,umask=0x20  Broadcast snoops issued by the HA responding to remote requests
  unc_cha_snoops_sent.direct_local   event=0x51,umask=0x40  Directed snoops issued by the HA responding to local requests
  unc_cha_snoops_sent.direct_remote  event=0x51,umask=0x80  Directed snoops issued by the HA responding to remote requests
  unc_cha_snoops_sent.local          event=0x51,umask=4     Broadcast or directed snoops issued by the HA responding to local requests
  unc_cha_snoops_sent.remote         event=0x51,umask=8     Broadcast or directed snoops issued by the HA responding to remote requests

unc_cha_snoop_resp.* (uncore cache) -- Snoop Responses Received, event=0x5c.

  unc_cha_snoop_resp.rspi     event=0x5c,umask=1  RspI -- counts when a transaction with the opcode type RspI Snoop Response was received, which indicates the remote cache does not have the data, or silently evicted the data (such as when an RFO, the Read for Ownership issued before a write, hits non-modified data).
  unc_cha_snoop_resp.rspifwd  event=0x5c,umask=4  RspIFwd -- counts when a transaction with the opcode type RspIFwd Snoop Response was received, which indicates a remote caching agent forwarded the data and the requesting agent is able to acquire the data in E (Exclusive) or M (Modified) state. This is commonly returned with RFO transactions. The snoop could have been to a cacheline in the M, E or F (Modified, Exclusive or Forward) state.
  unc_cha_snoop_resp.rsps     event=0x5c,umask=2  RspS -- counts when a transaction with the opcode type RspS Snoop Response was received, which indicates the remote cache has the data but is not forwarding it. It is a way to let the requesting socket know that it cannot allocate the data in E state. No data is sent with RspS.
  unc_cha_snoop_resp.rspsfwd  event=0x5c,umask=8  RspSFwd -- counts when a transaction with the opcode type RspSFwd Snoop Response was received, which indicates a remote caching agent forwarded the data but held on to its current copy. This is common for data and code reads that hit in a remote socket in E (Exclusive) or F (Forward) state.

unc_cha_snoop_resp_local.rspsfwd (uncore cache)  event=0x5d,umask=8  Snoop Responses Received Local : RspSFwd -- number of snoop responses received for a local request, filtered for RspSFwd to local CA requests. Returned when a remote caching agent forwards data but holds on to its current copy; common for data and code reads that hit in a remote socket in E or F state.
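One common use of the four snoop-response counts is a cross-socket forwarding ratio; a small sketch using the event names above:

def forward_ratio(rspi, rsps, rspifwd, rspsfwd):
    """Share of received snoop responses that forwarded data
    (RspIFwd + RspSFwd over all four response types above)."""
    total = rspi + rsps + rspifwd + rspsfwd
    return (rspifwd + rspsfwd) / total if total else 0.0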
unc_cha_stall0_no_txr_horz_crd_ad_ag0.* (uncore cache) -- Stall on No AD Agent0 Transgress Credits, event=0xd0.
Shared description: Number of cycles the AD Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.

  unc_cha_stall0_no_txr_horz_crd_ad_ag0.tgr0  event=0xd0,umask=1     For Transgress 0
  unc_cha_stall0_no_txr_horz_crd_ad_ag0.tgr1  event=0xd0,umask=2     For Transgress 1
  unc_cha_stall0_no_txr_horz_crd_ad_ag0.tgr2  event=0xd0,umask=4     For Transgress 2
  unc_cha_stall0_no_txr_horz_crd_ad_ag0.tgr3  event=0xd0,umask=8     For Transgress 3
  unc_cha_stall0_no_txr_horz_crd_ad_ag0.tgr4  event=0xd0,umask=0x10  For Transgress 4
  unc_cha_stall0_no_txr_horz_crd_ad_ag0.tgr5  event=0xd0,umask=0x20  For Transgress 5
  unc_cha_stall0_no_txr_horz_crd_ad_ag0.tgr6  event=0xd0,umask=0x40  For Transgress 6
  unc_cha_stall0_no_txr_horz_crd_ad_ag0.tgr7  event=0xd0,umask=0x80  For Transgress 7

unc_cha_stall0_no_txr_horz_crd_ad_ag1.tgr0-tgr7  event=0xd2  Stall on No AD Agent1 Transgress Credits; same per-transgress layout (umask = 1, 2, 4, 8, 0x10, 0x20, 0x40, 0x80 for Transgress 0 through 7). Number of cycles the AD Agent 1 Egress Buffer is stalled waiting for a TGR credit, per transgress.

unc_cha_stall0_no_txr_horz_crd_bl_ag0.tgr0-tgr7  event=0xd4  Stall on No BL Agent0 Transgress Credits; same per-transgress layout. Number of cycles the BL Agent 0 Egress Buffer is stalled waiting for a TGR credit, per transgress.

unc_cha_stall0_no_txr_horz_crd_bl_ag1.tgr0-tgr7  event=0xd6  Stall on No BL Agent1 Transgress Credits; same per-transgress layout. Number of cycles the BL Agent 1 Egress Buffer is stalled waiting for a TGR credit, per transgress.
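Across these families the per-transgress encoding is umask = 1 << n for tgr0-tgr7, with tgr8-tgr10 moving to the companion stall1 events listed next (0xd1/0xd3/0xd5/0xd7) at umask = 1 << (n - 8); a sketch of that selection rule:

def tgr_encoding(stall0_event, stall1_event, n):
    """Encoding for 'stall on no transgress credit', transgress n."""
    if n < 8:
        return {"event": stall0_event, "umask": 1 << n}
    return {"event": stall1_event, "umask": 1 << (n - 8)}

assert tgr_encoding(0xd0, 0xd1, 4) == {"event": 0xd0, "umask": 0x10}   # ..ad_ag0.tgr4
assert tgr_encoding(0xd0, 0xd1, 10) == {"event": 0xd1, "umask": 0x4}   # stall1 ..ad_ag0.tgr10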
unc_cha_stall1_no_txr_horz_crd_ad_ag0.* (uncore cache) -- Stall on No AD Agent0 Transgress Credits (Transgress 8-10), event=0xd1.
Shared description: Number of cycles the AD Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.

  unc_cha_stall1_no_txr_horz_crd_ad_ag0.tgr8   event=0xd1,umask=1  For Transgress 8
  unc_cha_stall1_no_txr_horz_crd_ad_ag0.tgr9   event=0xd1,umask=2  For Transgress 9
  unc_cha_stall1_no_txr_horz_crd_ad_ag0.tgr10  event=0xd1,umask=4  For Transgress 10

unc_cha_stall1_no_txr_horz_crd_ad_ag1_1.tgr8-tgr10  event=0xd3  Same layout (umask = 1, 2, 4) for the AD Agent 1 Egress Buffer.

unc_cha_stall1_no_txr_horz_crd_bl_ag0_1.tgr8-tgr10  event=0xd5  Same layout for the BL Agent 0 Egress Buffer.

unc_cha_stall1_no_txr_horz_crd_bl_ag1_1.tgr8-tgr10  event=0xd7  Same layout for the BL Agent 1 Egress Buffer.
unc_cha_tor_inserts.* (uncore cache) -- TOR Inserts, event=0x35.
Shared description: Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.

  unc_cha_tor_inserts.all                      event=0x35,umask=0xc001ffff  All
  unc_cha_tor_inserts.ddr                      event=0x35                   DDR4 Access
  unc_cha_tor_inserts.ddr4                     event=0x35                   Deprecated; refer to new event UNC_CHA_TOR_INSERTS.DDR
  unc_cha_tor_inserts.evict                    event=0x35,umask=2           SF/LLC Evictions -- TOR allocation occurred as a result of SF/LLC evictions (came from the ISMQ)
  unc_cha_tor_inserts.hit                      event=0x35                   Just Hits
  unc_cha_tor_inserts.ia                       event=0x35,umask=0xc001ff01  All requests from iA Cores
  unc_cha_tor_inserts.ia_clflush               event=0x35,umask=0xc8c7ff01  CLFlushes issued by iA Cores
  unc_cha_tor_inserts.ia_clflushopt            event=0x35,umask=0xc8d7ff01  CLFlushOpts issued by iA Cores
  unc_cha_tor_inserts.ia_crd                   event=0x35,umask=0xc80fff01  CRds issued by iA Cores
  unc_cha_tor_inserts.ia_drd                   event=0x35,umask=0xc817ff01  DRds issued by iA Cores
  unc_cha_tor_inserts.ia_drd_opt               event=0x35,umask=0xc827ff01  DRd_Opts issued by iA Cores
  unc_cha_tor_inserts.ia_drd_opt_pref          event=0x35,umask=0xc8a7ff01  DRd_Opt_Prefs issued by iA Cores
  unc_cha_tor_inserts.ia_drd_pref              event=0x35,umask=0xc897ff01  DRd_Prefs issued by iA Cores
  unc_cha_tor_inserts.ia_hit                   event=0x35,umask=0xc001fd01  All requests from iA Cores that Hit the LLC
  unc_cha_tor_inserts.ia_hit_crd               event=0x35,umask=0xc80ffd01  CRds issued by iA Cores that Hit the LLC
  unc_cha_tor_inserts.ia_hit_crd_pref          event=0x35,umask=0xc88ffd01  CRd_Prefs issued by iA Cores that Hit the LLC
  unc_cha_tor_inserts.ia_hit_drd               event=0x35,umask=0xc817fd01  DRds issued by iA Cores that Hit the LLC
  unc_cha_tor_inserts.ia_hit_drd_opt           event=0x35,umask=0xc827fd01  DRd_Opts issued by iA Cores that Hit the LLC
  unc_cha_tor_inserts.ia_hit_drd_opt_pref      event=0x35,umask=0xc8a7fd01  DRd_Opt_Prefs issued by iA Cores that Hit the LLC
  unc_cha_tor_inserts.ia_hit_drd_pref          event=0x35,umask=0xc897fd01  DRd_Prefs issued by iA Cores that Hit the LLC
  unc_cha_tor_inserts.ia_hit_itom              event=0x35,umask=0xcc47fd01  ItoMs issued by iA Cores that Hit the LLC
  unc_cha_tor_inserts.ia_hit_llcprefcode       event=0x35,umask=0xcccffd01  LLCPrefCode issued by iA Cores that Hit the LLC
  unc_cha_tor_inserts.ia_hit_llcprefcrd        event=0x35,umask=0xcccffd01  Deprecated; refer to new event UNC_CHA_TOR_INSERTS.IA_HIT_LLCPREFCODE
  unc_cha_tor_inserts.ia_hit_llcprefdata       event=0x35,umask=0xccd7fd01  LLCPrefData issued by iA Cores that Hit the LLC
  unc_cha_tor_inserts.ia_hit_llcprefdrd        event=0x35,umask=0xccd7fd01  Deprecated; refer to new event UNC_CHA_TOR_INSERTS.IA_HIT_LLCPREFDATA
  unc_cha_tor_inserts.ia_hit_llcprefrfo        event=0x35,umask=0xccc7fd01  LLCPrefRFO issued by iA Cores that Hit the LLC
  unc_cha_tor_inserts.ia_hit_rfo               event=0x35,umask=0xc807fd01  RFOs issued by iA Cores that Hit the LLC
  unc_cha_tor_inserts.ia_hit_rfo_pref          event=0x35,umask=0xc887fd01  RFO_Prefs issued by iA Cores that Hit the LLC
  unc_cha_tor_inserts.ia_hit_specitom          event=0x35,umask=0xcc57fd01  SpecItoMs issued by iA Cores that Hit the LLC
  unc_cha_tor_inserts.ia_itom                  event=0x35,umask=0xcc47ff01  ItoMs issued by iA Cores
  unc_cha_tor_inserts.ia_itomcachenear         event=0x35,umask=0xcd47ff01  ItoMCacheNears issued by iA Cores
  unc_cha_tor_inserts.ia_llcprefcode           event=0x35,umask=0xcccfff01  LLCPrefCode issued by iA Cores
  unc_cha_tor_inserts.ia_llcprefdata           event=0x35,umask=0xccd7ff01  LLCPrefData issued by iA Cores
  unc_cha_tor_inserts.ia_llcprefrfo            event=0x35,umask=0xccc7ff01  LLCPrefRFO issued by iA Cores
  unc_cha_tor_inserts.ia_miss                  event=0x35,umask=0xc001fe01  All requests from iA Cores that Missed the LLC
  unc_cha_tor_inserts.ia_miss_crd              event=0x35,umask=0xc80ffe01  CRds issued by iA Cores that Missed the LLC
  unc_cha_tor_inserts.ia_miss_crd_local        event=0x35,umask=0xc80efe01  CRds issued by iA Cores that Missed the LLC - HOMed locally
  unc_cha_tor_inserts.ia_miss_crd_pref         event=0x35,umask=0xc88ffe01  CRd_Prefs issued by iA Cores that Missed the LLC
  unc_cha_tor_inserts.ia_miss_crd_pref_local   event=0x35,umask=0xc88efe01  CRd_Prefs issued by iA Cores that Missed the LLC - HOMed locally
  unc_cha_tor_inserts.ia_miss_crd_pref_remote  event=0x35,umask=0xc88f7e01  CRd_Prefs issued by iA Cores that Missed the LLC - HOMed remotely

Shared description as above: Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.
Does not include addressless requests such as locks and interruptsunc_cha_tor_inserts.ia_miss_crd_remoteuncore cacheTOR Inserts : CRd issued by iA Cores that Missed the LLC - HOMed remotelyevent=0x35,umask=0xc80f7e0101TOR Inserts : CRd issued by iA Cores that Missed the LLC - HOMed remotely : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.   Does not include addressless requests such as locks and interruptsunc_cha_tor_inserts.ia_miss_drduncore cacheTOR Inserts : DRds issued by iA Cores that Missed the LLCevent=0x35,umask=0xc817fe0101TOR Inserts : DRds issued by iA Cores that Missed the LLC : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.   Does not include addressless requests such as locks and interruptsunc_cha_tor_inserts.ia_miss_drd_ddruncore cacheTOR Inserts : DRds issued by iA Cores targeting DDR Mem that Missed the LLCevent=0x35,umask=0xc817860101TOR Inserts : DRds issued by iA Cores targeting DDR Mem that Missed the LLC : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.   Does not include addressless requests such as locks and interruptsunc_cha_tor_inserts.ia_miss_drd_localuncore cacheTOR Inserts : DRds issued by iA Cores that Missed the LLC - HOMed locallyevent=0x35,umask=0xc816fe0101TOR Inserts : DRds issued by iA Cores that Missed the LLC - HOMed locally : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.   Does not include addressless requests such as locks and interruptsunc_cha_tor_inserts.ia_miss_drd_local_ddruncore cacheTOR Inserts : DRds issued by iA Cores targeting DDR Mem that Missed the LLC - HOMed locallyevent=0x35,umask=0xc816860101TOR Inserts : DRds issued by iA Cores targeting DDR Mem that Missed the LLC - HOMed locally : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.   Does not include addressless requests such as locks and interruptsunc_cha_tor_inserts.ia_miss_drd_local_pmmuncore cacheTOR Inserts : DRds issued by iA Cores targeting PMM Mem that Missed the LLC - HOMed locallyevent=0x35,umask=0xc8168a0101TOR Inserts : DRds issued by iA Cores targeting PMM Mem that Missed the LLC - HOMed locally : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.   Does not include addressless requests such as locks and interruptsunc_cha_tor_inserts.ia_miss_drd_optuncore cacheTOR Inserts : DRd_Opt issued by iA Cores that missed the LLCevent=0x35,umask=0xc827fe0101TOR Inserts : DRd_Opt issued by iA Cores that missed the LLC : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.   Does not include addressless requests such as locks and interruptsunc_cha_tor_inserts.ia_miss_drd_opt_prefuncore cacheTOR Inserts : DRd_Opt_Prefs issued by iA Cores that missed the LLCevent=0x35,umask=0xc8a7fe0101TOR Inserts : DRd_Opt_Prefs issued by iA Cores that missed the LLC : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.   
Does not include addressless requests such as locks and interruptsunc_cha_tor_inserts.ia_miss_drd_pmmuncore cacheTOR Inserts : DRds issued by iA Cores targeting PMM Mem that Missed the LLCevent=0x35,umask=0xc8178a0101TOR Inserts : DRds issued by iA Cores targeting PMM Mem that Missed the LLC : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.   Does not include addressless requests such as locks and interruptsunc_cha_tor_inserts.ia_miss_drd_prefuncore cacheTOR Inserts : DRd_Prefs issued by iA Cores that Missed the LLCevent=0x35,umask=0xc897fe0101TOR Inserts : DRd_Prefs issued by iA Cores that Missed the LLC : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.   Does not include addressless requests such as locks and interruptsunc_cha_tor_inserts.ia_miss_drd_pref_ddruncore cacheTOR Inserts : DRd_Prefs issued by iA Cores targeting DDR Mem that Missed the LLCevent=0x35,umask=0xc897860101TOR Inserts : DRd_Prefs issued by iA Cores targeting DDR Mem that Missed the LLC : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.   Does not include addressless requests such as locks and interruptsunc_cha_tor_inserts.ia_miss_drd_pref_localuncore cacheTOR Inserts; DRd Pref misses from local IAevent=0x35,umask=0xc896fe0101TOR Inserts; Data read prefetch from local IA that misses in the snoop filterunc_cha_tor_inserts.ia_miss_drd_pref_local_ddruncore cacheTOR Inserts : DRd_Prefs issued by iA Cores targeting DDR Mem that Missed the LLC - HOMed locallyevent=0x35,umask=0xc896860101TOR Inserts : DRd_Prefs issued by iA Cores targeting DDR Mem that Missed the LLC - HOMed locally : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.   Does not include addressless requests such as locks and interruptsunc_cha_tor_inserts.ia_miss_drd_pref_local_pmmuncore cacheTOR Inserts : DRd_Prefs issued by iA Cores targeting PMM Mem that Missed the LLC - HOMed locallyevent=0x35,umask=0xc8968a0101TOR Inserts : DRd_Prefs issued by iA Cores targeting PMM Mem that Missed the LLC - HOMed locally : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.   Does not include addressless requests such as locks and interruptsunc_cha_tor_inserts.ia_miss_drd_pref_pmmuncore cacheTOR Inserts : DRd_Prefs issued by iA Cores targeting PMM Mem that Missed the LLCevent=0x35,umask=0xc8978a0101TOR Inserts : DRd_Prefs issued by iA Cores targeting PMM Mem that Missed the LLC : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.   Does not include addressless requests such as locks and interruptsunc_cha_tor_inserts.ia_miss_drd_pref_remoteuncore cacheTOR Inserts; DRd Pref misses from local IAevent=0x35,umask=0xc8977e0101TOR Inserts; Data read prefetch from remote IA that misses in the snoop filterunc_cha_tor_inserts.ia_miss_drd_pref_remote_ddruncore cacheTOR Inserts : DRd_Prefs issued by iA Cores targeting DDR Mem that Missed the LLC - HOMed remotelyevent=0x35,umask=0xc897060101TOR Inserts : DRd_Prefs issued by iA Cores targeting DDR Mem that Missed the LLC - HOMed remotely : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.   
Does not include addressless requests such as locks and interruptsunc_cha_tor_inserts.ia_miss_drd_pref_remote_pmmuncore cacheTOR Inserts : DRd_Prefs issued by iA Cores targeting PMM Mem that Missed the LLC - HOMed remotelyevent=0x35,umask=0xc8970a0101TOR Inserts : DRd_Prefs issued by iA Cores targeting PMM Mem that Missed the LLC - HOMed remotely : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.   Does not include addressless requests such as locks and interruptsunc_cha_tor_inserts.ia_miss_drd_remoteuncore cacheTOR Inserts : DRds issued by iA Cores that Missed the LLC - HOMed remotelyevent=0x35,umask=0xc8177e0101TOR Inserts : DRds issued by iA Cores that Missed the LLC - HOMed remotely : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.   Does not include addressless requests such as locks and interruptsunc_cha_tor_inserts.ia_miss_drd_remote_ddruncore cacheTOR Inserts : DRds issued by iA Cores targeting DDR Mem that Missed the LLC - HOMed remotelyevent=0x35,umask=0xc817060101TOR Inserts : DRds issued by iA Cores targeting DDR Mem that Missed the LLC - HOMed remotely : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.   Does not include addressless requests such as locks and interruptsunc_cha_tor_inserts.ia_miss_drd_remote_pmmuncore cacheTOR Inserts : DRds issued by iA Cores targeting PMM Mem that Missed the LLC - HOMed remotelyevent=0x35,umask=0xc8170a0101TOR Inserts : DRds issued by iA Cores targeting PMM Mem that Missed the LLC - HOMed remotely : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.   Does not include addressless requests such as locks and interruptsunc_cha_tor_inserts.ia_miss_full_streaming_wruncore cacheTOR Inserts; WCiLF misses from local IAevent=0x35,umask=0xc867fe0101TOR Inserts; Data read from local IA that misses in the snoop filterunc_cha_tor_inserts.ia_miss_full_streaming_wr_ddruncore cacheTOR Inserts; WCiLF misses from local IAevent=0x35,umask=0xc867860101TOR Inserts; Data read from local IA that misses in the snoop filterunc_cha_tor_inserts.ia_miss_full_streaming_wr_dramuncore cacheThis event is deprecated. Refer to new event UNC_CHA_TOR_INSERTS.IA_MISS_WCILF_DDRevent=0x35,umask=0xc867860111unc_cha_tor_inserts.ia_miss_full_streaming_wr_local_ddruncore cacheTOR Inserts; WCiLF misses from local IAevent=0x35,umask=0xc866860101TOR Inserts; Data read from local IA that misses in the snoop filterunc_cha_tor_inserts.ia_miss_full_streaming_wr_local_dramuncore cacheThis event is deprecated. Refer to new event UNC_CHA_TOR_INSERTS.IA_MISS_LOCAL_WCILF_DDRevent=0x35,umask=0xc866860111unc_cha_tor_inserts.ia_miss_full_streaming_wr_local_pmmuncore cacheTOR Inserts; WCiLF misses from local IAevent=0x35,umask=0xc8668a0101TOR Inserts; Data read from local IA that misses in the snoop filterunc_cha_tor_inserts.ia_miss_full_streaming_wr_pmmuncore cacheTOR Inserts; WCiLF misses from local IAevent=0x35,umask=0xc8678a0101TOR Inserts; Data read from local IA that misses in the snoop filterunc_cha_tor_inserts.ia_miss_full_streaming_wr_remote_ddruncore cacheTOR Inserts; WCiLF misses from local IAevent=0x35,umask=0xc867060101TOR Inserts; Data read from local IA that misses in the snoop filterunc_cha_tor_inserts.ia_miss_full_streaming_wr_remote_dramuncore cacheThis event is deprecated. 
Refer to new event UNC_CHA_TOR_INSERTS.IA_MISS_REMOTE_WCILF_DDRevent=0x35,umask=0xc867060111unc_cha_tor_inserts.ia_miss_full_streaming_wr_remote_pmmuncore cacheTOR Inserts; WCiLF misses from local IAevent=0x35,umask=0xc8670a0101TOR Inserts; Data read from local IA that misses in the snoop filterunc_cha_tor_inserts.ia_miss_itomuncore cacheTOR Inserts : ItoMs issued by iA Cores that Missed LLCevent=0x35,umask=0xcc47fe0101TOR Inserts : ItoMs issued by iA Cores that Missed LLC : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.   Does not include addressless requests such as locks and interruptsunc_cha_tor_inserts.ia_miss_llcprefcodeuncore cacheTOR Inserts : LLCPrefCode issued by iA Cores that missed the LLCevent=0x35,umask=0xcccffe0101TOR Inserts : LLCPrefCode issued by iA Cores that missed the LLC : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.   Does not include addressless requests such as locks and interruptsunc_cha_tor_inserts.ia_miss_llcprefdatauncore cacheTOR Inserts : LLCPrefData issued by iA Cores that missed the LLCevent=0x35,umask=0xccd7fe0101TOR Inserts : LLCPrefData issued by iA Cores that missed the LLC : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.   Does not include addressless requests such as locks and interruptsunc_cha_tor_inserts.ia_miss_llcprefrfouncore cacheTOR Inserts : LLCPrefRFO issued by iA Cores that missed the LLCevent=0x35,umask=0xccc7fe0101TOR Inserts : LLCPrefRFO issued by iA Cores that missed the LLC : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.   Does not include addressless requests such as locks and interruptsunc_cha_tor_inserts.ia_miss_local_wcilf_ddruncore cacheTOR Inserts : WCiLFs issued by iA Cores targeting DDR that missed the LLC - HOMed locallyevent=0x35,umask=0xc866860101TOR Inserts : WCiLFs issued by iA Cores targeting DDR that missed the LLC - HOMed locally : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.   Does not include addressless requests such as locks and interruptsunc_cha_tor_inserts.ia_miss_local_wcilf_pmmuncore cacheTOR Inserts : WCiLFs issued by iA Cores targeting PMM that missed the LLC - HOMed locallyevent=0x35,umask=0xc8668a0101TOR Inserts : WCiLFs issued by iA Cores targeting PMM that missed the LLC - HOMed locally : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.   Does not include addressless requests such as locks and interruptsunc_cha_tor_inserts.ia_miss_local_wcil_ddruncore cacheTOR Inserts : WCiLs issued by iA Cores targeting DDR that missed the LLC - HOMed locallyevent=0x35,umask=0xc86e860101TOR Inserts : WCiLs issued by iA Cores targeting DDR that missed the LLC - HOMed locally : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.   
Does not include addressless requests such as locks and interruptsunc_cha_tor_inserts.ia_miss_local_wcil_pmmuncore cacheTOR Inserts : WCiLs issued by iA Cores targeting PMM that missed the LLC - HOMed locallyevent=0x35,umask=0xc86e8a0101TOR Inserts : WCiLs issued by iA Cores targeting PMM that missed the LLC - HOMed locally : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.   Does not include addressless requests such as locks and interruptsunc_cha_tor_inserts.ia_miss_partial_streaming_wruncore cacheTOR Inserts; WCiL misses from local IAevent=0x35,umask=0xc86ffe0101TOR Inserts; Data read from local IA that misses in the snoop filterunc_cha_tor_inserts.ia_miss_partial_streaming_wr_ddruncore cacheTOR Inserts; WCiL misses from local IAevent=0x35,umask=0xc86f860101TOR Inserts; Data read from local IA that misses in the snoop filterunc_cha_tor_inserts.ia_miss_partial_streaming_wr_dramuncore cacheThis event is deprecated. Refer to new event UNC_CHA_TOR_INSERTS.IA_MISS_WCIL_DDRevent=0x35,umask=0xc86f860111unc_cha_tor_inserts.ia_miss_partial_streaming_wr_local_ddruncore cacheTOR Inserts; WCiL misses from local IAevent=0x35,umask=0xc86e860101TOR Inserts; Data read from local IA that misses in the snoop filterunc_cha_tor_inserts.ia_miss_partial_streaming_wr_local_dramuncore cacheThis event is deprecated. Refer to new event UNC_CHA_TOR_INSERTS.IA_MISS_LOCAL_WCIL_DDRevent=0x35,umask=0xc86e860111unc_cha_tor_inserts.ia_miss_partial_streaming_wr_local_pmmuncore cacheTOR Inserts; WCiL misses from local IAevent=0x35,umask=0xc86e8a0101TOR Inserts; Data read from local IA that misses in the snoop filterunc_cha_tor_inserts.ia_miss_partial_streaming_wr_pmmuncore cacheTOR Inserts; WCiL misses from local IAevent=0x35,umask=0xc86f8a0101TOR Inserts; Data read from local IA that misses in the snoop filterunc_cha_tor_inserts.ia_miss_partial_streaming_wr_remote_ddruncore cacheTOR Inserts; WCiL misses from local IAevent=0x35,umask=0xc86f060101TOR Inserts; Data read from local IA that misses in the snoop filterunc_cha_tor_inserts.ia_miss_partial_streaming_wr_remote_dramuncore cacheThis event is deprecated. Refer to new event UNC_CHA_TOR_INSERTS.IA_MISS_REMOTE_WCIL_DDRevent=0x35,umask=0xc86f060111unc_cha_tor_inserts.ia_miss_partial_streaming_wr_remote_pmmuncore cacheTOR Inserts; WCiL misses from local IAevent=0x35,umask=0xc86f0a0101TOR Inserts; Data read from local IA that misses in the snoop filterunc_cha_tor_inserts.ia_miss_remote_wcilf_ddruncore cacheTOR Inserts : WCiLFs issued by iA Cores targeting DDR that missed the LLC - HOMed remotelyevent=0x35,umask=0xc867060101TOR Inserts : WCiLFs issued by iA Cores targeting DDR that missed the LLC - HOMed remotely : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.   Does not include addressless requests such as locks and interruptsunc_cha_tor_inserts.ia_miss_remote_wcilf_pmmuncore cacheTOR Inserts : WCiLFs issued by iA Cores targeting PMM that missed the LLC - HOMed remote memoryevent=0x35,umask=0xc8670a0101TOR Inserts : WCiLFs issued by iA Cores targeting PMM that missed the LLC - HOMed remotely : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.   
Does not include addressless requests such as locks and interruptsunc_cha_tor_inserts.ia_miss_remote_wcil_ddruncore cacheTOR Inserts : WCiLs issued by iA Cores targeting DDR that missed the LLC - HOMed remotelyevent=0x35,umask=0xc86f060101TOR Inserts : WCiLs issued by iA Cores targeting DDR that missed the LLC - HOMed remotely : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.   Does not include addressless requests such as locks and interruptsunc_cha_tor_inserts.ia_miss_remote_wcil_pmmuncore cacheTOR Inserts : WCiLs issued by iA Cores targeting PMM that missed the LLC - HOMed remotelyevent=0x35,umask=0xc86f0a0101TOR Inserts : WCiLs issued by iA Cores targeting PMM that missed the LLC - HOMed remotely : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.   Does not include addressless requests such as locks and interruptsunc_cha_tor_inserts.ia_miss_rfouncore cacheTOR Inserts : RFOs issued by iA Cores that Missed the LLCevent=0x35,umask=0xc807fe0101TOR Inserts : RFOs issued by iA Cores that Missed the LLC : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.   Does not include addressless requests such as locks and interruptsunc_cha_tor_inserts.ia_miss_rfo_localuncore cacheTOR Inserts : RFOs issued by iA Cores that Missed the LLC - HOMed locallyevent=0x35,umask=0xc806fe0101TOR Inserts : RFOs issued by iA Cores that Missed the LLC - HOMed locally : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.   Does not include addressless requests such as locks and interruptsunc_cha_tor_inserts.ia_miss_rfo_prefuncore cacheTOR Inserts : RFO_Prefs issued by iA Cores that Missed the LLCevent=0x35,umask=0xc887fe0101TOR Inserts : RFO_Prefs issued by iA Cores that Missed the LLC : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.   Does not include addressless requests such as locks and interruptsunc_cha_tor_inserts.ia_miss_rfo_pref_localuncore cacheTOR Inserts : RFO_Prefs issued by iA Cores that Missed the LLC - HOMed locallyevent=0x35,umask=0xc886fe0101TOR Inserts : RFO_Prefs issued by iA Cores that Missed the LLC - HOMed locally : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.   Does not include addressless requests such as locks and interruptsunc_cha_tor_inserts.ia_miss_rfo_pref_remoteuncore cacheTOR Inserts : RFO_Prefs issued by iA Cores that Missed the LLC - HOMed remotelyevent=0x35,umask=0xc8877e0101TOR Inserts : RFO_Prefs issued by iA Cores that Missed the LLC - HOMed remotely : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.   Does not include addressless requests such as locks and interruptsunc_cha_tor_inserts.ia_miss_rfo_remoteuncore cacheTOR Inserts : RFOs issued by iA Cores that Missed the LLC - HOMed remotelyevent=0x35,umask=0xc8077e0101TOR Inserts : RFOs issued by iA Cores that Missed the LLC - HOMed remotely : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.   
Does not include addressless requests such as locks and interruptsunc_cha_tor_inserts.ia_miss_specitomuncore cacheTOR Inserts : SpecItoMs issued by iA Cores that missed the LLCevent=0x35,umask=0xcc57fe0101TOR Inserts : SpecItoMs issued by iA Cores that missed the LLC : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.   Does not include addressless requests such as locks and interruptsunc_cha_tor_inserts.ia_miss_ucrdfuncore cacheTOR Inserts : UCRdFs issued by iA Cores that Missed LLCevent=0x35,umask=0xc877de0101TOR Inserts : UCRdFs issued by iA Cores that Missed LLC : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.   Does not include addressless requests such as locks and interruptsunc_cha_tor_inserts.ia_miss_wciluncore cacheTOR Inserts : WCiLs issued by iA Cores that Missed the LLCevent=0x35,umask=0xc86ffe0101TOR Inserts : WCiLs issued by iA Cores that Missed the LLC : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.   Does not include addressless requests such as locks and interruptsunc_cha_tor_inserts.ia_miss_wcilfuncore cacheTOR Inserts : WCiLF issued by iA Cores that Missed the LLCevent=0x35,umask=0xc867fe0101TOR Inserts : WCiLF issued by iA Cores that Missed the LLC : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.   Does not include addressless requests such as locks and interruptsunc_cha_tor_inserts.ia_miss_wcilf_ddruncore cacheTOR Inserts : WCiLFs issued by iA Cores targeting DDR that missed the LLCevent=0x35,umask=0xc867860101TOR Inserts : WCiLFs issued by iA Cores targeting DDR that missed the LLC : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.   Does not include addressless requests such as locks and interruptsunc_cha_tor_inserts.ia_miss_wcilf_pmmuncore cacheTOR Inserts : WCiLFs issued by iA Cores targeting PMM that missed the LLCevent=0x35,umask=0xc8678a0101TOR Inserts : WCiLFs issued by iA Cores targeting PMM that missed the LLC : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.   Does not include addressless requests such as locks and interruptsunc_cha_tor_inserts.ia_miss_wcil_ddruncore cacheTOR Inserts : WCiLs issued by iA Cores targeting DDR that missed the LLCevent=0x35,umask=0xc86f860101TOR Inserts : WCiLs issued by iA Cores targeting DDR that missed the LLC : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.   Does not include addressless requests such as locks and interruptsunc_cha_tor_inserts.ia_miss_wcil_pmmuncore cacheTOR Inserts : WCiLs issued by iA Cores targeting PMM that missed the LLCevent=0x35,umask=0xc86f8a0101TOR Inserts : WCiLs issued by iA Cores targeting PMM that missed the LLC : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.   Does not include addressless requests such as locks and interruptsunc_cha_tor_inserts.ia_miss_wiluncore cacheTOR Inserts : WiLs issued by iA Cores that Missed LLCevent=0x35,umask=0xc87fde0101TOR Inserts : WiLs issued by iA Cores that Missed LLC : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.   
Does not include addressless requests such as locks and interruptsunc_cha_tor_inserts.ia_rfouncore cacheTOR Inserts : RFOs issued by iA Coresevent=0x35,umask=0xc807ff0101TOR Inserts : RFOs issued by iA Cores : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.   Does not include addressless requests such as locks and interruptsunc_cha_tor_inserts.ia_rfo_prefuncore cacheTOR Inserts : RFO_Prefs issued by iA Coresevent=0x35,umask=0xc887ff0101TOR Inserts : RFO_Prefs issued by iA Cores : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.   Does not include addressless requests such as locks and interruptsunc_cha_tor_inserts.ia_specitomuncore cacheTOR Inserts : SpecItoMs issued by iA Coresevent=0x35,umask=0xcc57ff0101TOR Inserts : SpecItoMs issued by iA Cores : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.   Does not include addressless requests such as locks and interruptsunc_cha_tor_inserts.ia_wbeftoiuncore cacheTOR Inserts : WBEFtoIs issued by an IA Core.  Non Modified Write Backsevent=0x35,umask=0xcc37ff0101WbEFtoIs issued by iA Cores .  (Non Modified Write Backs)  :Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.  Does not include addressless requests such as locks and interruptsunc_cha_tor_inserts.ia_wbmtoeuncore cacheTOR Inserts : WBMtoEs issued by an IA Core.  Non Modified Write Backsevent=0x35,umask=0xcc2fff0101WbMtoEs issued by iA Cores .  (Non Modified Write Backs)  :Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.  Does not include addressless requests such as locks and interruptsunc_cha_tor_inserts.ia_wbstoiuncore cacheTOR Inserts : WBStoIs issued by an IA Core.  Non Modified Write Backsevent=0x35,umask=0xcc67ff0101WbStoIs issued by iA Cores .  (Non Modified Write Backs)  :Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.  Does not include addressless requests such as locks and interruptsunc_cha_tor_inserts.ia_wciluncore cacheTOR Inserts : WCiLs issued by iA Coresevent=0x35,umask=0xc86fff0101TOR Inserts : WCiLs issued by iA Cores : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.   Does not include addressless requests such as locks and interruptsunc_cha_tor_inserts.ia_wcilfuncore cacheTOR Inserts : WCiLF issued by iA Coresevent=0x35,umask=0xc867ff0101TOR Inserts : WCiLF issued by iA Cores : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.   Does not include addressless requests such as locks and interruptsunc_cha_tor_inserts.iouncore cacheTOR Inserts : All requests from IO Devicesevent=0x35,umask=0xc001ff0401TOR Inserts : All requests from IO Devices : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.   Does not include addressless requests such as locks and interruptsunc_cha_tor_inserts.io_clflushuncore cacheTOR Inserts : CLFlushes issued by IO Devicesevent=0x35,umask=0xc8c3ff0401TOR Inserts : CLFlushes issued by IO Devices : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.   
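These encodings can be programmed on the CHA PMU with stock Linux perf. Below is a minimal usage sketch in Python; it assumes a perf build whose event tables carry these symbolic names (otherwise the raw event=/umask= spelling listed above is the fallback), an Intel server that exposes the CHA PMU as uncore_cha, and root privileges, since uncore counting is system-wide.

# Minimal sketch: count LLC data-read misses from iA cores for one second
# using unc_cha_tor_inserts.ia_miss_drd (event=0x35,umask=0xc817fe01).
import subprocess

subprocess.run(
    ["perf", "stat",
     "-e", "unc_cha_tor_inserts.ia_miss_drd",            # symbolic name
     # "-e", "uncore_cha/event=0x35,umask=0xc817fe01/",  # raw equivalent
     "-a",              # uncore events count socket-wide, not per task
     "sleep", "1"],
    check=True,
)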
TOR Inserts from IO devices and other request sources:

unc_cha_tor_inserts.io -- event=0x35,umask=0xc001ff04 -- All requests from IO devices
unc_cha_tor_inserts.io_clflush -- event=0x35,umask=0xc8c3ff04 -- CLFlushes issued by IO devices
unc_cha_tor_inserts.io_hit -- event=0x35,umask=0xc001fd04 -- All requests from IO devices that hit the LLC
unc_cha_tor_inserts.io_hit_itom -- event=0x35,umask=0xcc43fd04 -- ItoMs issued by IO devices that hit the LLC
unc_cha_tor_inserts.io_hit_pcirdcur -- event=0x35,umask=0xc8f3fd04 -- PCIRdCurs issued by IO devices that hit the LLC
unc_cha_tor_inserts.io_hit_rfo -- event=0x35,umask=0xc803fd04 -- RFOs issued by IO devices that hit the LLC
unc_cha_tor_inserts.io_itom -- event=0x35,umask=0xcc43ff04 -- ItoMs issued by IO devices
unc_cha_tor_inserts.io_itomcachenear -- event=0x35,umask=0xcd43ff04 -- ItoMCacheNears (partial write requests) from IO devices
unc_cha_tor_inserts.io_itomcachenear_local -- event=0x35,umask=0xcd42ff04 -- ItoMCacheNears (partial write requests) from IO devices to locally HOMed memory
unc_cha_tor_inserts.io_itomcachenear_remote -- event=0x35,umask=0xcd437f04 -- ItoMCacheNears (partial write requests) from IO devices to remotely HOMed memory
unc_cha_tor_inserts.io_itom_local -- event=0x35,umask=0xcc42ff04 -- ItoMs issued by IO devices to locally HOMed memory
unc_cha_tor_inserts.io_itom_remote -- event=0x35,umask=0xcc437f04 -- ItoMs issued by IO devices to remotely HOMed memory
unc_cha_tor_inserts.io_miss -- event=0x35,umask=0xc001fe04 -- All requests from IO devices that missed the LLC
unc_cha_tor_inserts.io_miss_itom -- event=0x35,umask=0xcc43fe04 -- ItoMs issued by IO devices that missed the LLC
unc_cha_tor_inserts.io_miss_pcirdcur -- event=0x35,umask=0xc8f3fe04 -- PCIRdCurs issued by IO devices that missed the LLC
unc_cha_tor_inserts.io_miss_rfo -- event=0x35,umask=0xc803fe04 -- RFOs issued by IO devices that missed the LLC
unc_cha_tor_inserts.io_pcirdcur -- event=0x35,umask=0xc8f3ff04 -- PCIRdCurs issued by IO devices
unc_cha_tor_inserts.io_pcirdcur_local -- event=0x35,umask=0xc8f2ff04 -- PCIRdCur (read) transactions from an IO device addressing memory on the local socket
unc_cha_tor_inserts.io_pcirdcur_remote -- event=0x35,umask=0xc8f37f04 -- PCIRdCur (read) transactions from an IO device addressing memory on a remote socket
unc_cha_tor_inserts.io_rfo -- event=0x35,umask=0xc803ff04 -- RFOs issued by IO devices
unc_cha_tor_inserts.io_wbmtoi -- event=0x35,umask=0xcc23ff04 -- WbMtoIs issued by IO devices
unc_cha_tor_inserts.ipq -- event=0x35,umask=8 -- IPQ
unc_cha_tor_inserts.irq_ia -- event=0x35,umask=1 -- IRQ - iA (from an iA core)
unc_cha_tor_inserts.irq_non_ia -- event=0x35,umask=0x10 -- IRQ - non-iA
unc_cha_tor_inserts.isoc -- event=0x35 -- Just ISOC
unc_cha_tor_inserts.local_tgt -- event=0x35 -- Just Local Targets
unc_cha_tor_inserts.loc_all -- event=0x35,umask=0xc000ff05 -- All from local iA and IO (all locally initiated requests)
unc_cha_tor_inserts.loc_ia -- event=0x35,umask=0xc000ff01 -- All from local iA (all locally initiated requests from iA cores)
unc_cha_tor_inserts.loc_io -- event=0x35,umask=0xc000ff04 -- All from local IO (all locally generated IO traffic)
unc_cha_tor_inserts.match_opc -- event=0x35 -- Match the opcode in b[29:19] of the extended umask field
unc_cha_tor_inserts.miss -- event=0x35 -- Just Misses
unc_cha_tor_inserts.mmcfg -- event=0x35 -- MMCFG Access
unc_cha_tor_inserts.nearmem -- event=0x35 -- Just NearMem
unc_cha_tor_inserts.noncoh -- event=0x35 -- Just NonCoherent
unc_cha_tor_inserts.not_nearmem -- event=0x35 -- Just NotNearMem
unc_cha_tor_inserts.pmm -- event=0x35 -- PMM Access
unc_cha_tor_inserts.premorph_opc -- event=0x35 -- Match the pre-morphed opcode in b[29:19] of the extended umask field
unc_cha_tor_inserts.prq_iosf -- event=0x35,umask=4 -- PRQ - IOSF (from a PCIe device)
unc_cha_tor_inserts.prq_non_iosf -- event=0x35,umask=0x20 -- PRQ - non-IOSF
unc_cha_tor_inserts.remote_tgt -- event=0x35 -- Just Remote Targets
unc_cha_tor_inserts.rrq -- event=0x35,umask=0x40 -- RRQ
unc_cha_tor_inserts.wbq -- event=0x35,umask=0x80 -- WBQ
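Each TOR insert moves one 64-byte cache line, so the io_pcirdcur counts above are commonly scaled by 64 to estimate inbound (device-to-memory) PCIe read bandwidth. A sketch under the same assumptions as the previous example, plus the assumption that perf stat's CSV mode (-x,) emits the value, unit and event name as the first three fields:

# Estimate inbound PCIe read bandwidth:
#   bytes ~= unc_cha_tor_inserts.io_pcirdcur * 64  (one cache line per insert)
import subprocess

INTERVAL = 1.0  # measurement window in seconds (illustrative choice)

stderr = subprocess.run(
    ["perf", "stat", "-x", ",",
     "-e", "unc_cha_tor_inserts.io_pcirdcur",
     "-a", "sleep", str(INTERVAL)],
    capture_output=True, text=True, check=True,
).stderr  # perf stat writes counter values to stderr

total = 0.0
for line in stderr.splitlines():
    fields = line.split(",")
    if len(fields) >= 3 and "io_pcirdcur" in fields[2]:
        try:
            total += float(fields[0])
        except ValueError:
            pass  # skip non-numeric markers such as not-counted placeholders

print(f"inbound PCIe read bandwidth ~= {total * 64 / INTERVAL / 1e6:.1f} MB/s")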
UNC_CHA_TOR_OCCUPANCY (uncore cache, event=0x36)

Each TOR Occupancy subevent accumulates, for every cycle, the number of valid entries in the TOR that match the qualifications specified by that subevent. As with TOR Inserts, none of these subevents include addressless requests such as locks and interrupts, and the per-record topic and flag digits are stated once here rather than repeated.

unc_cha_tor_occupancy.ddr -- event=0x36 -- DDR4 Access
unc_cha_tor_occupancy.evict -- event=0x36,umask=2 -- SF/LLC Evictions (TOR allocation occurred as a result of SF/LLC evictions, i.e. came from the ISMQ)
unc_cha_tor_occupancy.hit -- event=0x36 -- Just Hits
unc_cha_tor_occupancy.ia -- event=0x36,umask=0xc001ff01 -- All requests from iA Cores
unc_cha_tor_occupancy.ia_crd -- event=0x36,umask=0xc80fff01 -- CRds issued by iA Cores
unc_cha_tor_occupancy.ia_drd -- event=0x36,umask=0xc817ff01 -- DRds issued by iA Cores
unc_cha_tor_occupancy.ia_drd_opt -- event=0x36,umask=0xc827ff01 -- DRd_Opts issued by iA Cores
unc_cha_tor_occupancy.ia_drd_opt_pref -- event=0x36,umask=0xc8a7ff01 -- DRd_Opt_Prefs issued by iA Cores
unc_cha_tor_occupancy.ia_drd_pref -- event=0x36,umask=0xc897ff01 -- DRd_Prefs issued by iA Cores
unc_cha_tor_occupancy.ia_hit -- event=0x36,umask=0xc001fd01 -- All requests from iA Cores that hit the LLC
unc_cha_tor_occupancy.ia_hit_crd -- event=0x36,umask=0xc80ffd01 -- CRds issued by iA Cores that hit the LLC
unc_cha_tor_occupancy.ia_hit_crd_pref -- event=0x36,umask=0xc88ffd01 -- CRd_Prefs issued by iA Cores that hit the LLC
unc_cha_tor_occupancy.ia_hit_drd -- event=0x36,umask=0xc817fd01 -- DRds issued by iA Cores that hit the LLC
unc_cha_tor_occupancy.ia_hit_drd_opt -- event=0x36,umask=0xc827fd01 -- DRd_Opts issued by iA Cores that hit the LLC
unc_cha_tor_occupancy.ia_hit_drd_opt_pref -- event=0x36,umask=0xc8a7fd01 -- DRd_Opt_Prefs issued by iA Cores that hit the LLC
unc_cha_tor_occupancy.ia_hit_drd_pref -- event=0x36,umask=0xc897fd01 -- DRd_Prefs issued by iA Cores that hit the LLC
unc_cha_tor_occupancy.ia_hit_llcprefcode -- event=0x36,umask=0xcccffd01 -- LLCPrefCode issued by iA Cores that hit the LLC
unc_cha_tor_occupancy.ia_hit_llcprefdata -- event=0x36,umask=0xccd7fd01 -- LLCPrefData issued by iA Cores that hit the LLC
unc_cha_tor_occupancy.ia_hit_llcprefrfo -- event=0x36,umask=0xccc7fd01 -- LLCPrefRFO issued by iA Cores that hit the LLC
unc_cha_tor_occupancy.ia_hit_rfo -- event=0x36,umask=0xc807fd01 -- RFOs issued by iA Cores that hit the LLC
unc_cha_tor_occupancy.ia_hit_rfo_pref -- event=0x36,umask=0xc887fd01 -- RFO_Prefs issued by iA Cores that hit the LLC
unc_cha_tor_occupancy.ia_llcprefcode -- event=0x36,umask=0xcccfff01 -- LLCPrefCode issued by iA Cores
unc_cha_tor_occupancy.ia_llcprefdata -- event=0x36,umask=0xccd7ff01 -- LLCPrefData issued by iA Cores
unc_cha_tor_occupancy.ia_llcprefrfo -- event=0x36,umask=0xccc7ff01 -- LLCPrefRFO issued by iA Cores
unc_cha_tor_occupancy.ia_miss -- event=0x36,umask=0xc001fe01 -- All requests from iA Cores that missed the LLC
unc_cha_tor_occupancy.ia_miss_crd -- event=0x36,umask=0xc80ffe01 -- CRds issued by iA Cores that missed the LLC
unc_cha_tor_occupancy.ia_miss_crd_pref -- event=0x36,umask=0xc88ffe01 -- CRd_Prefs issued by iA Cores that missed the LLC
unc_cha_tor_occupancy.ia_miss_drd -- event=0x36,umask=0xc817fe01 -- DRds issued by iA Cores that missed the LLC
unc_cha_tor_occupancy.ia_miss_drd_ddr -- event=0x36,umask=0xc8178601 -- DRds issued by iA Cores targeting DDR memory that missed the LLC
unc_cha_tor_occupancy.ia_miss_drd_local -- event=0x36,umask=0xc816fe01 -- DRds issued by iA Cores that missed the LLC - HOMed locally
unc_cha_tor_occupancy.ia_miss_drd_opt -- event=0x36,umask=0xc827fe01 -- DRd_Opts issued by iA Cores that missed the LLC
unc_cha_tor_occupancy.ia_miss_drd_opt_pref -- event=0x36,umask=0xc8a7fe01 -- DRd_Opt_Prefs issued by iA Cores that missed the LLC
unc_cha_tor_occupancy.ia_miss_drd_pmm -- event=0x36,umask=0xc8178a01 -- DRds issued by iA Cores targeting PMM memory that missed the LLC
unc_cha_tor_occupancy.ia_miss_drd_pref -- event=0x36,umask=0xc897fe01 -- DRd_Prefs issued by iA Cores that missed the LLC
unc_cha_tor_occupancy.ia_miss_drd_remote -- event=0x36,umask=0xc8177e01 -- DRds issued by iA Cores that missed the LLC - HOMed remotely
unc_cha_tor_occupancy.ia_miss_full_streaming_wr -- event=0x36,umask=0xc867fe01 -- WCiLF (full-line streaming write) misses from local IA
unc_cha_tor_occupancy.ia_miss_full_streaming_wr_ddr -- event=0x36,umask=0xc8678601 -- WCiLF misses from local IA, targeting DDR
unc_cha_tor_occupancy.ia_miss_full_streaming_wr_local_ddr -- event=0x36,umask=0xc8668601 -- WCiLF misses from local IA, targeting DDR - HOMed locally
unc_cha_tor_occupancy.ia_miss_full_streaming_wr_local_pmm -- event=0x36,umask=0xc8668a01 -- WCiLF misses from local IA, targeting PMM - HOMed locally
unc_cha_tor_occupancy.ia_miss_full_streaming_wr_pmm -- event=0x36,umask=0xc8678a01 -- WCiLF misses from local IA, targeting PMM
unc_cha_tor_occupancy.ia_miss_full_streaming_wr_remote_ddr -- event=0x36,umask=0xc8670601 -- WCiLF misses from local IA, targeting DDR - HOMed remotely
unc_cha_tor_occupancy.ia_miss_full_streaming_wr_remote_pmm -- event=0x36,umask=0xc8670a01 -- WCiLF misses from local IA, targeting PMM - HOMed remotely
unc_cha_tor_occupancy.ia_miss_llcprefcode -- event=0x36,umask=0xcccffe01 -- LLCPrefCode issued by iA Cores that missed the LLC
unc_cha_tor_occupancy.ia_miss_llcprefdata -- event=0x36,umask=0xccd7fe01 -- LLCPrefData issued by iA Cores that missed the LLC
unc_cha_tor_occupancy.ia_miss_llcprefrfo -- event=0x36,umask=0xccc7fe01 -- LLCPrefRFO issued by iA Cores that missed the LLC
Does not include addressless requests such as locks and interruptsunc_cha_tor_occupancy.ia_miss_partial_streaming_wruncore cacheTOR Occupancy; WCiL misses from local IAevent=0x36,umask=0xc86ffe0101TOR Occupancy; Data read from local IA that misses in the snoop filterunc_cha_tor_occupancy.ia_miss_partial_streaming_wr_ddruncore cacheTOR Occupancy; WCiL misses from local IAevent=0x36,umask=0xc86f860101TOR Occupancy; Data read from local IA that misses in the snoop filterunc_cha_tor_occupancy.ia_miss_partial_streaming_wr_local_ddruncore cacheTOR Occupancy; WCiL misses from local IAevent=0x36,umask=0xc86e860101TOR Occupancy; Data read from local IA that misses in the snoop filterunc_cha_tor_occupancy.ia_miss_partial_streaming_wr_local_pmmuncore cacheTOR Occupancy; WCiL misses from local IAevent=0x36,umask=0xc86e8a0101TOR Occupancy; Data read from local IA that misses in the snoop filterunc_cha_tor_occupancy.ia_miss_partial_streaming_wr_pmmuncore cacheTOR Occupancy; WCiL misses from local IAevent=0x36,umask=0xc86f8a0101TOR Occupancy; Data read from local IA that misses in the snoop filterunc_cha_tor_occupancy.ia_miss_partial_streaming_wr_remote_ddruncore cacheTOR Occupancy; WCiL misses from local IAevent=0x36,umask=0xc86f060101TOR Occupancy; Data read from local IA that misses in the snoop filterunc_cha_tor_occupancy.ia_miss_partial_streaming_wr_remote_pmmuncore cacheTOR Occupancy; WCiL misses from local IAevent=0x36,umask=0xc86f0a0101TOR Occupancy; Data read from local IA that misses in the snoop filterunc_cha_tor_occupancy.ia_miss_rfouncore cacheTOR Occupancy : RFOs issued by iA Cores that Missed the LLCevent=0x36,umask=0xc807fe0101TOR Occupancy : RFOs issued by iA Cores that Missed the LLC : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent.     Does not include addressless requests such as locks and interruptsunc_cha_tor_occupancy.ia_miss_rfo_localuncore cacheTOR Occupancy : RFOs issued by iA Cores that Missed the LLC - HOMed locallyevent=0x36,umask=0xc806fe0101TOR Occupancy : RFOs issued by iA Cores that Missed the LLC - HOMed locally : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent.     Does not include addressless requests such as locks and interruptsunc_cha_tor_occupancy.ia_miss_rfo_prefuncore cacheTOR Occupancy : RFO_Prefs issued by iA Cores that Missed the LLCevent=0x36,umask=0xc887fe0101TOR Occupancy : RFO_Prefs issued by iA Cores that Missed the LLC : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent.     Does not include addressless requests such as locks and interruptsunc_cha_tor_occupancy.ia_miss_rfo_pref_localuncore cacheTOR Occupancy : RFO_Prefs issued by iA Cores that Missed the LLC - HOMed locallyevent=0x36,umask=0xc886fe0101TOR Occupancy : RFO_Prefs issued by iA Cores that Missed the LLC - HOMed locally : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent.     
Does not include addressless requests such as locks and interruptsunc_cha_tor_occupancy.ia_miss_rfo_pref_remoteuncore cacheTOR Occupancy : RFO_Prefs issued by iA Cores that Missed the LLC - HOMed remotelyevent=0x36,umask=0xc8877e0101TOR Occupancy : RFO_Prefs issued by iA Cores that Missed the LLC - HOMed remotely : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent.     Does not include addressless requests such as locks and interruptsunc_cha_tor_occupancy.ia_miss_rfo_remoteuncore cacheTOR Occupancy : RFOs issued by iA Cores that Missed the LLC - HOMed remotelyevent=0x36,umask=0xc8077e0101TOR Occupancy : RFOs issued by iA Cores that Missed the LLC - HOMed remotely : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent.     Does not include addressless requests such as locks and interruptsunc_cha_tor_occupancy.ia_miss_specitomuncore cacheTOR Occupancy : SpecItoMs issued by iA Cores that missed the LLCevent=0x36,umask=0xcc57fe0101TOR Occupancy : SpecItoMs issued by iA Cores that missed the LLC: For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interruptsunc_cha_tor_occupancy.ia_rfouncore cacheTOR Occupancy : RFOs issued by iA Coresevent=0x36,umask=0xc807ff0101TOR Occupancy : RFOs issued by iA Cores : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent.     Does not include addressless requests such as locks and interruptsunc_cha_tor_occupancy.ia_rfo_prefuncore cacheTOR Occupancy : RFO_Prefs issued by iA Coresevent=0x36,umask=0xc887ff0101TOR Occupancy : RFO_Prefs issued by iA Cores : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent.     Does not include addressless requests such as locks and interruptsunc_cha_tor_occupancy.iouncore cacheTOR Occupancy : All requests from IO Devicesevent=0x36,umask=0xc001ff0401TOR Occupancy : All requests from IO Devices : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent.     Does not include addressless requests such as locks and interruptsunc_cha_tor_occupancy.io_hituncore cacheTOR Occupancy : All requests from IO Devices that hit the LLCevent=0x36,umask=0xc001fd0401TOR Occupancy : All requests from IO Devices that hit the LLC : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent.     Does not include addressless requests such as locks and interruptsunc_cha_tor_occupancy.io_hit_itomuncore cacheTOR Occupancy : ItoMs issued by IO Devices that Hit the LLCevent=0x36,umask=0xcc43fd0401TOR Occupancy : ItoMs issued by IO Devices that Hit the LLC : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent.     
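The miss-occupancy subevents above are most useful paired with the matching TOR-inserts counts: by Little's law, occupancy divided by inserts gives average residency per request. A minimal sketch, assuming the counter values have already been read out; the companion unc_cha_tor_inserts.ia_miss event and its event code 0x35 come from the wider event list, not from this section:

    # Minimal sketch: average LLC-miss latency in uncore clock cycles via
    # Little's law (occupancy-cycles / number of arrivals). Counts assumed
    # collected with something like:
    #   perf stat -a -e 'uncore_cha/event=0x36,umask=0xc001fe01/' \
    #                -e 'uncore_cha/event=0x35,umask=0xc001fe01/' -- sleep 10
    # (event 0x35 = TOR inserts is an assumption, defined outside this section)

    def avg_tor_residency_cycles(occupancy: int, inserts: int) -> float:
        """Average cycles a missing iA-core request sits in the TOR."""
        return occupancy / inserts if inserts else 0.0

    # Example figures: 1.2e9 occupancy-cycles over 4e6 missed requests -> 300 cycles
    print(avg_tor_residency_cycles(1_200_000_000, 4_000_000))

The same ratio works for any hit/miss pair in the table, as long as the occupancy and inserts subevents use the same umask qualifier.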
Requests from IO devices:

  .io                umask=0xc001ff04   all requests
  .io_hit            umask=0xc001fd04   all requests that hit the LLC
  .io_hit_itom       umask=0xcc43fd04   ItoMs that hit the LLC
  .io_hit_pcirdcur   umask=0xc8f3fd04   PCIRdCurs that hit the LLC
  .io_hit_rfo        umask=0xc803fd04   RFOs that hit the LLC
  .io_itom           umask=0xcc43ff04   ItoMs
  .io_miss           umask=0xc001fe04   all requests that missed the LLC
  .io_miss_itom      umask=0xcc43fe04   ItoMs that missed the LLC
  .io_miss_pcirdcur  umask=0xc8f3fe04   PCIRdCurs that missed the LLC
  .io_miss_rfo       umask=0xc803fe04   RFOs that missed the LLC
  .io_pcirdcur       umask=0xc8f3ff04   PCIRdCurs
  .io_rfo            umask=0xc803ff04   RFOs

Queue-source and filter subevents:

  .ipq               umask=0x8          IPQ
  .irq_ia            umask=0x1          IRQ - iA (from an iA core)
  .irq_non_ia        umask=0x10         IRQ - non iA
  .prq               umask=0x4          PRQ - IOSF (from a PCIe device)
  .prq_non_iosf      umask=0x20         PRQ - non IOSF
  .loc_all           umask=0xc000ff05   all locally initiated requests (local iA and IO)
  .loc_ia            umask=0xc000ff01   all locally initiated requests from iA cores
  .loc_io            umask=0xc000ff04   all locally generated IO traffic
  .isoc              (no umask)         just ISOC
  .local_tgt         (no umask)         just local targets
  .remote_tgt        (no umask)         just remote targets
  .miss              (no umask)         just misses
  .nearmem           (no umask)         just NearMem
  .not_nearmem       (no umask)         just NotNearMem
  .noncoh            (no umask)         just NonCoherent
  .mmcfg             (no umask)         MMCFG access
  .pmm               (no umask)         PMM access
  .match_opc         (no umask)         match the opcode in b[29:19] of the extended umask field
  .premorph_opc      (no umask)         match the pre-morphed opcode in b[29:19] of the extended umask field
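Any row of these tables can be handed to perf as a raw PMU event spec. A small helper for building the spec string, assuming the CHA PMU is exposed as uncore_cha with the usual sysfs "event"/"umask" format fields (PMU naming varies by kernel and machine; the helper itself is illustrative, not part of the table):

    from typing import Optional

    def perf_event_spec(event: int, umask: Optional[int],
                        pmu: str = "uncore_cha") -> str:
        """Build a perf raw-event spec from a row of the tables above."""
        if umask is None:  # filter-style subevents such as .isoc carry no umask
            return f"{pmu}/event={event:#x}/"
        return f"{pmu}/event={event:#x},umask={umask:#x}/"

    # unc_cha_tor_occupancy.io_miss from the IO table:
    print(perf_event_spec(0x36, 0xC001FE04))
    # -> uncore_cha/event=0x36,umask=0xc001fe04/
    # usable as: perf stat -a -e 'uncore_cha/event=0x36,umask=0xc001fe04/' -- sleep 1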
unc_cha_txr_horz_ads_used.* -- CMS Horizontal ADS Used (event=0xa6)
Number of packets using the Horizontal Anti-Deadlock Slot, broken down by ring type and CMS agent; "all" = credited + uncredited.

  .ad_all     umask=0x11   AD - all
  .ad_crd     umask=0x10   AD - credited
  .ad_uncrd   umask=0x1    AD - uncredited
  .bl_all     umask=0x44   BL - all
  .bl_crd     umask=0x40   BL - credited
  .bl_uncrd   umask=0x4    BL - uncredited

unc_cha_txr_horz_bypass.* -- CMS Horizontal Bypass Used (event=0xa7)
Number of packets bypassing the Horizontal Egress, broken down by ring type and CMS agent; "all" = credited + uncredited.

  .ad_all     umask=0x11   AD - all
  .ad_crd     umask=0x10   AD - credited
  .ad_uncrd   umask=0x1    AD - uncredited
  .ak         umask=0x2    AK
  .akc_uncrd  umask=0x80   AKC - uncredited
  .bl_all     umask=0x44   BL - all
  .bl_crd     umask=0x40   BL - credited
  .bl_uncrd   umask=0x4    BL - uncredited
  .iv         umask=0x8    IV

unc_cha_txr_horz_cycles_full.* -- Cycles CMS Horizontal Egress Queue is Full (event=0xa2)
Cycles the Transgress buffers in the Common Mesh Stop are full; the egress queues up requests destined for the Horizontal Ring on the Mesh. Same subevent breakdown as event 0xa7 above (.ad_all 0x11, .ad_crd 0x10, .ad_uncrd 0x1, .ak 0x2, .akc_uncrd 0x80, .bl_all 0x44, .bl_crd 0x40, .bl_uncrd 0x4, .iv 0x8).

unc_cha_txr_horz_cycles_ne.* -- Cycles CMS Horizontal Egress Queue is Not Empty (event=0xa3)
Cycles the Transgress buffers in the Common Mesh Stop are not empty. Same subevent breakdown as event 0xa7 above.
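Both cycle-counting families above are plain cycle counters, so dividing by total uncore cycles turns them into a back-pressure ratio. A sketch assuming a clockticks total is available from the same CHA (the clockticks event is part of the wider event list, not this section; the counts here are placeholders):

    def queue_pressure(cycles_full: int, cycles_ne: int, clockticks: int):
        """Fractions of uncore cycles the Horizontal Egress queue was full / busy."""
        if clockticks == 0:
            return (0.0, 0.0)
        return (cycles_full / clockticks, cycles_ne / clockticks)

    # Example: full 5% of cycles, non-empty 40% of cycles
    print(queue_pressure(50_000_000, 400_000_000, 1_000_000_000))  # (0.05, 0.4)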
unc_cha_txr_horz_inserts.* -- CMS Horizontal Egress Inserts (event=0xa1)
Number of allocations into the Transgress buffers in the Common Mesh Stop; the egress queues up requests destined for the Horizontal Ring on the Mesh. Same subevent breakdown as event 0xa7 above (.ad_all 0x11, .ad_crd 0x10, .ad_uncrd 0x1, .ak 0x2, .akc_uncrd 0x80, .bl_all 0x44, .bl_crd 0x40, .bl_uncrd 0x4, .iv 0x8).

unc_cha_txr_horz_nack.* -- CMS Horizontal Egress NACKs (event=0xa4)
Counts Egress packets NACK'ed on to the Horizontal Ring. Same subevent breakdown as event 0xa7 above.

unc_cha_txr_horz_occupancy.* -- CMS Horizontal Egress Occupancy (event=0xa0)
Occupancy event for the Transgress buffers in the Common Mesh Stop. Same subevent breakdown as event 0xa7 above.

unc_cha_txr_horz_starved.* -- CMS Horizontal Egress Injection Starvation (event=0xa5)
Counts injection starvation, triggered when the CMS Transgress buffer cannot send a transaction onto the Horizontal ring for a long period of time. (In the raw table .ad_all/.ad_uncrd share umask 0x1 and .bl_all/.bl_uncrd share umask 0x4.)

  .ad_all     umask=0x1    AD - all
  .ad_uncrd   umask=0x1    AD - uncredited
  .ak         umask=0x2    AK
  .akc_uncrd  umask=0x80   AKC - uncredited
  .bl_all     umask=0x4    BL - all
  .bl_uncrd   umask=0x4    BL - uncredited
  .iv         umask=0x8    IV
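The inserts, occupancy, and cycles-not-empty families combine naturally: occupancy over inserts approximates per-packet queue residency (Little's law again), and occupancy over non-empty cycles approximates queue depth while busy. A sketch with placeholder counts:

    def avg_residency_cycles(occupancy: int, inserts: int) -> float:
        """Average cycles a packet spends queued in the Horizontal Egress."""
        return occupancy / inserts if inserts else 0.0

    def avg_depth_when_busy(occupancy: int, cycles_ne: int) -> float:
        """Average queued packets, over cycles the queue was non-empty."""
        return occupancy / cycles_ne if cycles_ne else 0.0

    # AD-all example: 240M occupancy, 40M inserts, 80M non-empty cycles
    print(avg_residency_cycles(240_000_000, 40_000_000))  # 6.0 cycles/packet
    print(avg_depth_when_busy(240_000_000, 80_000_000))   # 3.0 entries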
Vertical Egress, broken down by ring type and CMS Agentunc_cha_txr_vert_bypass.bl_ag1uncore cacheCMS Vertical ADS Used : BL - Agent 1event=0x9d,umask=0x4001CMS Vertical ADS Used : BL - Agent 1 : Number of packets bypassing the Vertical Egress, broken down by ring type and CMS Agentunc_cha_txr_vert_bypass.iv_ag1uncore cacheCMS Vertical ADS Used : IV - Agent 1event=0x9d,umask=801CMS Vertical ADS Used : IV - Agent 1 : Number of packets bypassing the Vertical Egress, broken down by ring type and CMS Agentunc_cha_txr_vert_bypass_1.akc_ag0uncore cacheCMS Vertical ADS Used : AKC - Agent 0event=0x9e,umask=101CMS Vertical ADS Used : AKC - Agent 0 : Number of packets bypassing the Vertical Egress, broken down by ring type and CMS Agentunc_cha_txr_vert_bypass_1.akc_ag1uncore cacheCMS Vertical ADS Used : AKC - Agent 1event=0x9e,umask=201CMS Vertical ADS Used : AKC - Agent 1 : Number of packets bypassing the Vertical Egress, broken down by ring type and CMS Agentunc_cha_txr_vert_cycles_full0.ad_ag0uncore cacheCycles CMS Vertical Egress Queue Is Full : AD - Agent 0event=0x94,umask=101Cycles CMS Vertical Egress Queue Is Full : AD - Agent 0 : Number of cycles the Common Mesh Stop Egress was Not Full.  The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the AD ring.  Some example include outbound requests, snoop requests, and snoop responsesunc_cha_txr_vert_cycles_full0.ad_ag1uncore cacheCycles CMS Vertical Egress Queue Is Full : AD - Agent 1event=0x94,umask=0x1001Cycles CMS Vertical Egress Queue Is Full : AD - Agent 1 : Number of cycles the Common Mesh Stop Egress was Not Full.  The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 1 destined for the AD ring.  This is commonly used for outbound requestsunc_cha_txr_vert_cycles_full0.ak_ag0uncore cacheCycles CMS Vertical Egress Queue Is Full : AK - Agent 0event=0x94,umask=201Cycles CMS Vertical Egress Queue Is Full : AK - Agent 0 : Number of cycles the Common Mesh Stop Egress was Not Full.  The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the AK ring.  This is commonly used for credit returns and GO responsesunc_cha_txr_vert_cycles_full0.ak_ag1uncore cacheCycles CMS Vertical Egress Queue Is Full : AK - Agent 1event=0x94,umask=0x2001Cycles CMS Vertical Egress Queue Is Full : AK - Agent 1 : Number of cycles the Common Mesh Stop Egress was Not Full.  The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 1 destined for the AK ringunc_cha_txr_vert_cycles_full0.bl_ag0uncore cacheCycles CMS Vertical Egress Queue Is Full : BL - Agent 0event=0x94,umask=401Cycles CMS Vertical Egress Queue Is Full : BL - Agent 0 : Number of cycles the Common Mesh Stop Egress was Not Full.  The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the BL ring.  This is commonly used to send data from the cache to various destinationsunc_cha_txr_vert_cycles_full0.bl_ag1uncore cacheCycles CMS Vertical Egress Queue Is Full : BL - Agent 1event=0x94,umask=0x4001Cycles CMS Vertical Egress Queue Is Full : BL - Agent 1 : Number of cycles the Common Mesh Stop Egress was Not Full.  The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 1 destined for the BL ring.  
This is commonly used for transferring writeback data to the cacheunc_cha_txr_vert_cycles_full0.iv_ag0uncore cacheCycles CMS Vertical Egress Queue Is Full : IV - Agent 0event=0x94,umask=801Cycles CMS Vertical Egress Queue Is Full : IV - Agent 0 : Number of cycles the Common Mesh Stop Egress was Not Full.  The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the IV ring.  This is commonly used for snoops to the coresunc_cha_txr_vert_cycles_full1.akc_ag0uncore cacheCycles CMS Vertical Egress Queue Is Full : AKC - Agent 0event=0x95,umask=101Cycles CMS Vertical Egress Queue Is Full : AKC - Agent 0 : Number of cycles the Common Mesh Stop Egress was Not Full.  The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the AD ring.  Some example include outbound requests, snoop requests, and snoop responsesunc_cha_txr_vert_cycles_full1.akc_ag1uncore cacheCycles CMS Vertical Egress Queue Is Full : AKC - Agent 1event=0x95,umask=201Cycles CMS Vertical Egress Queue Is Full : AKC - Agent 1 : Number of cycles the Common Mesh Stop Egress was Not Full.  The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the AK ring.  This is commonly used for credit returns and GO responsesunc_cha_txr_vert_cycles_ne0.ad_ag0uncore cacheCycles CMS Vertical Egress Queue Is Not Empty : AD - Agent 0event=0x96,umask=101Cycles CMS Vertical Egress Queue Is Not Empty : AD - Agent 0 : Number of cycles the Common Mesh Stop Egress was Not Empty.  The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the AD ring.  Some example include outbound requests, snoop requests, and snoop responsesunc_cha_txr_vert_cycles_ne0.ad_ag1uncore cacheCycles CMS Vertical Egress Queue Is Not Empty : AD - Agent 1event=0x96,umask=0x1001Cycles CMS Vertical Egress Queue Is Not Empty : AD - Agent 1 : Number of cycles the Common Mesh Stop Egress was Not Empty.  The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 1 destined for the AD ring.  This is commonly used for outbound requestsunc_cha_txr_vert_cycles_ne0.ak_ag0uncore cacheCycles CMS Vertical Egress Queue Is Not Empty : AK - Agent 0event=0x96,umask=201Cycles CMS Vertical Egress Queue Is Not Empty : AK - Agent 0 : Number of cycles the Common Mesh Stop Egress was Not Empty.  The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the AK ring.  This is commonly used for credit returns and GO responsesunc_cha_txr_vert_cycles_ne0.ak_ag1uncore cacheCycles CMS Vertical Egress Queue Is Not Empty : AK - Agent 1event=0x96,umask=0x2001Cycles CMS Vertical Egress Queue Is Not Empty : AK - Agent 1 : Number of cycles the Common Mesh Stop Egress was Not Empty.  The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 1 destined for the AK ringunc_cha_txr_vert_cycles_ne0.bl_ag0uncore cacheCycles CMS Vertical Egress Queue Is Not Empty : BL - Agent 0event=0x96,umask=401Cycles CMS Vertical Egress Queue Is Not Empty : BL - Agent 0 : Number of cycles the Common Mesh Stop Egress was Not Empty.  The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the BL ring.  
This is commonly used to send data from the cache to various destinationsunc_cha_txr_vert_cycles_ne0.bl_ag1uncore cacheCycles CMS Vertical Egress Queue Is Not Empty : BL - Agent 1event=0x96,umask=0x4001Cycles CMS Vertical Egress Queue Is Not Empty : BL - Agent 1 : Number of cycles the Common Mesh Stop Egress was Not Empty.  The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 1 destined for the BL ring.  This is commonly used for transferring writeback data to the cacheunc_cha_txr_vert_cycles_ne0.iv_ag0uncore cacheCycles CMS Vertical Egress Queue Is Not Empty : IV - Agent 0event=0x96,umask=801Cycles CMS Vertical Egress Queue Is Not Empty : IV - Agent 0 : Number of cycles the Common Mesh Stop Egress was Not Empty.  The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the IV ring.  This is commonly used for snoops to the coresunc_cha_txr_vert_cycles_ne1.akc_ag0uncore cacheCycles CMS Vertical Egress Queue Is Not Empty : AKC - Agent 0event=0x97,umask=101Cycles CMS Vertical Egress Queue Is Not Empty : AKC - Agent 0 : Number of cycles the Common Mesh Stop Egress was Not Empty.  The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the AD ring.  Some example include outbound requests, snoop requests, and snoop responsesunc_cha_txr_vert_cycles_ne1.akc_ag1uncore cacheCycles CMS Vertical Egress Queue Is Not Empty : AKC - Agent 1event=0x97,umask=201Cycles CMS Vertical Egress Queue Is Not Empty : AKC - Agent 1 : Number of cycles the Common Mesh Stop Egress was Not Empty.  The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the AK ring.  This is commonly used for credit returns and GO responsesunc_cha_txr_vert_inserts0.ad_ag0uncore cacheCMS Vert Egress Allocations : AD - Agent 0event=0x92,umask=101CMS Vert Egress Allocations : AD - Agent 0 : Number of allocations into the Common Mesh Stop Egress.  The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the AD ring.  Some example include outbound requests, snoop requests, and snoop responsesunc_cha_txr_vert_inserts0.ad_ag1uncore cacheCMS Vert Egress Allocations : AD - Agent 1event=0x92,umask=0x1001CMS Vert Egress Allocations : AD - Agent 1 : Number of allocations into the Common Mesh Stop Egress.  The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 1 destined for the AD ring.  This is commonly used for outbound requestsunc_cha_txr_vert_inserts0.ak_ag0uncore cacheCMS Vert Egress Allocations : AK - Agent 0event=0x92,umask=201CMS Vert Egress Allocations : AK - Agent 0 : Number of allocations into the Common Mesh Stop Egress.  The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the AK ring.  This is commonly used for credit returns and GO responsesunc_cha_txr_vert_inserts0.ak_ag1uncore cacheCMS Vert Egress Allocations : AK - Agent 1event=0x92,umask=0x2001CMS Vert Egress Allocations : AK - Agent 1 : Number of allocations into the Common Mesh Stop Egress.  The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. 
: Ring transactions from Agent 1 destined for the AK ringunc_cha_txr_vert_inserts0.bl_ag0uncore cacheCMS Vert Egress Allocations : BL - Agent 0event=0x92,umask=401CMS Vert Egress Allocations : BL - Agent 0 : Number of allocations into the Common Mesh Stop Egress.  The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the BL ring.  This is commonly used to send data from the cache to various destinationsunc_cha_txr_vert_inserts0.bl_ag1uncore cacheCMS Vert Egress Allocations : BL - Agent 1event=0x92,umask=0x4001CMS Vert Egress Allocations : BL - Agent 1 : Number of allocations into the Common Mesh Stop Egress.  The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 1 destined for the BL ring.  This is commonly used for transferring writeback data to the cacheunc_cha_txr_vert_inserts0.iv_ag0uncore cacheCMS Vert Egress Allocations : IV - Agent 0event=0x92,umask=801CMS Vert Egress Allocations : IV - Agent 0 : Number of allocations into the Common Mesh Stop Egress.  The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the IV ring.  This is commonly used for snoops to the coresunc_cha_txr_vert_inserts1.akc_ag0uncore cacheCMS Vert Egress Allocations : AKC - Agent 0event=0x93,umask=101CMS Vert Egress Allocations : AKC - Agent 0 : Number of allocations into the Common Mesh Stop Egress.  The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the AD ring.  Some example include outbound requests, snoop requests, and snoop responsesunc_cha_txr_vert_inserts1.akc_ag1uncore cacheCMS Vert Egress Allocations : AKC - Agent 1event=0x93,umask=201CMS Vert Egress Allocations : AKC - Agent 1 : Number of allocations into the Common Mesh Stop Egress.  The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the AK ring.  
unc_cha_txr_vert_nack0 (uncore cache, event=0x98) -- CMS Vertical Egress NACKs: counts the number of Egress packets NACK'ed onto the Vertical Ring.
  .ad_ag0 (umask=1) : AD - Agent 0
  .ad_ag1 (umask=0x10) : AD - Agent 1
  .ak_ag0 (umask=2) : AK - Agent 0
  .ak_ag1 (umask=0x20) : AK - Agent 1
  .bl_ag0 (umask=4) : BL - Agent 0
  .bl_ag1 (umask=0x40) : BL - Agent 1
  .iv_ag0 (umask=8) : IV

unc_cha_txr_vert_nack1 (uncore cache, event=0x99) -- CMS Vertical Egress NACKs: counts the number of Egress packets NACK'ed onto the Vertical Ring.
  .akc_ag0 (umask=1) : AKC - Agent 0
  .akc_ag1 (umask=2) : AKC - Agent 1

unc_cha_txr_vert_occupancy0 (uncore cache, event=0x90) -- CMS Vert Egress Occupancy: occupancy event for the Egress buffers in the Common Mesh Stop. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh.
  .ad_ag0 (umask=1) : AD - Agent 0 : ring transactions from Agent 0 destined for the AD ring; some examples include outbound requests, snoop requests, and snoop responses.
  .ad_ag1 (umask=0x10) : AD - Agent 1 : ring transactions from Agent 1 destined for the AD ring; commonly used for outbound requests.
  .ak_ag0 (umask=2) : AK - Agent 0 : ring transactions from Agent 0 destined for the AK ring; commonly used for credit returns and GO responses.
  .ak_ag1 (umask=0x20) : AK - Agent 1 : ring transactions from Agent 1 destined for the AK ring.
  .bl_ag0 (umask=4) : BL - Agent 0 : ring transactions from Agent 0 destined for the BL ring; commonly used to send data from the cache to various destinations.
  .bl_ag1 (umask=0x40) : BL - Agent 1 : ring transactions from Agent 1 destined for the BL ring; commonly used for transferring writeback data to the cache.
  .iv_ag0 (umask=8) : IV - Agent 0 : ring transactions from Agent 0 destined for the IV ring; commonly used for snoops to the cores.

unc_cha_txr_vert_occupancy1 (uncore cache, event=0x91) -- CMS Vert Egress Occupancy (AKC queues): occupancy event for the Egress buffers in the Common Mesh Stop. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh.
  .akc_ag0 (umask=1) : AKC - Agent 0 : ring transactions from Agent 0 destined for the AD ring; some examples include outbound requests, snoop requests, and snoop responses.
  .akc_ag1 (umask=2) : AKC - Agent 1 : ring transactions from Agent 1 destined for the AK ring; commonly used for credit returns and GO responses.

unc_cha_txr_vert_starved0 (uncore cache, event=0x9a) -- CMS Vertical Egress Injection Starvation: counts injection starvation, triggered when the CMS Egress cannot send a transaction onto the Vertical ring for a long period of time.
  .ad_ag0 (umask=1) : AD - Agent 0
  .ad_ag1 (umask=0x10) : AD - Agent 1
  .ak_ag0 (umask=2) : AK - Agent 0
  .ak_ag1 (umask=0x20) : AK - Agent 1
  .bl_ag0 (umask=4) : BL - Agent 0
  .bl_ag1 (umask=0x40) : BL - Agent 1
  .iv_ag0 (umask=8) : IV

unc_cha_txr_vert_starved1 (uncore cache, event=0x9b) -- CMS Vertical Egress Injection Starvation: counts injection starvation, triggered when the CMS Egress cannot send a transaction onto the Vertical ring for a long period of time.
  .akc_ag0 (umask=1) : AKC - Agent 0
  .akc_ag1 (umask=2) : AKC - Agent 1
  .tgc (umask=4) : TGC
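The allocation (INSERTS) and OCCUPANCY events above pair naturally: dividing accumulated occupancy by clockticks gives average queue depth, and dividing it by allocations gives average residency (Little's law). The sketch below is purely illustrative, with made-up sample counts rather than measured data.

    # Illustrative only: average Egress queue depth and residency derived from
    # the occupancy/allocation events above (Little's law). Counts are invented.
    occupancy_sum = 1_200_000  # e.g. unc_cha_txr_vert_occupancy0.ad_ag0, summed per cycle
    allocations   = 40_000     # e.g. unc_cha_txr_vert_inserts0.ad_ag0
    clockticks    = 2_000_000  # uncore cycles over the same window

    avg_depth = occupancy_sum / clockticks        # average entries resident per cycle
    avg_residency = occupancy_sum / allocations   # average cycles an entry stays queued
    print(f"depth={avg_depth:.3f} entries, residency={avg_residency:.1f} cycles")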
unc_cha_vert_ring_ad_in_use (uncore cache, event=0xb0) -- Vertical AD Ring In Use: counts the number of cycles that the Vertical AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, CBo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.
  .up_even (umask=1) : Up and Even
  .up_odd (umask=2) : Up and Odd
  .dn_even (umask=4) : Down and Even
  .dn_odd (umask=8) : Down and Odd

unc_cha_vert_ring_akc_in_use (uncore cache, event=0xb4) -- Vertical AKC Ring In Use: counts the number of cycles that the Vertical AKC ring is being used at this ring stop, with the same passing-by/sunk accounting and the same two-ring (clockwise/counter-clockwise) layout described for the Vertical AD ring above.
  .up_even (umask=1) : Up and Even
  .up_odd (umask=2) : Up and Odd
  .dn_even (umask=4) : Down and Even
  .dn_odd (umask=8) : Down and Odd

unc_cha_vert_ring_ak_in_use (uncore cache, event=0xb1) -- Vertical AK Ring In Use: counts the number of cycles that the Vertical AK ring is being used at this ring stop, with the same accounting and two-ring layout as above.
  .up_even (umask=1) : Up and Even
  .up_odd (umask=2) : Up and Odd
  .dn_even (umask=4) : Down and Even
  .dn_odd (umask=8) : Down and Odd

unc_cha_vert_ring_bl_in_use (uncore cache, event=0xb2) -- Vertical BL Ring in Use: counts the number of cycles that the Vertical BL ring is being used at this ring stop, with the same accounting and two-ring layout as above.
  .up_even (umask=1) : Up and Even
  .up_odd (umask=2) : Up and Odd
  .dn_even (umask=4) : Down and Even
  .dn_odd (umask=8) : Down and Odd

unc_cha_vert_ring_iv_in_use (uncore cache, event=0xb3) -- Vertical IV Ring in Use: counts the number of cycles that the Vertical IV ring is being used at this ring stop, with the same passing-by/sunk accounting as above. There is only 1 IV ring. Therefore, to monitor the Even ring, select both UP_EVEN and DN_EVEN; to monitor the Odd ring, select both UP_ODD and DN_ODD.
  .up (umask=1) : Up
  .dn (umask=4) : Down

unc_cha_vert_ring_tgc_in_use (uncore cache, event=0xb5) -- Vertical TGC Ring In Use: counts the number of cycles that the Vertical TGC ring is being used at this ring stop, with the same accounting and two-ring layout as above.
  .up_even (umask=1) : Up and Even
  .up_odd (umask=2) : Up and Odd
  .dn_even (umask=4) : Down and Even
  .dn_odd (umask=8) : Down and Odd
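Because the in-use events count busy cycles per direction at one ring stop, a rough utilization figure is the busy count divided by clockticks; per the IV note above, UP and DN must be combined to cover the whole IV ring. A hypothetical sketch with invented counts:

    # Hypothetical sketch: per-direction and combined Vertical IV ring
    # utilization at one ring stop. Sample counts are invented.
    iv_up = 150_000        # unc_cha_vert_ring_iv_in_use.up (event=0xb3,umask=1)
    iv_dn = 120_000        # unc_cha_vert_ring_iv_in_use.dn (event=0xb3,umask=4)
    clockticks = 2_000_000

    up_util = iv_up / clockticks
    dn_util = iv_dn / clockticks
    # The two directions are independent slots, so their sum can exceed 1.0.
    print(f"up={up_util:.1%} dn={dn_util:.1%} combined={up_util + dn_util:.1%}")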
unc_cha_write_no_credits (uncore cache, event=0x5a) -- CHA iMC CHNx WRITE Credits Empty: counts the number of times when there are no credits available for sending WRITEs from the CHA into the iMC. In order to send WRITEs into the memory controller, the HA must first acquire a credit for the iMC's BL Ingress queue.
  .mc6 (umask=0x40) : filter for memory controller 6 only
  .mc7 (umask=0x80) : filter for memory controller 7 only
  .mc8, .mc9, .mc10, .mc11, .mc12, .mc13 (no umask recorded) : filters for memory controllers 8-13, respectively

unc_i_cache_total_occupancy (uncore interconnect, event=0xf) -- Total Write Cache Occupancy: accumulates the number of reads and writes that are outstanding in the uncore in each cycle; effectively the sum of the READ_OCCUPANCY and WRITE_OCCUPANCY events.
  .any (umask=1) : Any Source : tracks all requests from any source port.
  .iv_q (umask=2) : Snoops.

unc_i_clockticks (uncore interconnect, event=1) -- Clockticks of the IO coherency tracker (IRP).

unc_i_coherent_ops (uncore interconnect, event=0x10) -- Coherent Ops: counts the number of coherency related operations serviced by the IRP.
  .wbmtoi (umask=0x40) : WbMtoI
  .clflush (umask=0x80) : CLFlush

unc_i_p2p_inserts (uncore interconnect, event=0x14) -- P2P Requests: P2P requests from the ITC.

unc_i_p2p_occupancy (uncore interconnect, event=0x15) -- P2P Occupancy: P2P B & S Queue Occupancy.

unc_i_p2p_transactions (uncore interconnect, event=0x13) -- P2P Transactions:
  .rd (umask=1) : P2P reads
  .wr (umask=2) : P2P writes
  .msg (umask=4) : P2P message
  .cmpl (umask=8) : P2P completions
  .rem (umask=0x10) : match if remote only
  .rem_and_tgt_match (umask=0x20) : match if remote and target matches
  .loc (umask=0x40) : match if local only
  .loc_and_tgt_match (umask=0x80) : match if local and target matches

unc_i_transactions (uncore interconnect, event=0x11) -- Inbound Transaction Count: counts the number of inbound transactions from the IRP to the Uncore. This can be filtered based on request type in addition to the source queue. Note the special filtering equation: we do OR-reduction on the request type, and if the SOURCE bit is set, we also do AND qualification based on the source portID.
  .writes (umask=2) : tracks only write requests. Each write request should have a prefetch, so there is no need to explicitly track these requests. For writes that are tickled and have to retry, the counter will be incremented for each retry.
  .atomic (umask=0x10) : tracks the number of atomic transactions.
  .other (umask=0x20) : tracks the number of 'other' kinds of transactions.

unc_i_txr2_ad01_stall_credit_cycles (uncore interconnect, event=0x1c) -- UNC_I_TxR2_AD01_STALL_CREDIT_CYCLES: counts the number of times when it is not possible to issue a request to the M2PCIe because there are no Egress credits available on AD0, AD1, or both. Stalls on both AD0 and AD1 will count as 2.
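Per the filtering note above, the request-type umask bits of the inbound-transaction event are OR-reduced, so several types can share one counter by OR-ing their umasks. A small illustrative computation (constant names are ours, not from the source):

    # Illustrative: OR-reduce several request-type umask bits of
    # unc_i_transactions (event=0x11) into a single counter selector.
    UMASK_WRITES = 0x02
    UMASK_ATOMIC = 0x10
    UMASK_OTHER  = 0x20

    combined = UMASK_WRITES | UMASK_ATOMIC | UMASK_OTHER  # 0x32
    print(f"event=0x11,umask={combined:#x}")  # -> event=0x11,umask=0x32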
The CMS credit events below come in acquired/occupancy pairs per agent and message class, all in the uncore interconnect unit, with a uniform description: number of CMS Agent 0 (or 1) AD (or BL) credits acquired (or in use) in a given cycle, per transgress. Transgresses 0-7 are selected on the first event code of each pair with one-hot umasks (1, 2, 4, 8, 0x10, 0x20, 0x40, 0x80); transgresses 8-10 on the following event code with umasks 1, 2, 4.

unc_m2m_ag0_ad_crd_acquired0 (event=0x80) -- CMS Agent0 AD Credits Acquired, .tgr0-.tgr7
unc_m2m_ag0_ad_crd_acquired1 (event=0x81) -- CMS Agent0 AD Credits Acquired, .tgr8-.tgr10
unc_m2m_ag0_ad_crd_occupancy0 (event=0x82) -- CMS Agent0 AD Credits Occupancy, .tgr0-.tgr7
unc_m2m_ag0_ad_crd_occupancy1 (event=0x83) -- CMS Agent0 AD Credits Occupancy, .tgr8-.tgr10
unc_m2m_ag0_bl_crd_acquired0 (event=0x88) -- CMS Agent0 BL Credits Acquired, .tgr0-.tgr7
unc_m2m_ag0_bl_crd_acquired1 (event=0x89) -- CMS Agent0 BL Credits Acquired, .tgr8-.tgr10
unc_m2m_ag0_bl_crd_occupancy0 (event=0x8a) -- CMS Agent0 BL Credits Occupancy, .tgr0-.tgr7
unc_m2m_ag0_bl_crd_occupancy1 (event=0x8b) -- CMS Agent0 BL Credits Occupancy, .tgr8-.tgr10
unc_m2m_ag1_ad_crd_acquired0 (event=0x84) -- CMS Agent1 AD Credits Acquired, .tgr0-.tgr7
unc_m2m_ag1_ad_crd_acquired1 (event=0x85) -- CMS Agent1 AD Credits Acquired, .tgr8-.tgr10
unc_m2m_ag1_ad_crd_occupancy0 (event=0x86) -- CMS Agent1 AD Credits Occupancy, .tgr0-.tgr7
unc_m2m_ag1_ad_crd_occupancy1 (event=0x87) -- CMS Agent1 AD Credits Occupancy, .tgr8-.tgr10
unc_m2m_ag1_bl_crd_acquired0 (event=0x8c) -- CMS Agent1 BL Credits Acquired, .tgr0-.tgr7
unc_m2m_ag1_bl_crd_acquired1 (event=0x8d) -- CMS Agent1 BL Credits Acquired, .tgr8-.tgr10
unc_m2m_ag1_bl_crd_occupancy0 (event=0x8e) -- CMS Agent1 BL Credits Occupancy, .tgr0-.tgr7
unc_m2m_ag1_bl_crd_occupancy1 (event=0x8f) -- CMS Agent1 BL Credits Occupancy, .tgr8-.tgr10
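The transgress selectors above follow a fixed pattern, so a selector can be derived rather than looked up; the helper below is a sketch of that observation only (the function name is ours, not from the source):

    # Sketch of the transgress-selector pattern above: TGR0-7 sit on the base
    # event code with umask 1 << n; TGR8-10 sit on base+1 with umask 1 << (n-8).
    def transgress_selector(base_event: int, n: int) -> str:
        if not 0 <= n <= 10:
            raise ValueError("these events only define transgresses 0-10")
        return f"event={base_event + n // 8:#x},umask={1 << (n % 8):#x}"

    # e.g. unc_m2m_ag0_ad_crd_acquired*: base event code 0x80.
    assert transgress_selector(0x80, 4) == "event=0x80,umask=0x10"
    assert transgress_selector(0x80, 10) == "event=0x81,umask=0x4"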
unc_m2m_bypass_m2m_egress: M2M to iMC Bypass.
  .taken      event=0x22,umask=1  Taken
  .not_taken  event=0x22,umask=2  Not Taken

unc_m2m_bypass_m2m_ingress: M2M to iMC Bypass.
  .taken      event=0x21,umask=1  Taken
  .not_taken  event=0x21,umask=2  Not Taken

unc_m2m_clockticks  event=0  Clockticks of the mesh to memory (M2M)

unc_m2m_direct2core_not_taken_dirstate  event=0x24  Cycles when direct-to-core mode, which bypasses the CHA, was disabled
unc_m2m_direct2core_not_taken_notforked  event=0x60  UNC_M2M_DIRECT2CORE_NOT_TAKEN_NOTFORKED
unc_m2m_direct2core_txn_override  event=0x25  Number of reads in which the direct-to-core transaction was overridden
unc_m2m_direct2upi_not_taken_credits  event=0x28  Number of reads in which direct-to-Intel-UPI transactions were overridden
unc_m2m_direct2upi_not_taken_dirstate  event=0x27  Cycles when Direct2UPI was disabled
unc_m2m_direct2upi_txn_override  event=0x29  Number of reads for which a message sent direct to Intel UPI was overridden

unc_m2m_directory_hit: Directory Hit.
  .clean_a  event=0x2a,umask=0x80  On NonDirty Line in A State
  .clean_i  event=0x2a,umask=0x10  On NonDirty Line in I State
  .clean_p  event=0x2a,umask=0x40  On NonDirty Line in L State
  .clean_s  event=0x2a,umask=0x20  On NonDirty Line in S State
  .dirty_a  event=0x2a,umask=8     On Dirty Line in A State
  .dirty_i  event=0x2a,umask=1     On Dirty Line in I State
  .dirty_p  event=0x2a,umask=4     On Dirty Line in L State
  .dirty_s  event=0x2a,umask=2     On Dirty Line in S State

unc_m2m_directory_lookup: Multi-socket cacheline Directory Lookups.
  .any      event=0x2d,umask=1  Found in any state
  .state_a  event=0x2d,umask=8  Found in A state
  .state_i  event=0x2d,umask=2  Found in I state
  .state_s  event=0x2d,umask=4  Found in S state

unc_m2m_directory_miss: Directory Miss.
  .clean_a  event=0x2b,umask=0x80  On NonDirty Line in A State
  .clean_i  event=0x2b,umask=0x10  On NonDirty Line in I State
  .clean_p  event=0x2b,umask=0x40  On NonDirty Line in L State
  .clean_s  event=0x2b,umask=0x20  On NonDirty Line in S State
  .dirty_a  event=0x2b,umask=8     On Dirty Line in A State
  .dirty_i  event=0x2b,umask=1     On Dirty Line in I State
  .dirty_p  event=0x2b,umask=4     On Dirty Line in L State
  .dirty_s  event=0x2b,umask=2     On Dirty Line in S State

unc_m2m_directory_update.any  event=0x2e,umask=1  Multi-socket cacheline Directory Updates: from/to any state. Note: event counts are incorrect in 2LM mode.

unc_m2m_distress_asserted: Distress signal asserted. Counts the number of cycles either the local or incoming distress signals are asserted.
  .vert            event=0xaf,umask=1     Vertical: if IRQ egress is full, agents will throttle outgoing AD IDI transactions
  .horz            event=0xaf,umask=2     Horizontal: if TGR egress is full, agents will throttle outgoing AD IDI transactions
  .dpt_local       event=0xaf,umask=4     DPT Local: Dynamic Prefetch Throttle triggered by this tile
  .dpt_nonlocal    event=0xaf,umask=8     DPT Remote: Dynamic Prefetch Throttle received by this tile
  .pmm_local       event=0xaf,umask=0x10  PMM Local: if the CHA TOR has too many PMM transactions, this signal throttles outgoing MS2IDI traffic
  .pmm_nonlocal    event=0xaf,umask=0x20  PMM Remote: if another CHA TOR has too many PMM transactions, this signal throttles outgoing MS2IDI traffic
  .dpt_stall_iv    event=0xaf,umask=0x40  DPT Stalled - IV: DPT occurred while regular IVs were received, causing DPT to be stalled
  .dpt_stall_nocrd event=0xaf,umask=0x80  DPT Stalled - No Credit: DPT occurred while credit was not available, causing DPT to be stalled
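Since the directory lookup events above split by the state the line was found in, state shares fall out by simple division. A small worked example against the unc_m2m_directory_lookup.* events, with made-up counter readings:

  # Share of multi-socket directory lookups that found the line in each
  # state; the counts are illustrative, not measured.
  lookups = {"any": 1_000_000, "state_a": 120_000,
             "state_i": 700_000, "state_s": 180_000}
  for state in ("state_a", "state_i", "state_s"):
      print(f"{state}: {lookups[state] / lookups['any']:.1%} of lookups")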
unc_m2m_distress_pmm  event=0xf2  UNC_M2M_DISTRESS_PMM
unc_m2m_distress_pmm_memmode  event=0xf1  UNC_M2M_DISTRESS_PMM_MEMMODE

unc_m2m_egress_ordering: Egress Blocking due to Ordering requirements. Counts the number of cycles IV was blocked in the TGR Egress due to SNP/GO ordering requirements.
  .iv_snoopgo_up  event=0xba,umask=1  Up
  .iv_snoopgo_dn  event=0xba,umask=4  Down

unc_m2m_horz_ring_ad_in_use: Horizontal AD Ring In Use. Counts the number of cycles that the Horizontal AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. There are really two rings: a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring; on the right side this is reversed. The first half of the CBos are on the left side of the ring and the second half are on the right side, so (for example) in a 4c part, CBo 0 UP AD is NOT the same ring as CBo 2 UP AD: they are on opposite sides of the ring.
  .left_even   event=0xb6,umask=1  Left and Even
  .left_odd    event=0xb6,umask=2  Left and Odd
  .right_even  event=0xb6,umask=4  Right and Even
  .right_odd   event=0xb6,umask=8  Right and Odd

unc_m2m_horz_ring_akc_in_use: Horizontal AKC Ring In Use. Counts the number of cycles that the Horizontal AKC ring is being used at this ring stop, under the same passing-by/sunk rules and the same left/right, even/odd topology described for the AD ring above.
  .left_even   event=0xbb,umask=1  Left and Even
  .left_odd    event=0xbb,umask=2  Left and Odd
  .right_even  event=0xbb,umask=4  Right and Even
  .right_odd   event=0xbb,umask=8  Right and Odd
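The ring in-use subevents use one-hot umasks (1, 2, 4, 8), so a single counter programmed with the OR of several masks counts their sum in one go. A sketch using the unc_m2m_horz_ring_ad_in_use encodings listed above:

  # One-hot umasks can be ORed to count several subevents on one counter.
  LEFT_EVEN, LEFT_ODD, RIGHT_EVEN, RIGHT_ODD = 0x1, 0x2, 0x4, 0x8
  all_stops = LEFT_EVEN | LEFT_ODD | RIGHT_EVEN | RIGHT_ODD
  print(f"event=0xb6,umask={all_stops:#x}")  # umask=0xf: whole horizontal AD ring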
unc_m2m_horz_ring_ak_in_use: Horizontal AK Ring In Use. Counts the number of cycles that the Horizontal AK ring is being used at this ring stop; same passing-by/sunk rules and ring topology as described for the AD ring above.
  .left_even   event=0xb7,umask=1  Left and Even
  .left_odd    event=0xb7,umask=2  Left and Odd
  .right_even  event=0xb7,umask=4  Right and Even
  .right_odd   event=0xb7,umask=8  Right and Odd

unc_m2m_horz_ring_bl_in_use: Horizontal BL Ring in Use. Counts the number of cycles that the Horizontal BL ring is being used at this ring stop; same passing-by/sunk rules and ring topology as described for the AD ring above.
  .left_even   event=0xb8,umask=1  Left and Even
  .left_odd    event=0xb8,umask=2  Left and Odd
  .right_even  event=0xb8,umask=4  Right and Even
  .right_odd   event=0xb8,umask=8  Right and Odd
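Because the in-use events count cycles, dividing by unc_m2m_clockticks turns them into a utilization fraction. Illustrative arithmetic only; the counter values are made up:

  # Ring utilization = in-use cycles / M2M clockticks (illustrative values).
  clockticks = 2_000_000_000
  bl_in_use = 350_000_000  # sum of the four unc_m2m_horz_ring_bl_in_use.* counts
  print(f"horizontal BL ring busy {bl_in_use / clockticks:.1%} of cycles")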
unc_m2m_horz_ring_iv_in_use: Horizontal IV Ring in Use. Counts the number of cycles that the Horizontal IV ring is being used at this ring stop (packets passing by or being sunk, not packets sent from this stop). There is only one IV ring, so to monitor the Even ring select both UP_EVEN and DN_EVEN, and to monitor the Odd ring select both UP_ODD and DN_ODD.
  .left   event=0xb9,umask=1  Left
  .right  event=0xb9,umask=4  Right

unc_m2m_imc_reads: M2M Reads Issued to iMC. The .ch0_to_pmm and .ch1_to_pmm subevents count all PMM DIMM read requests (full line) sent from M2M to iMC.
  .all                  event=0x37,umask=0x704  All, regardless of priority - All Channels
  .ch0_all              event=0x37,umask=0x104  All, regardless of priority - Ch0
  .ch0_from_tgr         event=0x37,umask=0x140  From TGR - Ch0
  .ch0_isoch            event=0x37,umask=0x102  Critical Priority - Ch0
  .ch0_normal           event=0x37,umask=0x101  Normal Priority - Ch0
  .ch0_to_ddr_as_cache  event=0x37,umask=0x110  DDR, acting as Cache - Ch0
  .ch0_to_ddr_as_mem    event=0x37,umask=0x108  DDR - Ch0
  .ch0_to_pmm           event=0x37,umask=0x120  PMM - Ch0
  .ch1_all              event=0x37,umask=0x204  All, regardless of priority - Ch1
  .ch1_from_tgr         event=0x37,umask=0x240  From TGR - Ch1
  .ch1_isoch            event=0x37,umask=0x202  Critical Priority - Ch1
  .ch1_normal           event=0x37,umask=0x201  Normal Priority - Ch1
  .ch1_to_ddr_as_cache  event=0x37,umask=0x210  DDR, acting as Cache - Ch1
  .ch1_to_ddr_as_mem    event=0x37,umask=0x208  DDR - Ch1
  .ch1_to_pmm           event=0x37,umask=0x220  PMM - Ch1
  .ch2_from_tgr         event=0x37,umask=0x440  From TGR - Ch2
  .from_tgr             event=0x37,umask=0x740  From TGR - All Channels
  .isoch                event=0x37,umask=0x702  Critical Priority - All Channels
  .normal               event=0x37,umask=0x701  Normal Priority - All Channels
  .to_ddr_as_cache      event=0x37,umask=0x710  DDR, acting as Cache - All Channels
  .to_ddr_as_mem        event=0x37,umask=0x708  DDR - All Channels
  .to_pmm               event=0x37,umask=0x720  PMM - All Channels

unc_m2m_imc_writes: M2M Writes Issued to iMC. The .ch0_to_pmm and .ch1_to_pmm subevents count all PMM DIMM write requests (full line and partial) sent from M2M to iMC.
  .all                  event=0x38,umask=0x1c10  All Writes - All Channels
  .ch0_all              event=0x38,umask=0x410   All Writes - Ch0
  .ch0_from_tgr         event=0x38               From TGR - Ch0
  .ch0_full             event=0x38,umask=0x401   Full Line Non-ISOCH - Ch0
  .ch0_full_isoch       event=0x38,umask=0x404   ISOCH Full Line - Ch0
  .ch0_ni               event=0x38               Non-Inclusive - Ch0
  .ch0_ni_miss          event=0x38               Non-Inclusive Miss - Ch0
  .ch0_partial          event=0x38,umask=0x402   Partial Non-ISOCH - Ch0
  .ch0_partial_isoch    event=0x38,umask=0x408   ISOCH Partial - Ch0
  .ch0_to_ddr_as_cache  event=0x38,umask=0x440   DDR, acting as Cache - Ch0
  .ch0_to_ddr_as_mem    event=0x38,umask=0x420   DDR - Ch0
  .ch0_to_pmm           event=0x38,umask=0x480   PMM - Ch0
  .ch1_all              event=0x38,umask=0x810   All Writes - Ch1
  .ch1_from_tgr         event=0x38               From TGR - Ch1
  .ch1_full             event=0x38,umask=0x801   Full Line Non-ISOCH - Ch1
  .ch1_full_isoch       event=0x38,umask=0x804   ISOCH Full Line - Ch1
  .ch1_ni               event=0x38               Non-Inclusive - Ch1
  .ch1_ni_miss          event=0x38               Non-Inclusive Miss - Ch1
  .ch1_partial          event=0x38,umask=0x802   Partial Non-ISOCH - Ch1
  .ch1_partial_isoch    event=0x38,umask=0x808   ISOCH Partial - Ch1
  .ch1_to_ddr_as_cache  event=0x38,umask=0x840   DDR, acting as Cache - Ch1
  .ch1_to_ddr_as_mem    event=0x38,umask=0x820   DDR - Ch1
  .ch1_to_pmm           event=0x38,umask=0x880   PMM - Ch1
  .from_tgr             event=0x38               From TGR - All Channels
  .full                 event=0x38,umask=0x1c01  Full Line Non-ISOCH - All Channels
  .full_isoch           event=0x38,umask=0x1c04  ISOCH Full Line - All Channels
  .ni                   event=0x38               Non-Inclusive - All Channels
  .ni_miss              event=0x38               Non-Inclusive Miss - All Channels
  .partial              event=0x38,umask=0x1c02  Partial Non-ISOCH - All Channels
  .partial_isoch        event=0x38,umask=0x1c08  ISOCH Partial - All Channels
  .to_ddr_as_cache      event=0x38,umask=0x1c40  DDR, acting as Cache - All Channels
  .to_ddr_as_mem        event=0x38,umask=0x1c20  DDR - All Channels
  .to_pmm               event=0x38,umask=0x1c80  PMM - All Channels

unc_m2m_mirr_wrq_inserts  event=0x64  Write Tracker Inserts
unc_m2m_mirr_wrq_occupancy  event=0x65  Write Tracker Occupancy

unc_m2m_misc_external: Miscellaneous Events (mostly from MS2IDI).
  .mbe_inst0  event=0xe6,umask=1  Number of cycles MBE is high for MS2IDI0
  .mbe_inst1  event=0xe6,umask=2  Number of cycles MBE is high for MS2IDI1

unc_m2m_pkt_match: Number Packet Header Matches.
  .mesh  event=0x4c,umask=1  Mesh Match
  .mc    event=0x4c,umask=2  MC Match

unc_m2m_prefcam_cis_drops  event=0x73  UNC_M2M_PREFCAM_CIS_DROPS

unc_m2m_prefcam_cycles_full: Prefetch CAM Cycles Full.
  .allch  event=0x6b,umask=7  All Channels
  .ch0    event=0x6b,umask=1  Channel 0
  .ch1    event=0x6b,umask=2  Channel 1
  .ch2    event=0x6b,umask=4  Channel 2

unc_m2m_prefcam_cycles_ne: Prefetch CAM Cycles Not Empty.
  .allch  event=0x6c,umask=7  All Channels
  .ch0    event=0x6c,umask=1  Channel 0
  .ch1    event=0x6c,umask=2  Channel 1
  .ch2    event=0x6c,umask=4  Channel 2
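Since the reads and writes issued to the iMC move whole cache lines, the .all subevents above give a quick bandwidth estimate at 64 bytes per event (partial writes make the write side an upper bound). A sketch with illustrative counter values:

  # Approximate bandwidth through one M2M: (reads + writes) * 64 bytes.
  # Counts and the 1-second interval are illustrative, not measured.
  CACHELINE = 64
  reads, writes, seconds = 150_000_000, 90_000_000, 1.0
  print(f"~{(reads + writes) * CACHELINE / seconds / 1e9:.2f} GB/s")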
unc_m2m_prefcam_deallocs: Prefetch CAM Deallocs.
  .ch0_hita0_inval  event=0x6e,umask=1
  .ch0_hita1_inval  event=0x6e,umask=2
  .ch0_miss_inval   event=0x6e,umask=4
  .ch0_rsp_pdreset  event=0x6e,umask=8
  .ch1_hita0_inval  event=0x6e,umask=0x10
  .ch1_hita1_inval  event=0x6e,umask=0x20
  .ch1_miss_inval   event=0x6e,umask=0x40
  .ch1_rsp_pdreset  event=0x6e,umask=0x80
  .ch2_hita0_inval  event=0x6e
  .ch2_hita1_inval  event=0x6e
  .ch2_miss_inval   event=0x6e
  .ch2_rsp_pdreset  event=0x6e

unc_m2m_prefcam_demand_drops: Data Prefetches Dropped.
  .ch0_xpt    event=0x6f,umask=1     XPT - Ch 0
  .ch0_upi    event=0x6f,umask=2     UPI - Ch 0
  .ch1_xpt    event=0x6f,umask=4     XPT - Ch 1
  .ch1_upi    event=0x6f,umask=8     UPI - Ch 1
  .ch2_xpt    event=0x6f,umask=0x10  XPT - Ch 2
  .ch2_upi    event=0x6f,umask=0x20  UPI - Ch 2
  .xpt_allch  event=0x6f,umask=0x15  XPT - All Channels
  .upi_allch  event=0x6f,umask=0x2a  UPI - All Channels

unc_m2m_prefcam_demand_merge: Demands Merged with CAMed Prefetches (XPT & UPI).
  .ch0_xptupi    event=0x74,umask=1     Ch 0
  .ch1_xptupi    event=0x74,umask=4     Ch 1
  .ch2_xptupi    event=0x74,umask=0x10  Ch 2
  .xptupi_allch  event=0x74,umask=0x15  All Channels

unc_m2m_prefcam_demand_no_merge: Demands Not Merged with CAMed Prefetches (XPT & UPI).
  .ch0_xptupi    event=0x75,umask=1     Ch 0
  .ch1_xptupi    event=0x75,umask=4     Ch 1
  .ch2_xptupi    event=0x75,umask=0x10  Ch 2
  .xptupi_allch  event=0x75,umask=0x15  All Channels

unc_m2m_prefcam_drop_reasons_ch0: Data Prefetches Dropped Ch0 - Reasons.
  .errorblk_rxc       event=0x70,umask=0x10
  .not_pf_sad_region  event=0x70,umask=2
  .pf_ad_crd          event=0x70,umask=0x20
  .pf_cam_full        event=0x70,umask=0x40
  .pf_cam_hit         event=0x70,umask=4
  .pf_secure_drop     event=0x70,umask=1
  .rpq_proxy          event=0x70
  .stop_b2b           event=0x70,umask=8
  .upi_thresh         event=0x70
  .wpq_proxy          event=0x70,umask=0x80
  .xpt_thresh         event=0x70

unc_m2m_prefcam_drop_reasons_ch1: Data Prefetches Dropped Ch1 - Reasons.
  .errorblk_rxc       event=0x71,umask=0x10
  .not_pf_sad_region  event=0x71,umask=2
  .pf_ad_crd          event=0x71,umask=0x20
  .pf_cam_full        event=0x71,umask=0x40
  .pf_cam_hit         event=0x71,umask=4
  .pf_secure_drop     event=0x71,umask=1
  .rpq_proxy          event=0x71
  .stop_b2b           event=0x71,umask=8
  .upi_thresh         event=0x71
  .wpq_proxy          event=0x71,umask=0x80
  .xpt_thresh         event=0x71

unc_m2m_prefcam_drop_reasons_ch2: Data Prefetches Dropped Ch2 - Reasons.
  .errorblk_rxc       event=0x72,umask=0x10
  .not_pf_sad_region  event=0x72,umask=2
  .pf_ad_crd          event=0x72,umask=0x20
  .pf_cam_full        event=0x72,umask=0x40
  .pf_cam_hit         event=0x72,umask=4
  .pf_secure_drop     event=0x72,umask=1
  .rpq_proxy          event=0x72
  .stop_b2b           event=0x72,umask=8
  .upi_thresh         event=0x72
  .wpq_proxy          event=0x72,umask=0x80
  .xpt_thresh         event=0x72
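The merge/no-merge pair above supports a simple ratio: the share of demands that merged with an already-CAMed prefetch. Illustrative counter values:

  # Share of demands that merged with a CAMed prefetch (XPT & UPI,
  # all channels); the counts are illustrative.
  merged, not_merged = 30_000_000, 10_000_000
  print(f"demand merge rate: {merged / (merged + not_merged):.1%}")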
unc_m2m_prefcam_inserts: Prefetch CAM Inserts.
  .ch0_xpt    event=0x6d,umask=1     XPT - Ch 0
  .ch0_upi    event=0x6d,umask=2     UPI - Ch 0
  .ch1_xpt    event=0x6d,umask=4     XPT - Ch 1
  .ch1_upi    event=0x6d,umask=8     UPI - Ch 1
  .ch2_xpt    event=0x6d,umask=0x10  XPT - Ch 2
  .ch2_upi    event=0x6d,umask=0x20  UPI - Ch 2
  .xpt_allch  event=0x6d,umask=0x15  XPT - All Channels
  .upi_allch  event=0x6d,umask=0x2a  UPI - All Channels

unc_m2m_prefcam_occupancy: Prefetch CAM Occupancy.
  .allch  event=0x6a,umask=7  All Channels
  .ch0    event=0x6a,umask=1  Channel 0
  .ch1    event=0x6a,umask=2  Channel 1
  .ch2    event=0x6a,umask=4  Channel 2

unc_m2m_prefcam_resp_miss:
  .allch  event=0x76,umask=7  All Channels
  .ch0    event=0x76,umask=1  Channel 0
  .ch1    event=0x76,umask=2  Channel 1
  .ch2    event=0x76,umask=4  Channel 2

unc_m2m_prefcam_rxc_cycles_ne  event=0x79  UNC_M2M_PREFCAM_RxC_CYCLES_NE

unc_m2m_prefcam_rxc_deallocs: UNC_M2M_PREFCAM_RxC_DEALLOCS.
  .squashed            event=0x7a,umask=1  SQUASHED
  .1lm_posted          event=0x7a,umask=2  1LM_POSTED
  .pmm_memmode_accept  event=0x7a,umask=4  PMM_MEMMODE_ACCEPT
  .cis                 event=0x7a,umask=8  CIS

unc_m2m_prefcam_rxc_inserts  event=0x78  UNC_M2M_PREFCAM_RxC_INSERTS
unc_m2m_prefcam_rxc_occupancy  event=0x77  UNC_M2M_PREFCAM_RxC_OCCUPANCY

unc_m2m_ring_bounces_horz: Messages that bounced on the Horizontal Ring. Number of cycles incoming messages from the Horizontal ring were bounced, by ring type.
  .ad  event=0xac,umask=1  AD
  .ak  event=0xac,umask=2  AK
  .bl  event=0xac,umask=4  BL
  .iv  event=0xac,umask=8  IV

unc_m2m_ring_bounces_vert: Messages that bounced on the Vertical Ring. Number of cycles incoming messages from the Vertical ring were bounced, by ring type.
  .ad   event=0xaa,umask=1     AD
  .ak   event=0xaa,umask=2     Acknowledgements to core
  .bl   event=0xaa,umask=4     Data Responses to core
  .iv   event=0xaa,umask=8     Snoops of processor's cache
  .akc  event=0xaa,umask=0x10  AKC
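Any of the encodings above can be fed to the perf CLI directly. A sketch reading two of the ring-bounce events; the PMU name is an assumption here, since on a multi-M2M part the units appear as uncore_m2m_0, uncore_m2m_1, and so on, so adjust to whatever /sys/bus/event_source/devices lists:

  # Count horizontal AD and BL ring bounces system-wide for one second.
  # "uncore_m2m_0" is an assumed PMU instance name.
  import subprocess
  events = ["uncore_m2m_0/event=0xac,umask=0x1/",  # ring_bounces_horz.ad
            "uncore_m2m_0/event=0xac,umask=0x4/"]  # ring_bounces_horz.bl
  subprocess.run(["perf", "stat", "-a", "-e", ",".join(events), "sleep", "1"])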
unc_m2m_ring_sink_starved_horz: Sink Starvation on Horizontal Ring.
  .ad      event=0xad,umask=1     AD
  .ak      event=0xad,umask=2     AK
  .bl      event=0xad,umask=4     BL
  .iv      event=0xad,umask=8     IV
  .ak_ag1  event=0xad,umask=0x20  Acknowledgements to Agent 1

unc_m2m_ring_sink_starved_vert: Sink Starvation on Vertical Ring.
  .ad   event=0xab,umask=1     AD
  .ak   event=0xab,umask=2     Acknowledgements to core
  .bl   event=0xab,umask=4     Data Responses to core
  .iv   event=0xab,umask=8     Snoops of processor's cache
  .akc  event=0xab,umask=0x10  AKC

unc_m2m_ring_src_thrtl  event=0xae  Source Throttle

unc_m2m_rpq_no_reg_crd: M2M to iMC RPQ Cycles w/Credits - Regular.
  .ch0  event=0x43,umask=1  Channel 0
  .ch1  event=0x43,umask=2  Channel 1
  .ch2  event=0x43,umask=4  Channel 2

unc_m2m_rpq_no_reg_crd_pmm: M2M to iMC RPQ Cycles w/Credits - PMM.
  .chn0  event=0x4f,umask=1  Channel 0
  .chn1  event=0x4f,umask=2  Channel 1
  .chn2  event=0x4f,umask=4  Channel 2

unc_m2m_rpq_no_spec_crd: M2M to iMC RPQ Cycles w/Credits - Special.
  .ch0  event=0x44,umask=1  Channel 0
  .ch1  event=0x44,umask=2  Channel 1
  .ch2  event=0x44,umask=4  Channel 2

unc_m2m_rxc_ad_inserts  event=1  AD Ingress (from CMS) Allocations
unc_m2m_rxc_ad_pref_occupancy  event=0x77  AD Ingress (from CMS) Occupancy - Prefetches
unc_m2m_rxc_ak_wr_cmp  event=0x5c  AK Egress (to CMS) Allocations

unc_m2m_rxr_busy_starved: Transgress Injection Starvation. Counts cycles under injection starvation mode; this starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time, in this case because a message from the other queue has higher priority. All == Credited + Uncredited.
  .ad_all    event=0xe5,umask=0x11  AD - All
  .ad_crd    event=0xe5,umask=0x10  AD - Credited
  .ad_uncrd  event=0xe5,umask=1     AD - Uncredited
  .bl_all    event=0xe5,umask=0x44  BL - All
  .bl_crd    event=0xe5,umask=0x40  BL - Credited
  .bl_uncrd  event=0xe5,umask=4     BL - Uncredited

unc_m2m_rxr_bypass: Transgress Ingress Bypass. Number of packets bypassing the CMS Ingress. All == Credited + Uncredited.
  .ad_all     event=0xe2,umask=0x11  AD - All
  .ad_crd     event=0xe2,umask=0x10  AD - Credited
  .ad_uncrd   event=0xe2,umask=1     AD - Uncredited
  .ak         event=0xe2,umask=2     AK
  .akc_uncrd  event=0xe2,umask=0x80  AKC - Uncredited
  .bl_all     event=0xe2,umask=0x44  BL - All
  .bl_crd     event=0xe2,umask=0x40  BL - Credited
  .bl_uncrd   event=0xe2,umask=4     BL - Uncredited
  .iv         event=0xe2,umask=8     IV

unc_m2m_rxr_crd_starved: Transgress Injection Starvation. Counts cycles under injection starvation mode; this starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time, in this case because the Ingress is unable to forward to the Egress due to a lack of credit. All == Credited + Uncredited.
  .ad_all    event=0xe3,umask=0x11  AD - All
  .ad_crd    event=0xe3,umask=0x10  AD - Credited
  .ad_uncrd  event=0xe3,umask=1     AD - Uncredited
  .ak        event=0xe3,umask=2     AK
  .bl_all    event=0xe3,umask=0x44  BL - All
  .bl_crd    event=0xe3,umask=0x40  BL - Credited
  .bl_uncrd  event=0xe3,umask=4     BL - Uncredited
  .ifv       event=0xe3,umask=0x80  IFV - Credited
  .iv        event=0xe3,umask=8     IV
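Since the starvation events count cycles, their severity is naturally expressed as a fraction of unc_m2m_clockticks. Illustrative arithmetic:

  # Injection-starvation fraction (illustrative counter values).
  starved_cycles, clockticks = 12_000_000, 2_000_000_000
  print(f"AD ingress starved {starved_cycles / clockticks:.2%} of cycles")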
unc_m2m_rxr_crd_starved_1  event=0xe4  Transgress Injection Starvation: counts cycles under injection starvation mode, triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time because it is unable to forward to the Egress due to a lack of credit.

unc_m2m_rxr_inserts: Transgress Ingress Allocations. Number of allocations into the CMS Ingress; the Ingress is used to queue up requests received from the mesh. All == Credited + Uncredited.
  .ad_all     event=0xe1,umask=0x11  AD - All
  .ad_crd     event=0xe1,umask=0x10  AD - Credited
  .ad_uncrd   event=0xe1,umask=1     AD - Uncredited
  .ak         event=0xe1,umask=2     AK
  .akc_uncrd  event=0xe1,umask=0x80  AKC - Uncredited
  .bl_all     event=0xe1,umask=0x44  BL - All
  .bl_crd     event=0xe1,umask=0x40  BL - Credited
  .bl_uncrd   event=0xe1,umask=4     BL - Uncredited
  .iv         event=0xe1,umask=8     IV

unc_m2m_rxr_occupancy: Transgress Ingress Occupancy. Occupancy event for the Ingress buffers in the CMS; the Ingress is used to queue up requests received from the mesh. All == Credited + Uncredited.
  .ad_all     event=0xe0,umask=0x11  AD - All
  .ad_crd     event=0xe0,umask=0x10  AD - Credited
  .ad_uncrd   event=0xe0,umask=1     AD - Uncredited
  .ak         event=0xe0,umask=2     AK
  .akc_uncrd  event=0xe0,umask=0x80  AKC - Uncredited
  .bl_all     event=0xe0,umask=0x44  BL - All
  .bl_crd     event=0xe0,umask=0x20  BL - Credited
  .bl_uncrd   event=0xe0,umask=4     BL - Uncredited
  .iv         event=0xe0,umask=8     IV

unc_m2m_scoreboard_ad_retry_accepts  event=0x33  UNC_M2M_SCOREBOARD_AD_RETRY_ACCEPTS
unc_m2m_scoreboard_ad_retry_rejects  event=0x34  UNC_M2M_SCOREBOARD_AD_RETRY_REJECTS
unc_m2m_scoreboard_bl_retry_accepts  event=0x35  Retry - Mem Mirroring Mode
unc_m2m_scoreboard_bl_retry_rejects  event=0x36  Retry - Mem Mirroring Mode
unc_m2m_scoreboard_rd_accepts  event=0x2f  Scoreboard Accepts
unc_m2m_scoreboard_rd_rejects  event=0x30  Scoreboard Rejects
unc_m2m_scoreboard_wr_accepts  event=0x31  Scoreboard Accepts
unc_m2m_scoreboard_wr_rejects  event=0x32  Scoreboard Rejects

unc_m2m_stall0_no_txr_horz_crd_ad_ag0: Stall on No AD Agent0 Transgress Credits. Number of cycles the AD Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.
  .tgr0  event=0xd0,umask=1     For Transgress 0
  .tgr1  event=0xd0,umask=2     For Transgress 1
  .tgr2  event=0xd0,umask=4     For Transgress 2
  .tgr3  event=0xd0,umask=8     For Transgress 3
  .tgr4  event=0xd0,umask=0x10  For Transgress 4
  .tgr5  event=0xd0,umask=0x20  For Transgress 5
  .tgr6  event=0xd0,umask=0x40  For Transgress 6
  .tgr7  event=0xd0,umask=0x80  For Transgress 7
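The occupancy events above accumulate queued entries per cycle while the inserts events count allocations, so Little's law gives an average queue residency. A sketch against unc_m2m_rxr_occupancy.ad_all and unc_m2m_rxr_inserts.ad_all, with illustrative counts:

  # Average CMS ingress residency (cycles) = sum-of-occupancy / inserts.
  occupancy_sum, inserts = 480_000_000, 40_000_000
  print(f"average AD ingress residency: {occupancy_sum / inserts:.1f} cycles")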
unc_m2m_stall0_no_txr_horz_crd_ad_ag0.* : Stall on No AD Agent0 Transgress Credits
  Number of cycles the AD Agent 0 Egress Buffer is stalled waiting for a TGR
  credit to become available, per transgress.
    .tgr0   event=0xd0,umask=0x01   For Transgress 0
    .tgr1   event=0xd0,umask=0x02   For Transgress 1
    .tgr2   event=0xd0,umask=0x04   For Transgress 2
    .tgr3   event=0xd0,umask=0x08   For Transgress 3
    .tgr4   event=0xd0,umask=0x10   For Transgress 4
    .tgr5   event=0xd0,umask=0x20   For Transgress 5
    .tgr6   event=0xd0,umask=0x40   For Transgress 6
    .tgr7   event=0xd0,umask=0x80   For Transgress 7

unc_m2m_stall0_no_txr_horz_crd_ad_ag1.* : Stall on No AD Agent1 Transgress Credits
  Same, for the AD Agent 1 Egress Buffer.
    .tgr0 .. .tgr7   event=0xd2, umask 0x01/0x02/0x04/0x08/0x10/0x20/0x40/0x80
                     (one bit per transgress, Transgress 0..7 in order)
unc_m2m_stall0_no_txr_horz_crd_bl_ag0.* : Stall on No BL Agent0 Transgress Credits
  Number of cycles the BL Agent 0 Egress Buffer is stalled waiting for a TGR
  credit to become available, per transgress.
    .tgr0 .. .tgr7   event=0xd4, umask 0x01/0x02/0x04/0x08/0x10/0x20/0x40/0x80
                     (one bit per transgress, Transgress 0..7 in order)

unc_m2m_stall0_no_txr_horz_crd_bl_ag1.* : Stall on No BL Agent1 Transgress Credits
  Same, for the BL Agent 1 Egress Buffer.
    .tgr0 .. .tgr7   event=0xd6, umask 0x01/0x02/0x04/0x08/0x10/0x20/0x40/0x80
                     (one bit per transgress, Transgress 0..7 in order)
unc_m2m_stall1_no_txr_horz_crd_ad_ag0.* : Stall on No AD Agent0 Transgress Credits (Transgress 8-10)
  Number of cycles the AD Agent 0 Egress Buffer is stalled waiting for a TGR
  credit to become available, per transgress.
    .tgr8    event=0xd1,umask=0x01   For Transgress 8
    .tgr9    event=0xd1,umask=0x02   For Transgress 9
    .tgr10   event=0xd1,umask=0x04   For Transgress 10

unc_m2m_stall1_no_txr_horz_crd_ad_ag1_1.* : Stall on No AD Agent1 Transgress Credits (Transgress 8-10)
    .tgr8/.tgr9/.tgr10   event=0xd3, umask 0x01/0x02/0x04

unc_m2m_stall1_no_txr_horz_crd_bl_ag0_1.* : Stall on No BL Agent0 Transgress Credits (Transgress 8-10)
    .tgr8/.tgr9/.tgr10   event=0xd5, umask 0x01/0x02/0x04

unc_m2m_stall1_no_txr_horz_crd_bl_ag1_1.* : Stall on No BL Agent1 Transgress Credits (Transgress 8-10)
    .tgr8/.tgr9/.tgr10   event=0xd7, umask 0x01/0x02/0x04
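Taken together, the stall0/stall1 groups cover Transgress 0-10 for each
agent/ring pair, and every counter is a cycle count, so the natural reading is
a stalled fraction of the M2M box's cycles. A minimal sketch under that
assumption (the M2M box clock event is not part of this excerpt, and all
numbers are placeholders):

    # Sketch: per-transgress credit-stall fraction.
    # 'm2m_clockticks' is assumed to come from the M2M box clock event,
    # which is not listed in this excerpt; all numbers are placeholders.
    m2m_clockticks = 2_000_000_000

    # e.g. unc_m2m_stall0_no_txr_horz_crd_ad_ag0.tgr0 / .tgr1 readings
    stall_cycles = {"tgr0": 120_000_000, "tgr1": 5_000_000}

    for tgr, cycles in sorted(stall_cycles.items()):
        print(f"AD agent0 {tgr}: stalled {cycles / m2m_clockticks:.1%} "
              f"of M2M cycles")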
unc_m2m_tag_hit.* : Tag Hit (event=0x2c)
  Tag Hit indicates when a request sent to the iMC hit in Near Memory.
    .nm_rd_hit_clean      umask=0x01   Clean NearMem Read Hit : clean full
                                       line read hits (reads and RFOs)
    .nm_rd_hit_dirty      umask=0x02   Dirty NearMem Read Hit : dirty full
                                       line read hits (reads and RFOs)
    .nm_ufill_hit_clean   umask=0x04   Clean NearMem Underfill Hit : clean
                                       underfill hits due to a partial write
    .nm_ufill_hit_dirty   umask=0x08   Dirty NearMem Underfill Hit : dirty
                                       underfill read hits due to a partial
                                       write
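The four subevents partition near-memory hits two ways: full-line read vs.
underfill, and clean vs. dirty. A small sketch with placeholder counts that
derives the dirty-hit share (a dirty hit means the near-memory line was
modified):

    # Sketch: break down Near Memory tag hits using the four subevents
    # above. Counts are hypothetical placeholders.
    hits = {
        "nm_rd_hit_clean":    800_000,   # umask=0x01
        "nm_rd_hit_dirty":    200_000,   # umask=0x02
        "nm_ufill_hit_clean":  50_000,   # umask=0x04
        "nm_ufill_hit_dirty":  10_000,   # umask=0x08
    }
    total = sum(hits.values())
    dirty = hits["nm_rd_hit_dirty"] + hits["nm_ufill_hit_dirty"]
    print(f"NM tag hits: {total}; dirty share: {dirty / total:.1%}")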
unc_m2m_tag_miss   event=0x61   Tag Miss

Tracker events (umask selects the channel: 0x01 = .ch0, 0x02 = .ch1,
0x04 = .ch2):
    unc_m2m_tracker_full.*        event=0x45   Tracker Cycles Full
    unc_m2m_tracker_inserts.*     event=0x49   Tracker Inserts
    unc_m2m_tracker_ne.*          event=0x46   Tracker Cycles Not Empty
    unc_m2m_tracker_occupancy.*   event=0x47   Tracker Occupancy
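Occupancy counters accumulate the number of resident entries every cycle, so
combined with the matching inserts count they give an average residency time
via Little's Law, and combined with a cycle count they give an average queue
depth. A sketch under those assumptions (the box clock event is not listed in
this excerpt; numbers are placeholders):

    # Sketch (Little's Law) for one channel of the M2M tracker.
    occupancy  = 5_000_000_000   # unc_m2m_tracker_occupancy.ch0: entry-cycles
    inserts    =   250_000_000   # unc_m2m_tracker_inserts.ch0: allocations
    clockticks = 2_000_000_000   # assumed M2M box clock count (placeholder)

    avg_residency = occupancy / inserts      # avg cycles per tracker entry
    avg_depth     = occupancy / clockticks   # avg entries resident per cycle
    print(f"avg residency: {avg_residency:.1f} cycles; "
          f"avg depth: {avg_depth:.2f} entries")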
unc_m2m_txc_ak.* : Outbound Ring Transactions on AK (event=0x39)
    .crd_cbo   umask=0x02   CRD Transactions to Cbo
    .ndr       umask=0x01   NDR Transactions

unc_m2m_txc_akc_credits   event=0x5f   AKC Credits

unc_m2m_txc_ak_credits_acquired.* : AK Egress (to CMS) Credit Acquired (event=0x1d)
    .cms0   umask=0x01   Common Mesh Stop - Near Side
    .cms1   umask=0x02   Common Mesh Stop - Far Side

unc_m2m_txc_ak_cycles_full.* : AK Egress (to CMS) Full (event=0x14)
    .all 0x03, .cms0 0x01 (Near Side), .cms1 0x02 (Far Side),
    .rdcrd0 0x08, .rdcrd1 0x88, .wrcmp0 0x20, .wrcmp1 0xa0,
    .wrcrd0 0x10, .wrcrd1 0x90

unc_m2m_txc_ak_cycles_ne.* : AK Egress (to CMS) Not Empty (event=0x13)
    .all 0x03, .cms0 0x01 (Near Side), .cms1 0x02 (Far Side),
    .rdcrd 0x08, .wrcmp 0x20, .wrcrd 0x10

unc_m2m_txc_ak_inserts.* : AK Egress (to CMS) Allocations (event=0x11)
    .all 0x03, .cms0 0x01 (Near Side), .cms1 0x02 (Far Side),
    .pref_rd_cam_hit 0x40, .rdcrd 0x08, .wrcmp 0x20, .wrcrd 0x10

unc_m2m_txc_ak_no_credit_cycles.* : Cycles with No AK Egress (to CMS) Credits (event=0x1f)
    .cms0 0x01 (Near Side), .cms1 0x02 (Far Side)

unc_m2m_txc_ak_no_credit_stalled.* : Cycles Stalled with No AK Egress (to CMS) Credits (event=0x20)
    .cms0 0x01 (Near Side), .cms1 0x02 (Far Side)

unc_m2m_txc_ak_occupancy.* : AK Egress (to CMS) Occupancy (event=0x12)
    .all 0x03, .cms0 0x01 (Near Side), .cms1 0x02 (Far Side),
    .rdcrd 0x08, .wrcmp 0x20, .wrcrd 0x10

unc_m2m_txc_bl.* : Outbound DRS Ring Transactions to Cache (event=0x40)
    .drs_cache   umask=0x01   Data to Cache
    .drs_core    umask=0x02   Data to Core
    .drs_upi     umask=0x04   Data to QPI

unc_m2m_txc_bl_credits_acquired.* : BL Egress (to CMS) Credit Acquired (event=0x19)
    .cms0 0x01 (Near Side), .cms1 0x02 (Far Side)

unc_m2m_txc_bl_cycles_full.* : BL Egress (to CMS) Full (event=0x18)
    .all 0x03, .cms0 0x01 (Near Side), .cms1 0x02 (Far Side)

unc_m2m_txc_bl_cycles_ne.* : BL Egress (to CMS) Not Empty (event=0x17)
    .all 0x03, .cms0 0x01 (Near Side), .cms1 0x02 (Far Side)

unc_m2m_txc_bl_inserts.* : BL Egress (to CMS) Allocations (event=0x15)
    .all 0x03, .cms0 0x01 (Near Side), .cms1 0x02 (Far Side)
unc_m2m_txc_bl_no_credit_cycles.* : Cycles with No BL Egress (to CMS) Credits (event=0x1b)
    .cms0 0x01 (Near Side), .cms1 0x02 (Far Side)

unc_m2m_txc_bl_no_credit_stalled.* : Cycles Stalled with No BL Egress (to CMS) Credits (event=0x1c)
    .cms0 0x01 (Near Side), .cms1 0x02 (Far Side)

unc_m2m_txr_horz_ads_used.* : CMS Horizontal ADS Used (event=0xa6)
  Number of packets using the Horizontal Anti-Deadlock Slot, broken down by
  ring type and CMS Agent. All == Credited + Uncredited.
    .ad_all     umask=0x11   AD - All
    .ad_crd     umask=0x10   AD - Credited
    .ad_uncrd   umask=0x01   AD - Uncredited
    .bl_all     umask=0x44   BL - All
    .bl_crd     umask=0x40   BL - Credited
    .bl_uncrd   umask=0x04   BL - Uncredited

unc_m2m_txr_horz_bypass.* : CMS Horizontal Bypass Used (event=0xa7)
  Number of packets bypassing the Horizontal Egress, broken down by ring type
  and CMS Agent. All == Credited + Uncredited.
    .ad_all      umask=0x11   AD - All
    .ad_crd      umask=0x10   AD - Credited
    .ad_uncrd    umask=0x01   AD - Uncredited
    .ak          umask=0x02   AK
    .akc_uncrd   umask=0x80   AKC - Uncredited
    .bl_all      umask=0x44   BL - All
    .bl_crd      umask=0x40   BL - Credited
    .bl_uncrd    umask=0x04   BL - Uncredited
    .iv          umask=0x08   IV
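For these horizontal CMS groups the "All == Credited + Uncredited" note is
literal at the bit level: the All umask is the OR of the Credited and
Uncredited umasks. A two-line check:

    # The "All" selections above are the bitwise OR of the Credited and
    # Uncredited umask bits for the horizontal CMS event groups.
    AD_CRD, AD_UNCRD = 0x10, 0x01
    BL_CRD, BL_UNCRD = 0x40, 0x04
    assert AD_CRD | AD_UNCRD == 0x11   # *.ad_all
    assert BL_CRD | BL_UNCRD == 0x44   # *.bl_all

Note that this relation does not hold for every group in this file; in the
unc_m2m_rxr_occupancy group above, .bl_crd is umask=0x20 while .bl_all is
0x44.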
unc_m2m_txr_horz_cycles_full.* : Cycles CMS Horizontal Egress Queue is Full (event=0xa2)
  Cycles the Transgress buffers in the Common Mesh Stop are Full. The egress
  is used to queue up requests destined for the Horizontal Ring on the Mesh.
  All == Credited + Uncredited.
    .ad_all 0x11, .ad_crd 0x10, .ad_uncrd 0x01, .ak 0x02, .akc_uncrd 0x80,
    .bl_all 0x44, .bl_crd 0x40, .bl_uncrd 0x04, .iv 0x08
unc_m2m_txr_horz_cycles_ne.* : Cycles CMS Horizontal Egress Queue is Not Empty (event=0xa3)
  Cycles the Transgress buffers in the Common Mesh Stop are Not-Empty. The
  egress is used to queue up requests destined for the Horizontal Ring on the
  Mesh. All == Credited + Uncredited.
    .ad_all 0x11, .ad_crd 0x10, .ad_uncrd 0x01, .ak 0x02, .akc_uncrd 0x80,
    .bl_all 0x44, .bl_crd 0x40, .bl_uncrd 0x04, .iv 0x08
unc_m2m_txr_horz_inserts.* : CMS Horizontal Egress Inserts (event=0xa1)
  Number of allocations into the Transgress buffers in the Common Mesh Stop.
  The egress is used to queue up requests destined for the Horizontal Ring on
  the Mesh. All == Credited + Uncredited.
    .ad_all 0x11, .ad_crd 0x10, .ad_uncrd 0x01, .ak 0x02, .akc_uncrd 0x80,
    .bl_all 0x44, .bl_crd 0x40, .bl_uncrd 0x04, .iv 0x08
unc_m2m_txr_horz_nack.* : CMS Horizontal Egress NACKs (event=0xa4)
  Counts the number of Egress packets NACK'ed on to the Horizontal Ring.
  All == Credited + Uncredited.
    .ad_all 0x11, .ad_crd 0x10, .ad_uncrd 0x01, .ak 0x02, .akc_uncrd 0x80,
    .bl_all 0x44, .bl_crd 0x40, .bl_uncrd 0x04, .iv 0x08

unc_m2m_txr_horz_occupancy.* : CMS Horizontal Egress Occupancy (event=0xa0)
  Occupancy event for the Transgress buffers in the Common Mesh Stop. The
  egress is used to queue up requests destined for the Horizontal Ring on the
  Mesh. All == Credited + Uncredited.
    .ad_all 0x11, .ad_crd 0x10, .ad_uncrd 0x01, .ak 0x02, .akc_uncrd 0x80,
    .bl_all 0x44, .bl_crd 0x40, .bl_uncrd 0x04, .iv 0x08
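To program one of these events directly, the event and umask fields are packed
into perf_event_attr.config. On Intel uncore PMUs the usual format is the
event select in config bits 0-7 and the umask in bits 8-15, but the
authoritative layout is whatever /sys/bus/event_source/devices/<pmu>/format
reports, and the PMU instance name used below (uncore_m2m_0) is an assumption.
A sketch:

    # Sketch: pack event/umask into a raw config value and build a perf
    # event-selector string. Bit positions assume the common Intel uncore
    # format (event: bits 0-7, umask: bits 8-15); check the PMU's 'format'
    # directory on the target machine before relying on this.
    def uncore_config(event: int, umask: int = 0) -> int:
        return (event & 0xFF) | ((umask & 0xFF) << 8)

    cfg = uncore_config(0xA0, 0x44)   # unc_m2m_txr_horz_occupancy.bl_all
    assert cfg == 0x44A0

    # "uncore_m2m_0" is an assumed instance name; M2M boxes typically
    # enumerate as uncore_m2m_<n> under /sys/bus/event_source/devices.
    selector = "uncore_m2m_0/event=0xa0,umask=0x44/"
    print(hex(cfg), selector)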
unc_m2m_txr_horz_starved.* : CMS Horizontal Egress Injection Starvation (event=0xa5)
  Counts injection starvation, which is triggered when the CMS Transgress
  buffer cannot send a transaction onto the Horizontal ring for a long period
  of time.
    .ad_all      umask=0x01   AD - All (All == Credited + Uncredited)
    .ad_uncrd    umask=0x01   AD - Uncredited
    .ak          umask=0x02   AK
    .akc_uncrd   umask=0x80   AKC - Uncredited
    .bl_all      umask=0x04   BL - All (All == Credited + Uncredited)
    .bl_uncrd    umask=0x04   BL - Uncredited
    .iv          umask=0x08   IV
unc_m2m_txr_vert_ads_used.* : CMS Vertical ADS Used (event=0x9c)
  Number of packets using the Vertical Anti-Deadlock Slot, broken down by ring
  type and CMS Agent.
    .ad_ag0   umask=0x01   AD - Agent 0
    .ad_ag1   umask=0x10   AD - Agent 1
    .bl_ag0   umask=0x04   BL - Agent 0
    .bl_ag1   umask=0x40   BL - Agent 1

unc_m2m_txr_vert_bypass.* : CMS Vertical Bypass Used (event=0x9d)
  Number of packets bypassing the Vertical Egress, broken down by ring type
  and CMS Agent.
    .ad_ag0   umask=0x01   AD - Agent 0
    .ad_ag1   umask=0x10   AD - Agent 1
    .ak_ag0   umask=0x02   AK - Agent 0
    .ak_ag1   umask=0x20   AK - Agent 1
    .bl_ag0   umask=0x04   BL - Agent 0
    .bl_ag1   umask=0x40   BL - Agent 1
    .iv_ag1   umask=0x08   IV - Agent 1
unc_m2m_txr_vert_bypass_1.* : CMS Vertical Bypass Used (event=0x9e)
  Number of packets bypassing the Vertical Egress, broken down by ring type
  and CMS Agent.
    .akc_ag0   umask=0x01   AKC - Agent 0
    .akc_ag1   umask=0x02   AKC - Agent 1

unc_m2m_txr_vert_cycles_full0.* : Cycles CMS Vertical Egress Queue Is Full (event=0x94)
  Number of cycles the Common Mesh Stop Egress was Full. The Egress is used to
  queue up requests destined for the Vertical Ring on the Mesh.
    .ad_ag0   umask=0x01   AD - Agent 0 (outbound requests, snoop requests,
                           and snoop responses)
    .ad_ag1   umask=0x10   AD - Agent 1 (commonly outbound requests)
    .ak_ag0   umask=0x02   AK - Agent 0 (commonly credit returns and GO
                           responses)
    .ak_ag1   umask=0x20   AK - Agent 1
    .bl_ag0   umask=0x04   BL - Agent 0 (commonly data from the cache to
                           various destinations)
    .bl_ag1   umask=0x40   BL - Agent 1 (commonly writeback data to the cache)
    .iv_ag0   umask=0x08   IV - Agent 0 (commonly snoops to the cores)
unc_m2m_txr_vert_cycles_full1.* : Cycles CMS Vertical Egress Queue Is Full (event=0x95)
    .akc_ag0   umask=0x01   AKC - Agent 0
    .akc_ag1   umask=0x02   AKC - Agent 1

unc_m2m_txr_vert_cycles_ne0.* : Cycles CMS Vertical Egress Queue Is Not Empty (event=0x96)
  Number of cycles the Common Mesh Stop Egress was Not Empty. The Egress is
  used to queue up requests destined for the Vertical Ring on the Mesh.
    .ad_ag0 0x01, .ad_ag1 0x10, .ak_ag0 0x02, .ak_ag1 0x20, .bl_ag0 0x04,
    .bl_ag1 0x40, .iv_ag0 0x08 (same per-ring breakdown as
    unc_m2m_txr_vert_cycles_full0)
unc_m2m_txr_vert_cycles_ne1.* : Cycles CMS Vertical Egress Queue Is Not Empty (event=0x97)
    .akc_ag0   umask=0x01   AKC - Agent 0
    .akc_ag1   umask=0x02   AKC - Agent 1

unc_m2m_txr_vert_inserts0.* : CMS Vert Egress Allocations (event=0x92)
  Number of allocations into the Common Mesh Stop Egress. The Egress is used
  to queue up requests destined for the Vertical Ring on the Mesh.
    .ad_ag0 0x01, .ad_ag1 0x10, .ak_ag0 0x02, .ak_ag1 0x20, .bl_ag0 0x04,
    .bl_ag1 0x40, .iv_ag0 0x08 (same per-ring breakdown as
    unc_m2m_txr_vert_cycles_full0)
unc_m2m_txr_vert_inserts1.* : CMS Vert Egress Allocations (event=0x93)
    .akc_ag0   umask=0x01   AKC - Agent 0
    .akc_ag1   umask=0x02   AKC - Agent 1
This is commonly used for credit returns and GO responsesunc_m2m_txr_vert_nack0.ad_ag0uncore interconnectCMS Vertical Egress NACKs : AD - Agent 0event=0x98,umask=101CMS Vertical Egress NACKs : AD - Agent 0 : Counts number of Egress packets NACK'ed on to the Vertical Ringunc_m2m_txr_vert_nack0.ad_ag1uncore interconnectCMS Vertical Egress NACKs : AD - Agent 1event=0x98,umask=0x1001CMS Vertical Egress NACKs : AD - Agent 1 : Counts number of Egress packets NACK'ed on to the Vertical Ringunc_m2m_txr_vert_nack0.ak_ag0uncore interconnectCMS Vertical Egress NACKs : AK - Agent 0event=0x98,umask=201CMS Vertical Egress NACKs : AK - Agent 0 : Counts number of Egress packets NACK'ed on to the Vertical Ringunc_m2m_txr_vert_nack0.ak_ag1uncore interconnectCMS Vertical Egress NACKs : AK - Agent 1event=0x98,umask=0x2001CMS Vertical Egress NACKs : AK - Agent 1 : Counts number of Egress packets NACK'ed on to the Vertical Ringunc_m2m_txr_vert_nack0.bl_ag0uncore interconnectCMS Vertical Egress NACKs : BL - Agent 0event=0x98,umask=401CMS Vertical Egress NACKs : BL - Agent 0 : Counts number of Egress packets NACK'ed on to the Vertical Ringunc_m2m_txr_vert_nack0.bl_ag1uncore interconnectCMS Vertical Egress NACKs : BL - Agent 1event=0x98,umask=0x4001CMS Vertical Egress NACKs : BL - Agent 1 : Counts number of Egress packets NACK'ed on to the Vertical Ringunc_m2m_txr_vert_nack0.iv_ag0uncore interconnectCMS Vertical Egress NACKs : IVevent=0x98,umask=801CMS Vertical Egress NACKs : IV : Counts number of Egress packets NACK'ed on to the Vertical Ringunc_m2m_txr_vert_nack1.akc_ag0uncore interconnectCMS Vertical Egress NACKs : AKC - Agent 0event=0x99,umask=101CMS Vertical Egress NACKs : AKC - Agent 0 : Counts number of Egress packets NACK'ed on to the Vertical Ringunc_m2m_txr_vert_nack1.akc_ag1uncore interconnectCMS Vertical Egress NACKs : AKC - Agent 1event=0x99,umask=201CMS Vertical Egress NACKs : AKC - Agent 1 : Counts number of Egress packets NACK'ed on to the Vertical Ringunc_m2m_txr_vert_occupancy0.ad_ag0uncore interconnectCMS Vert Egress Occupancy : AD - Agent 0event=0x90,umask=101CMS Vert Egress Occupancy : AD - Agent 0 : Occupancy event for the Egress buffers in the Common Mesh Stop  The egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the AD ring.  Some example include outbound requests, snoop requests, and snoop responsesunc_m2m_txr_vert_occupancy0.ad_ag1uncore interconnectCMS Vert Egress Occupancy : AD - Agent 1event=0x90,umask=0x1001CMS Vert Egress Occupancy : AD - Agent 1 : Occupancy event for the Egress buffers in the Common Mesh Stop  The egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 1 destined for the AD ring.  This is commonly used for outbound requestsunc_m2m_txr_vert_occupancy0.ak_ag0uncore interconnectCMS Vert Egress Occupancy : AK - Agent 0event=0x90,umask=201CMS Vert Egress Occupancy : AK - Agent 0 : Occupancy event for the Egress buffers in the Common Mesh Stop  The egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the AK ring.  This is commonly used for credit returns and GO responsesunc_m2m_txr_vert_occupancy0.ak_ag1uncore interconnectCMS Vert Egress Occupancy : AK - Agent 1event=0x90,umask=0x2001CMS Vert Egress Occupancy : AK - Agent 1 : Occupancy event for the Egress buffers in the Common Mesh Stop  The egress is used to queue up requests destined for the Vertical Ring on the Mesh. 
: Ring transactions from Agent 1 destined for the AK ringunc_m2m_txr_vert_occupancy0.bl_ag0uncore interconnectCMS Vert Egress Occupancy : BL - Agent 0event=0x90,umask=401CMS Vert Egress Occupancy : BL - Agent 0 : Occupancy event for the Egress buffers in the Common Mesh Stop  The egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the BL ring.  This is commonly used to send data from the cache to various destinationsunc_m2m_txr_vert_occupancy0.bl_ag1uncore interconnectCMS Vert Egress Occupancy : BL - Agent 1event=0x90,umask=0x4001CMS Vert Egress Occupancy : BL - Agent 1 : Occupancy event for the Egress buffers in the Common Mesh Stop  The egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 1 destined for the BL ring.  This is commonly used for transferring writeback data to the cacheunc_m2m_txr_vert_occupancy0.iv_ag0uncore interconnectCMS Vert Egress Occupancy : IV - Agent 0event=0x90,umask=801CMS Vert Egress Occupancy : IV - Agent 0 : Occupancy event for the Egress buffers in the Common Mesh Stop  The egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the IV ring.  This is commonly used for snoops to the coresunc_m2m_txr_vert_occupancy1.akc_ag0uncore interconnectCMS Vert Egress Occupancy : AKC - Agent 0event=0x91,umask=101CMS Vert Egress Occupancy : AKC - Agent 0 : Occupancy event for the Egress buffers in the Common Mesh Stop  The egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the AD ring.  Some example include outbound requests, snoop requests, and snoop responsesunc_m2m_txr_vert_occupancy1.akc_ag1uncore interconnectCMS Vert Egress Occupancy : AKC - Agent 1event=0x91,umask=201CMS Vert Egress Occupancy : AKC - Agent 1 : Occupancy event for the Egress buffers in the Common Mesh Stop  The egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the AK ring.  This is commonly used for credit returns and GO responsesunc_m2m_txr_vert_starved0.ad_ag0uncore interconnectCMS Vertical Egress Injection Starvation : AD - Agent 0event=0x9a,umask=101CMS Vertical Egress Injection Starvation : AD - Agent 0 : Counts injection starvation.  This starvation is triggered when the CMS Egress cannot send a transaction onto the Vertical ring for a long period of timeunc_m2m_txr_vert_starved0.ad_ag1uncore interconnectCMS Vertical Egress Injection Starvation : AD - Agent 1event=0x9a,umask=0x1001CMS Vertical Egress Injection Starvation : AD - Agent 1 : Counts injection starvation.  This starvation is triggered when the CMS Egress cannot send a transaction onto the Vertical ring for a long period of timeunc_m2m_txr_vert_starved0.ak_ag0uncore interconnectCMS Vertical Egress Injection Starvation : AK - Agent 0event=0x9a,umask=201CMS Vertical Egress Injection Starvation : AK - Agent 0 : Counts injection starvation.  This starvation is triggered when the CMS Egress cannot send a transaction onto the Vertical ring for a long period of timeunc_m2m_txr_vert_starved0.ak_ag1uncore interconnectCMS Vertical Egress Injection Starvation : AK - Agent 1event=0x9a,umask=0x2001CMS Vertical Egress Injection Starvation : AK - Agent 1 : Counts injection starvation.  
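The paired allocation and occupancy events above support a Little's-law style readout: the occupancy event accumulates queue depth per cycle, so occupancy divided by elapsed cycles gives average queue depth, and occupancy divided by inserts gives the average number of cycles an entry sits in the Egress. A minimal Python sketch with placeholder counter values (egress_queue_stats is an illustrative helper, not part of this event table):

    # Derive average Vertical Egress queue depth and residency from
    # unc_m2m_txr_vert_inserts0 / unc_m2m_txr_vert_occupancy0 style counts.
    # All values below are placeholders, not measured data.
    def egress_queue_stats(occupancy, inserts, cycles):
        avg_depth = occupancy / cycles       # average entries per cycle
        avg_residency = occupancy / inserts  # average cycles per entry (Little's law)
        return avg_depth, avg_residency

    depth, residency = egress_queue_stats(occupancy=1_200_000,
                                          inserts=300_000,
                                          cycles=2_000_000)
    print(f"avg depth {depth:.2f} entries, avg residency {residency:.1f} cycles")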
unc_m2m_txr_vert_starved0 (uncore interconnect, event=0x9a) : CMS Vertical Egress Injection Starvation. Counts injection starvation, which is triggered when the CMS Egress cannot send a transaction onto the Vertical Ring for a long period of time.
  .ad_ag0 (umask=0x01) AD - Agent 0; .ak_ag0 (umask=0x02) AK - Agent 0; .bl_ag0 (umask=0x04) BL - Agent 0; .iv_ag0 (umask=0x08) IV; .ad_ag1 (umask=0x10) AD - Agent 1; .ak_ag1 (umask=0x20) AK - Agent 1; .bl_ag1 (umask=0x40) BL - Agent 1.

unc_m2m_txr_vert_starved1 (uncore interconnect, event=0x9b) : CMS Vertical Egress Injection Starvation, AKC/TGC queues. Same trigger condition as above.
  .akc_ag0 (umask=0x01) AKC - Agent 0; .akc_ag1 (umask=0x02) AKC - Agent 1; .tgc (umask=0x04) TGC.

The vertical ring in-use events below each count the number of cycles the named Vertical ring is being used at this ring stop. This includes cycles when packets are passing by and when packets are being sunk, but does not include cycles when packets are being sent from the ring stop. There are really two rings: a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring; on the right side this is reversed. The first half of the CBos are on the left side of the ring and the second half are on the right side, so (for example) in a 4c part, CBo 0 UP AD is NOT the same ring as CBo 2 UP AD, because they are on opposite sides of the ring.

unc_m2m_vert_ring_ad_in_use (uncore interconnect, event=0xb0) : Vertical AD Ring In Use. .up_even (umask=0x01), .up_odd (umask=0x02), .dn_even (umask=0x04), .dn_odd (umask=0x08).
unc_m2m_vert_ring_akc_in_use (uncore interconnect, event=0xb4) : Vertical AKC Ring In Use. Same umask layout.
unc_m2m_vert_ring_ak_in_use (uncore interconnect, event=0xb1) : Vertical AK Ring In Use. Same umask layout.
unc_m2m_vert_ring_bl_in_use (uncore interconnect, event=0xb2) : Vertical BL Ring in Use. Same umask layout.
unc_m2m_vert_ring_iv_in_use (uncore interconnect, event=0xb3) : Vertical IV Ring in Use. .up (umask=0x01), .dn (umask=0x04). There is only one IV ring; to monitor the Even ring, select both UP_EVEN and DN_EVEN, and to monitor the Odd ring, select both UP_ODD and DN_ODD.
unc_m2m_vert_ring_tgc_in_use (uncore interconnect, event=0xb5) : Vertical TGC Ring In Use. .up_even (umask=0x01), .up_odd (umask=0x02), .dn_even (umask=0x04), .dn_odd (umask=0x08).
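These in-use events are plain event/umask encodings, so they can be requested directly through perf's raw uncore event syntax. A hedged sketch via Python's subprocess module (the PMU instance name uncore_m2m_0 is an assumption; check /sys/bus/event_source/devices on the target system for the instances actually exposed):

    # Count Vertical AD ring in-use cycles (event=0xb0) for one second,
    # one counter per direction, using the raw encodings listed above.
    import subprocess

    events = [
        "uncore_m2m_0/event=0xb0,umask=0x01/",  # up_even
        "uncore_m2m_0/event=0xb0,umask=0x02/",  # up_odd
        "uncore_m2m_0/event=0xb0,umask=0x04/",  # dn_even
        "uncore_m2m_0/event=0xb0,umask=0x08/",  # dn_odd
    ]
    subprocess.run(
        ["perf", "stat", "-a", "-e", ",".join(events), "--", "sleep", "1"],
        check=True,
    )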
unc_m2m_wpq_flush (uncore interconnect, event=0x58) : WPQ Flush. .ch0 (umask=0x01) Channel 0; .ch1 (umask=0x02) Channel 1; .ch2 (umask=0x04) Channel 2.
unc_m2m_wpq_no_reg_crd (uncore interconnect, event=0x4d) : M2M->iMC WPQ Cycles w/Credits - Regular. .chn0 (umask=0x01) Channel 0; .chn1 (umask=0x02) Channel 1; .chn2 (umask=0x04) Channel 2.
unc_m2m_wpq_no_reg_crd_pmm (uncore interconnect, event=0x51) : M2M->iMC WPQ Cycles w/Credits - PMM. .chn0 (umask=0x01) Channel 0; .chn1 (umask=0x02) Channel 1; .chn2 (umask=0x04) Channel 2.
unc_m2m_wpq_no_spec_crd (uncore interconnect, event=0x4e) : M2M->iMC WPQ Cycles w/Credits - Special. .chn0 (umask=0x01) Channel 0; .chn1 (umask=0x02) Channel 1; .chn2 (umask=0x04) Channel 2.
unc_m2m_wr_tracker_full (uncore interconnect, event=0x4a) : Write Tracker Cycles Full. .ch0 (umask=0x01) Channel 0; .ch1 (umask=0x02) Channel 1; .ch2 (umask=0x04) Channel 2; .mirr (umask=0x08) Mirror.
unc_m2m_wr_tracker_inserts (uncore interconnect, event=0x56) : Write Tracker Inserts. .ch0 (umask=0x01) Channel 0; .ch1 (umask=0x02) Channel 1; .ch2 (umask=0x04) Channel 2.
unc_m2m_wr_tracker_ne (uncore interconnect, event=0x4b) : Write Tracker Cycles Not Empty. .ch0 (umask=0x01) Channel 0; .ch1 (umask=0x02) Channel 1; .ch2 (umask=0x04) Channel 2; .mirr (umask=0x08) Mirror; .mirr_nontgr (umask=0x10); .mirr_pwr (umask=0x20).
unc_m2m_wr_tracker_nonposted_inserts (uncore interconnect, event=0x63) : Write Tracker Non-Posted Inserts. .ch0 (umask=0x01) Channel 0; .ch1 (umask=0x02) Channel 1; .ch2 (umask=0x04) Channel 2.
unc_m2m_wr_tracker_nonposted_occupancy (uncore interconnect, event=0x62) : Write Tracker Non-Posted Occupancy. .ch0 (umask=0x01) Channel 0; .ch1 (umask=0x02) Channel 1; .ch2 (umask=0x04) Channel 2.
unc_m2m_wr_tracker_occupancy (uncore interconnect, event=0x55) : Write Tracker Occupancy. .ch0 (umask=0x01) Channel 0; .ch1 (umask=0x02) Channel 1; .ch2 (umask=0x04) Channel 2; .mirr (umask=0x08) Mirror; .mirr_nontgr (umask=0x10); .mirr_pwr (umask=0x20).
unc_m2m_wr_tracker_posted_inserts (uncore interconnect, event=0x5e) : Write Tracker Posted Inserts. .ch0 (umask=0x01) Channel 0; .ch1 (umask=0x02) Channel 1; .ch2 (umask=0x04) Channel 2.
unc_m2m_wr_tracker_posted_occupancy (uncore interconnect, event=0x5d) : Write Tracker Posted Occupancy. .ch0 (umask=0x01) Channel 0; .ch1 (umask=0x02) Channel 1; .ch2 (umask=0x04) Channel 2.

The M3UPI CMS credit events below count, per transgress, either the number of CMS Agent 0/Agent 1 AD or BL credits acquired in a given cycle (the *_crd_acquired* events) or the number of credits in use in a given cycle (the *_crd_occupancy* events). For each *0 event, .tgr0 through .tgr7 use umask=0x01, 0x02, 0x04, 0x08, 0x10, 0x20, 0x40 and 0x80 respectively; for each *1 event, .tgr8, .tgr9 and .tgr10 use umask=0x01, 0x02 and 0x04.

unc_m3upi_ag0_ad_crd_acquired0 (uncore interconnect, event=0x80) : CMS Agent0 AD Credits Acquired, Transgress 0-7.
unc_m3upi_ag0_ad_crd_acquired1 (uncore interconnect, event=0x81) : CMS Agent0 AD Credits Acquired, Transgress 8-10.
unc_m3upi_ag0_ad_crd_occupancy0 (uncore interconnect, event=0x82) : CMS Agent0 AD Credits Occupancy, Transgress 0-7.
unc_m3upi_ag0_ad_crd_occupancy1 (uncore interconnect, event=0x83) : CMS Agent0 AD Credits Occupancy, Transgress 8-10.
unc_m3upi_ag0_bl_crd_acquired0 (uncore interconnect, event=0x88) : CMS Agent0 BL Credits Acquired, Transgress 0-7.
unc_m3upi_ag0_bl_crd_acquired1 (uncore interconnect, event=0x89) : CMS Agent0 BL Credits Acquired, Transgress 8-10.
unc_m3upi_ag0_bl_crd_occupancy0 (uncore interconnect, event=0x8a) : CMS Agent0 BL Credits Occupancy, Transgress 0-7.
unc_m3upi_ag0_bl_crd_occupancy1 (uncore interconnect, event=0x8b) : CMS Agent0 BL Credits Occupancy, Transgress 8-10.
unc_m3upi_ag1_ad_crd_acquired0 (uncore interconnect, event=0x84) : CMS Agent1 AD Credits Acquired, Transgress 0-7.
unc_m3upi_ag1_ad_crd_acquired1 (uncore interconnect, event=0x85) : CMS Agent1 AD Credits Acquired, Transgress 8-10.
unc_m3upi_ag1_ad_crd_occupancy0 (uncore interconnect, event=0x86) : CMS Agent1 AD Credits Occupancy, Transgress 0-7.
unc_m3upi_ag1_ad_crd_occupancy1 (uncore interconnect, event=0x87) : CMS Agent1 AD Credits Occupancy, Transgress 8-10.
unc_m3upi_ag1_bl_crd_acquired0 (uncore interconnect, event=0x8c) : CMS Agent1 BL Credits Acquired, Transgress 0-7.
unc_m3upi_ag1_bl_crd_acquired1 (uncore interconnect, event=0x8d) : CMS Agent1 BL Credits Acquired, Transgress 8-10.
unc_m3upi_ag1_bl_crd_occupancy0 (uncore interconnect, event=0x8e) : CMS Agent1 BL Credits Occupancy, Transgress 0-7.
unc_m3upi_ag1_bl_crd_occupancy1 (uncore interconnect, event=0x8f) : CMS Agent1 BL Credits Occupancy, Transgress 8-10.

unc_m3upi_clockticks (uncore interconnect, event=0x01) : Clockticks of the mesh to UPI (M3UPI). Counts the number of uclks in the M3 uclk domain. This count could differ slightly from the count in the Ubox because of enable/freeze delays, but because the M3 is close to the Ubox, they generally should not diverge by more than a handful of cycles.
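Because the *_crd_acquired* and *_crd_occupancy* events are paired, the average number of cycles a credit stays held can be estimated per transgress as occupancy divided by acquired. A minimal Python sketch with placeholder counts (the dictionaries are illustrative, not measured data):

    # Estimate per-transgress credit residency from paired M3UPI counters,
    # e.g. unc_m3upi_ag0_ad_crd_acquired0 vs unc_m3upi_ag0_ad_crd_occupancy0.
    acquired  = {"tgr0": 50_000, "tgr1": 12_000}   # credits acquired
    occupancy = {"tgr0": 400_000, "tgr1": 30_000}  # credit-cycles in use

    for tgr, acq in acquired.items():
        if acq:
            print(f"{tgr}: avg {occupancy[tgr] / acq:.1f} cycles per credit")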
unc_m3upi_distress_asserted (uncore interconnect, event=0xaf) : Distress signal asserted. Counts the number of cycles either the local or incoming distress signals are asserted.
  .vert (umask=0x01) Vertical : if the IRQ egress is full, agents will throttle outgoing AD IDI transactions.
  .horz (umask=0x02) Horizontal : if the TGR egress is full, agents will throttle outgoing AD IDI transactions.
  .dpt_local (umask=0x04) DPT Local : Dynamic Prefetch Throttle triggered by this tile.
  .dpt_nonlocal (umask=0x08) DPT Remote : Dynamic Prefetch Throttle received by this tile.
  .pmm_local (umask=0x10) PMM Local : if the CHA TOR has too many PMM transactions, this signal will throttle outgoing MS2IDI traffic.
  .pmm_nonlocal (umask=0x20) PMM Remote : if another CHA TOR has too many PMM transactions, this signal will throttle outgoing MS2IDI traffic.
  .dpt_stall_iv (umask=0x40) DPT Stalled - IV : DPT occurred while regular IVs were received, causing DPT to be stalled.
  .dpt_stall_nocrd (umask=0x80) DPT Stalled - No Credit : DPT occurred while no credit was available, causing DPT to be stalled.
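Distress assertion is most meaningful as a fraction of elapsed uclk cycles, i.e. unc_m3upi_distress_asserted.* divided by unc_m3upi_clockticks. A tiny sketch with placeholder values:

    # Express distress-asserted cycles as a percentage of M3UPI clockticks.
    # Placeholder counts; a persistently high fraction points at throttling.
    distress_cycles = 150_000
    clockticks = 2_000_000
    print(f"distress asserted {100.0 * distress_cycles / clockticks:.1f}% of uclk cycles")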
unc_m3upi_horz_ring_ad_in_use (uncore interconnect; event=0xb6)
  Horizontal AD Ring In Use: counts the number of cycles that the Horizontal AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. There are really two rings, a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring; on the right side this is reversed. The first half of the CBos are on the left side of the ring and the second half are on the right side, so (for example) in a 4c part, CBo 0 UP AD is NOT the same ring as CBo 2 UP AD, because they are on opposite sides of the ring.
    .left_even   umask=0x01  Left and Even
    .left_odd    umask=0x02  Left and Odd
    .right_even  umask=0x04  Right and Even
    .right_odd   umask=0x08  Right and Odd

unc_m3upi_horz_ring_ak_in_use (uncore interconnect; event=0xb7)
  Horizontal AK Ring In Use: counts the number of cycles that the Horizontal AK ring is being used at this ring stop, with the same counting rules and ring topology as the AD event above.
    .left_even   umask=0x01  Left and Even
    .left_odd    umask=0x02  Left and Odd
    .right_even  umask=0x04  Right and Even
    .right_odd   umask=0x08  Right and Odd

unc_m3upi_horz_ring_akc_in_use (uncore interconnect; event=0xbb)
  Horizontal AKC Ring In Use: counts the number of cycles that the Horizontal AKC ring is being used at this ring stop, with the same counting rules and ring topology as the AD event above.
    .left_even   umask=0x01  Left and Even
    .left_odd    umask=0x02  Left and Odd
    .right_even  umask=0x04  Right and Even
    .right_odd   umask=0x08  Right and Odd

unc_m3upi_horz_ring_bl_in_use (uncore interconnect; event=0xb8)
  Horizontal BL Ring in Use: counts the number of cycles that the Horizontal BL ring is being used at this ring stop, with the same counting rules and ring topology as the AD event above.
    .left_even   umask=0x01  Left and Even
    .left_odd    umask=0x02  Left and Odd
    .right_even  umask=0x04  Right and Even
    .right_odd   umask=0x08  Right and Odd

unc_m3upi_horz_ring_iv_in_use (uncore interconnect; event=0xb9)
  Horizontal IV Ring in Use: counts the number of cycles that the Horizontal IV ring is being used at this ring stop (passing by or being sunk, but not being sent from this stop). There is only one IV ring; therefore, to monitor the Even ring, select both UP_EVEN and DN_EVEN, and to monitor the Odd ring, select both UP_ODD and DN_ODD.
    .left   umask=0x01  Left
    .right  umask=0x04  Right

unc_m3upi_misc_external (uncore interconnect; event=0xe6)
  Miscellaneous Events (mostly from MS2IDI).
    .mbe_inst0  umask=0x01  Number of cycles MBE is high for MS2IDI0
    .mbe_inst1  umask=0x02  Number of cycles MBE is high for MS2IDI1

unc_m3upi_ring_bounces_horz (uncore interconnect; event=0xac)
  Messages that bounced on the Horizontal Ring: number of cycles incoming messages from the Horizontal ring were bounced, by ring type.
    .ad  umask=0x01  AD
    .ak  umask=0x02  AK
    .bl  umask=0x04  BL
    .iv  umask=0x08  IV

unc_m3upi_ring_bounces_vert (uncore interconnect; event=0xaa)
  Messages that bounced on the Vertical Ring: number of cycles incoming messages from the Vertical ring were bounced, by ring type.
    .ad   umask=0x01  AD
    .ak   umask=0x02  AK (acknowledgements to core)
    .bl   umask=0x04  BL (data responses to core)
    .iv   umask=0x08  IV (snoops of processor's cache)
    .akc  umask=0x10  AKC
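The ring in-use umasks above are individual bits (left/right crossed with even/odd), so several selections can in principle be OR-ed into one counter, in the same way the .ad_all = 0x11 encodings further down combine Credited and Uncredited; whether a particular combination is valid on a given part is an assumption here, not something this event list states. A sketch that builds the perf event string, assuming the Linux driver exposes this box as uncore_m3upi_0:

    # umask bits of unc_m3upi_horz_ring_ad_in_use (event=0xb6), from the table above.
    LEFT_EVEN, LEFT_ODD, RIGHT_EVEN, RIGHT_ODD = 0x1, 0x2, 0x4, 0x8

    # One counter covering both directions and both polarities at this ring stop.
    umask = LEFT_EVEN | LEFT_ODD | RIGHT_EVEN | RIGHT_ODD  # 0xf
    print(f"uncore_m3upi_0/event=0xb6,umask={umask:#x}/")  # uncore_m3upi_0/event=0xb6,umask=0xf/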
unc_m3upi_ring_sink_starved_horz (uncore interconnect; event=0xad)
  Sink Starvation on Horizontal Ring.
    .ad      umask=0x01  AD
    .ak      umask=0x02  AK
    .bl      umask=0x04  BL
    .iv      umask=0x08  IV
    .ak_ag1  umask=0x20  AK (acknowledgements to Agent 1)

unc_m3upi_ring_sink_starved_vert (uncore interconnect; event=0xab)
  Sink Starvation on Vertical Ring.
    .ad   umask=0x01  AD
    .ak   umask=0x02  AK (acknowledgements to core)
    .bl   umask=0x04  BL (data responses to core)
    .iv   umask=0x08  IV (snoops of processor's cache)
    .akc  umask=0x10  AKC

unc_m3upi_ring_src_thrtl (uncore interconnect; event=0xae)
  Source Throttle.

unc_m3upi_rxc_cycles_ne_vn1 (uncore interconnect; event=0x44)
  VN1 Ingress (from CMS) Queue - Cycles Not Empty. Counts the number of allocations into the UPI VN1 Ingress. This tracks one of the three rings that are used by the UPI agent, and can be used in conjunction with the UPI VN1 Ingress Occupancy Accumulator event to calculate average queue latency. Multiple ingress buffers can be tracked at a given time using multiple counters. Message classes: Home (REQ) is generally used to send requests, request responses, and snoop responses; Snoop (SNP) is used for outgoing snoops; Response (RSP) packets transmit a variety of protocol flits, including grants and completions (CMP); Data Response (WB) generally transmits data with coherency (for example, remote reads and writes, or cache to cache transfers, send their data using WB); Non-Coherent Broadcast (NCB) generally transmits data without coherency (for example, non-coherent read data returns); Non-Coherent Standard (NCS) is the remaining BL class.
    .ad_req  umask=0x01  REQ on AD
    .ad_snp  umask=0x02  SNP on AD
    .ad_rsp  umask=0x04  RSP on AD
    .bl_rsp  umask=0x08  RSP on BL
    .bl_wb   umask=0x10  WB on BL
    .bl_ncb  umask=0x20  NCB on BL
    .bl_ncs  umask=0x40  NCS on BL
unc_m3upi_rxc_inserts_vn0 (uncore interconnect; event=0x41)
  VN0 Ingress (from CMS) Queue - Inserts. Counts the number of allocations into the UPI Ingress. This tracks one of the three rings that are used by the UPI agent, and can be used in conjunction with the UPI Ingress Occupancy Accumulator event to calculate average queue latency. Multiple ingress buffers can be tracked at a given time using multiple counters. Message classes as described for unc_m3upi_rxc_cycles_ne_vn1 above.
    .ad_req  umask=0x01  REQ on AD
    .ad_snp  umask=0x02  SNP on AD
    .ad_rsp  umask=0x04  RSP on AD
    .bl_rsp  umask=0x08  RSP on BL
    .bl_wb   umask=0x10  WB on BL
    .bl_ncb  umask=0x20  NCB on BL
    .bl_ncs  umask=0x40  NCS on BL

unc_m3upi_rxc_inserts_vn1 (uncore interconnect; event=0x42)
  VN1 Ingress (from CMS) Queue - Inserts. Counts the number of allocations into the UPI VN1 Ingress; otherwise identical in meaning and umask layout to the VN0 event above.
    .ad_req  umask=0x01  REQ on AD
    .ad_snp  umask=0x02  SNP on AD
    .ad_rsp  umask=0x04  RSP on AD
    .bl_rsp  umask=0x08  RSP on BL
    .bl_wb   umask=0x10  WB on BL
    .bl_ncb  umask=0x20  NCB on BL
    .bl_ncs  umask=0x40  NCS on BL
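The inserts and occupancy descriptions here all point at the same recipe: an occupancy accumulator divided by an allocation count over the same interval gives average queuing latency (Little's law), and dividing by the cycles-not-empty count instead gives average depth while the queue is active. Stated once as formulas (this restatement is ours, not the event list's):

    \text{avg queue latency (uclk cycles)} \;=\; \frac{\sum_{\text{cycles}} \text{OCCUPANCY}}{\text{INSERTS}},
    \qquad
    \text{avg depth while active} \;=\; \frac{\sum_{\text{cycles}} \text{OCCUPANCY}}{\text{CYCLES\_NE}}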
unc_m3upi_rxc_occupancy_vn0 (uncore interconnect; event=0x45)
  VN0 Ingress (from CMS) Queue - Occupancy. Accumulates the occupancy of a given UPI VN0 Ingress queue in each cycle. This tracks one of the three ring Ingress buffers, and can be used with the Ingress Not Empty event to calculate average occupancy, or with the Ingress Allocations event to calculate average queuing latency. Message classes as described for unc_m3upi_rxc_cycles_ne_vn1 above.
    .ad_req  umask=0x01  REQ on AD
    .ad_snp  umask=0x02  SNP on AD
    .ad_rsp  umask=0x04  RSP on AD
    .bl_rsp  umask=0x08  RSP on BL
    .bl_wb   umask=0x10  WB on BL
    .bl_ncb  umask=0x20  NCB on BL
    .bl_ncs  umask=0x40  NCS on BL

unc_m3upi_rxc_occupancy_vn1 (uncore interconnect; event=0x46)
  VN1 Ingress (from CMS) Queue - Occupancy. Accumulates the occupancy of a given UPI VN1 Ingress queue in each cycle; otherwise identical in meaning and umask layout to the VN0 event above.
    .ad_req  umask=0x01  REQ on AD
    .ad_snp  umask=0x02  SNP on AD
    .ad_rsp  umask=0x04  RSP on AD
    .bl_rsp  umask=0x08  RSP on BL
    .bl_wb   umask=0x10  WB on BL
    .bl_ncb  umask=0x20  NCB on BL
    .bl_ncs  umask=0x40  NCS on BL
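Putting a matching inserts/occupancy pair together end to end, a hedged sketch in Python (the PMU name uncore_m3upi_0, the system-wide sleep 1 window, and the assumption that both counters count cleanly are all ours; perf stat -x writes its CSV records to stderr):

    import subprocess

    # Encodings from the tables above: VN0 REQ-on-AD inserts and occupancy.
    EVENTS = {
        "inserts":   "uncore_m3upi_0/event=0x41,umask=0x1/",  # rxc_inserts_vn0.ad_req
        "occupancy": "uncore_m3upi_0/event=0x45,umask=0x1/",  # rxc_occupancy_vn0.ad_req
    }

    res = subprocess.run(
        ["perf", "stat", "-x", ",", "-a", "-e", ",".join(EVENTS.values()), "sleep", "1"],
        capture_output=True, text=True, check=True,
    )
    # First CSV field of each stderr record is the raw count.
    counts = dict(zip(EVENTS, (float(line.split(",")[0])
                               for line in res.stderr.strip().splitlines())))

    if counts["inserts"]:
        # Little's law: accumulated occupancy / allocations = average cycles queued.
        print(f"avg VN0 AD REQ queue latency: "
              f"{counts['occupancy'] / counts['inserts']:.1f} uclk cycles")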
unc_m3upi_rxr_busy_starved (uncore interconnect; event=0xe5)
  Transgress Injection Starvation: counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time, in this case because a message from the other queue has higher priority.
    .ad_uncrd  umask=0x01  AD - Uncredited
    .bl_uncrd  umask=0x04  BL - Uncredited
    .ad_crd    umask=0x10  AD - Credited
    .ad_all    umask=0x11  AD - All (Credited + Uncredited)
    .bl_crd    umask=0x40  BL - Credited
    .bl_all    umask=0x44  BL - All (Credited + Uncredited)

unc_m3upi_rxr_bypass (uncore interconnect; event=0xe2)
  Transgress Ingress Bypass: number of packets bypassing the CMS Ingress.
    .ad_uncrd   umask=0x01  AD - Uncredited
    .ak         umask=0x02  AK
    .bl_uncrd   umask=0x04  BL - Uncredited
    .iv         umask=0x08  IV
    .ad_crd     umask=0x10  AD - Credited
    .ad_all     umask=0x11  AD - All (Credited + Uncredited)
    .bl_crd     umask=0x40  BL - Credited
    .bl_all     umask=0x44  BL - All (Credited + Uncredited)
    .akc_uncrd  umask=0x80  AKC - Uncredited

unc_m3upi_rxr_crd_starved (uncore interconnect; event=0xe3)
  Transgress Injection Starvation: counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time, in this case because the Ingress is unable to forward to the Egress due to a lack of credit.
    .ad_uncrd  umask=0x01  AD - Uncredited
    .ak        umask=0x02  AK
    .bl_uncrd  umask=0x04  BL - Uncredited
    .iv        umask=0x08  IV
    .ad_crd    umask=0x10  AD - Credited
    .ad_all    umask=0x11  AD - All (Credited + Uncredited)
    .bl_crd    umask=0x40  BL - Credited
    .bl_all    umask=0x44  BL - All (Credited + Uncredited)
    .ifv       umask=0x80  IFV - Credited

unc_m3upi_rxr_crd_starved_1 (uncore interconnect; event=0xe4)
  Transgress Injection Starvation: counts cycles under injection starvation mode, where the Ingress is unable to forward to the Egress due to a lack of credit.

unc_m3upi_rxr_inserts (uncore interconnect; event=0xe1)
  Transgress Ingress Allocations: number of allocations into the CMS Ingress. The Ingress is used to queue up requests received from the mesh.
    .ad_uncrd   umask=0x01  AD - Uncredited
    .ak         umask=0x02  AK
    .bl_uncrd   umask=0x04  BL - Uncredited
    .iv         umask=0x08  IV
    .ad_crd     umask=0x10  AD - Credited
    .ad_all     umask=0x11  AD - All (Credited + Uncredited)
    .bl_crd     umask=0x40  BL - Credited
    .bl_all     umask=0x44  BL - All (Credited + Uncredited)
    .akc_uncrd  umask=0x80  AKC - Uncredited
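The starvation and bypass groups above repeatedly note "All == Credited + Uncredited"; in the encodings this is literally bitwise, which a two-line check makes concrete:

    # umask values from the tables above (events 0xe1/0xe2/0xe3/0xe5).
    AD_UNCRD, AD_CRD, AD_ALL = 0x01, 0x10, 0x11
    BL_UNCRD, BL_CRD, BL_ALL = 0x04, 0x40, 0x44

    assert AD_ALL == AD_CRD | AD_UNCRD  # AD "All" is the OR of the two bits
    assert BL_ALL == BL_CRD | BL_UNCRD  # likewise for BL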
interconnectTransgress Ingress Occupancy : AKevent=0xe0,umask=201Transgress Ingress Occupancy : AK : Occupancy event for the Ingress buffers in the CMS  The Ingress is used to queue up requests received from the meshunc_m3upi_rxr_occupancy.akc_uncrduncore interconnectTransgress Ingress Occupancy : AKC - Uncreditedevent=0xe0,umask=0x8001Transgress Ingress Occupancy : AKC - Uncredited : Occupancy event for the Ingress buffers in the CMS  The Ingress is used to queue up requests received from the meshunc_m3upi_rxr_occupancy.bl_alluncore interconnectTransgress Ingress Occupancy : BL - Allevent=0xe0,umask=0x4401Transgress Ingress Occupancy : BL - All : Occupancy event for the Ingress buffers in the CMS  The Ingress is used to queue up requests received from the mesh : All == Credited + Uncreditedunc_m3upi_rxr_occupancy.bl_crduncore interconnectTransgress Ingress Occupancy : BL - Creditedevent=0xe0,umask=0x2001Transgress Ingress Occupancy : BL - Credited : Occupancy event for the Ingress buffers in the CMS  The Ingress is used to queue up requests received from the meshunc_m3upi_rxr_occupancy.bl_uncrduncore interconnectTransgress Ingress Occupancy : BL - Uncreditedevent=0xe0,umask=401Transgress Ingress Occupancy : BL - Uncredited : Occupancy event for the Ingress buffers in the CMS  The Ingress is used to queue up requests received from the meshunc_m3upi_rxr_occupancy.ivuncore interconnectTransgress Ingress Occupancy : IVevent=0xe0,umask=801Transgress Ingress Occupancy : IV : Occupancy event for the Ingress buffers in the CMS  The Ingress is used to queue up requests received from the meshunc_m3upi_stall0_no_txr_horz_crd_ad_ag0.tgr0uncore interconnectStall on No AD Agent0 Transgress Credits : For Transgress 0event=0xd0,umask=101Stall on No AD Agent0 Transgress Credits : For Transgress 0 : Number of cycles the AD Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgressunc_m3upi_stall0_no_txr_horz_crd_ad_ag0.tgr1uncore interconnectStall on No AD Agent0 Transgress Credits : For Transgress 1event=0xd0,umask=201Stall on No AD Agent0 Transgress Credits : For Transgress 1 : Number of cycles the AD Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgressunc_m3upi_stall0_no_txr_horz_crd_ad_ag0.tgr2uncore interconnectStall on No AD Agent0 Transgress Credits : For Transgress 2event=0xd0,umask=401Stall on No AD Agent0 Transgress Credits : For Transgress 2 : Number of cycles the AD Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgressunc_m3upi_stall0_no_txr_horz_crd_ad_ag0.tgr3uncore interconnectStall on No AD Agent0 Transgress Credits : For Transgress 3event=0xd0,umask=801Stall on No AD Agent0 Transgress Credits : For Transgress 3 : Number of cycles the AD Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgressunc_m3upi_stall0_no_txr_horz_crd_ad_ag0.tgr4uncore interconnectStall on No AD Agent0 Transgress Credits : For Transgress 4event=0xd0,umask=0x1001Stall on No AD Agent0 Transgress Credits : For Transgress 4 : Number of cycles the AD Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgressunc_m3upi_stall0_no_txr_horz_crd_ad_ag0.tgr5uncore interconnectStall on No AD Agent0 Transgress Credits : For Transgress 5event=0xd0,umask=0x2001Stall on No AD Agent0 Transgress Credits : For Transgress 5 : Number of cycles the AD Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per 
transgressunc_m3upi_stall0_no_txr_horz_crd_ad_ag0.tgr6uncore interconnectStall on No AD Agent0 Transgress Credits : For Transgress 6event=0xd0,umask=0x4001Stall on No AD Agent0 Transgress Credits : For Transgress 6 : Number of cycles the AD Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgressunc_m3upi_stall0_no_txr_horz_crd_ad_ag0.tgr7uncore interconnectStall on No AD Agent0 Transgress Credits : For Transgress 7event=0xd0,umask=0x8001Stall on No AD Agent0 Transgress Credits : For Transgress 7 : Number of cycles the AD Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgressunc_m3upi_stall0_no_txr_horz_crd_ad_ag1.tgr0uncore interconnectStall on No AD Agent1 Transgress Credits : For Transgress 0event=0xd2,umask=101Stall on No AD Agent1 Transgress Credits : For Transgress 0 : Number of cycles the AD Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgressunc_m3upi_stall0_no_txr_horz_crd_ad_ag1.tgr1uncore interconnectStall on No AD Agent1 Transgress Credits : For Transgress 1event=0xd2,umask=201Stall on No AD Agent1 Transgress Credits : For Transgress 1 : Number of cycles the AD Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgressunc_m3upi_stall0_no_txr_horz_crd_ad_ag1.tgr2uncore interconnectStall on No AD Agent1 Transgress Credits : For Transgress 2event=0xd2,umask=401Stall on No AD Agent1 Transgress Credits : For Transgress 2 : Number of cycles the AD Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgressunc_m3upi_stall0_no_txr_horz_crd_ad_ag1.tgr3uncore interconnectStall on No AD Agent1 Transgress Credits : For Transgress 3event=0xd2,umask=801Stall on No AD Agent1 Transgress Credits : For Transgress 3 : Number of cycles the AD Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgressunc_m3upi_stall0_no_txr_horz_crd_ad_ag1.tgr4uncore interconnectStall on No AD Agent1 Transgress Credits : For Transgress 4event=0xd2,umask=0x1001Stall on No AD Agent1 Transgress Credits : For Transgress 4 : Number of cycles the AD Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgressunc_m3upi_stall0_no_txr_horz_crd_ad_ag1.tgr5uncore interconnectStall on No AD Agent1 Transgress Credits : For Transgress 5event=0xd2,umask=0x2001Stall on No AD Agent1 Transgress Credits : For Transgress 5 : Number of cycles the AD Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgressunc_m3upi_stall0_no_txr_horz_crd_ad_ag1.tgr6uncore interconnectStall on No AD Agent1 Transgress Credits : For Transgress 6event=0xd2,umask=0x4001Stall on No AD Agent1 Transgress Credits : For Transgress 6 : Number of cycles the AD Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgressunc_m3upi_stall0_no_txr_horz_crd_ad_ag1.tgr7uncore interconnectStall on No AD Agent1 Transgress Credits : For Transgress 7event=0xd2,umask=0x8001Stall on No AD Agent1 Transgress Credits : For Transgress 7 : Number of cycles the AD Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgressunc_m3upi_stall0_no_txr_horz_crd_bl_ag0.tgr0uncore interconnectStall on No BL Agent0 Transgress Credits : For Transgress 0event=0xd4,umask=101Stall on No BL Agent0 Transgress Credits : For Transgress 0 : Number of cycles the BL Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per 
unc_m3upi_stall0_no_txr_horz_crd_bl_ag0.tgr{0..7} [uncore interconnect] -- Stall on No BL Agent0 Transgress Credits : For Transgress N
  event=0xd4,umask=(1 << N) for N = 0..7
  Number of cycles the BL Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.

unc_m3upi_stall0_no_txr_horz_crd_bl_ag1.tgr{0..7} [uncore interconnect] -- Stall on No BL Agent1 Transgress Credits : For Transgress N
  event=0xd6,umask=(1 << N) for N = 0..7
  Number of cycles the BL Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.

unc_m3upi_stall1_no_txr_horz_crd_ad_ag0.tgr{8..10} [uncore interconnect] -- Stall on No AD Agent0 Transgress Credits : For Transgress N
  event=0xd1,umask=(1 << (N - 8)) for N = 8..10
  Number of cycles the AD Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.

unc_m3upi_stall1_no_txr_horz_crd_ad_ag1_1.tgr{8..10} [uncore interconnect] -- Stall on No AD Agent1 Transgress Credits : For Transgress N
  event=0xd3,umask=(1 << (N - 8)) for N = 8..10
  Number of cycles the AD Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.

unc_m3upi_stall1_no_txr_horz_crd_bl_ag0_1.tgr{8..10} [uncore interconnect] -- Stall on No BL Agent0 Transgress Credits : For Transgress N
  event=0xd5,umask=(1 << (N - 8)) for N = 8..10
  Number of cycles the BL Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.
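Each transgress has its own umask bit, so a full stall breakdown needs one counter per transgress. A sketch of generating the corresponding perf-stat command line; "uncore_m3upi" is an assumed PMU name (the real name is whatever appears under /sys/bus/event_source/devices/ on the target):

    # Sketch: perf-stat event list for all eight transgresses of the
    # AD Agent 0 stall event (event 0xd0, one umask bit per transgress).
    events = ",".join(
        f"uncore_m3upi/event=0xd0,umask={1 << n:#x},name=stall_ad_ag0_tgr{n}/"
        for n in range(8)
    )
    print(f"perf stat -a -e '{events}' -- sleep 1")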
unc_m3upi_stall1_no_txr_horz_crd_bl_ag1_1.tgr{8..10} [uncore interconnect] -- Stall on No BL Agent1 Transgress Credits : For Transgress N
  event=0xd7,umask=(1 << (N - 8)) for N = 8..10
  Number of cycles the BL Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.

unc_m3upi_txr_horz_ads_used.* [uncore interconnect] -- CMS Horizontal ADS Used
  Number of packets using the Horizontal Anti-Deadlock Slot, broken down by ring type and CMS Agent. (All = Credited + Uncredited.)
    .ad_all     event=0xa6,umask=0x11   AD - All
    .ad_crd     event=0xa6,umask=0x10   AD - Credited
    .ad_uncrd   event=0xa6,umask=0x01   AD - Uncredited
    .bl_all     event=0xa6,umask=0x44   BL - All
    .bl_crd     event=0xa6,umask=0x40   BL - Credited
    .bl_uncrd   event=0xa6,umask=0x04   BL - Uncredited
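The All umasks are literally the OR of the Credited and Uncredited bits, so an All count should equal the sum of the two sub-events measured separately:

    # The "All" umasks are the OR of the Credited and Uncredited bits, so the
    # All count equals Credited + Uncredited when measured as two counters.
    AD_CRD, AD_UNCRD, AD_ALL = 0x10, 0x01, 0x11
    BL_CRD, BL_UNCRD, BL_ALL = 0x40, 0x04, 0x44
    assert AD_ALL == AD_CRD | AD_UNCRD
    assert BL_ALL == BL_CRD | BL_UNCRD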
unc_m3upi_txr_horz_bypass.* [uncore interconnect] -- CMS Horizontal Bypass Used
  Number of packets bypassing the Horizontal Egress, broken down by ring type and CMS Agent. (All = Credited + Uncredited.)
    .ad_all     event=0xa7,umask=0x11   AD - All
    .ad_crd     event=0xa7,umask=0x10   AD - Credited
    .ad_uncrd   event=0xa7,umask=0x01   AD - Uncredited
    .ak         event=0xa7,umask=0x02   AK
    .akc_uncrd  event=0xa7,umask=0x80   AKC - Uncredited
    .bl_all     event=0xa7,umask=0x44   BL - All
    .bl_crd     event=0xa7,umask=0x40   BL - Credited
    .bl_uncrd   event=0xa7,umask=0x04   BL - Uncredited
    .iv         event=0xa7,umask=0x08   IV
unc_m3upi_txr_horz_cycles_full.* [uncore interconnect] -- Cycles CMS Horizontal Egress Queue is Full
  Cycles the Transgress buffers in the Common Mesh Stop are Full. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh. (All = Credited + Uncredited.)
    .ad_all     event=0xa2,umask=0x11   AD - All
    .ad_crd     event=0xa2,umask=0x10   AD - Credited
    .ad_uncrd   event=0xa2,umask=0x01   AD - Uncredited
    .ak         event=0xa2,umask=0x02   AK
    .akc_uncrd  event=0xa2,umask=0x80   AKC - Uncredited
    .bl_all     event=0xa2,umask=0x44   BL - All
    .bl_crd     event=0xa2,umask=0x40   BL - Credited
    .bl_uncrd   event=0xa2,umask=0x04   BL - Uncredited
    .iv         event=0xa2,umask=0x08   IV

unc_m3upi_txr_horz_cycles_ne.* [uncore interconnect] -- Cycles CMS Horizontal Egress Queue is Not Empty
  Cycles the Transgress buffers in the Common Mesh Stop are Not-Empty. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh. (All = Credited + Uncredited.)
    .ad_all     event=0xa3,umask=0x11   AD - All
    .ad_crd     event=0xa3,umask=0x10   AD - Credited
    .ad_uncrd   event=0xa3,umask=0x01   AD - Uncredited
    .ak         event=0xa3,umask=0x02   AK
    .akc_uncrd  event=0xa3,umask=0x80   AKC - Uncredited
    .bl_all     event=0xa3,umask=0x44   BL - All
    .bl_crd     event=0xa3,umask=0x40   BL - Credited
    .bl_uncrd   event=0xa3,umask=0x04   BL - Uncredited
    .iv         event=0xa3,umask=0x08   IV
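The _cycles_full and _cycles_ne events are duration counts, so dividing by the uncore clocktick count over the same window gives the fraction of time the queue was full or non-empty. A sketch with made-up counter values:

    # Sketch with assumed counter values: turn the duration events into
    # fractions of time by dividing by uncore clockticks for the same window.
    clockticks  = 1_200_000_000   # uncore cycles in the measurement window
    cycles_full = 18_000_000      # unc_m3upi_txr_horz_cycles_full.bl_all
    cycles_ne   = 600_000_000     # unc_m3upi_txr_horz_cycles_ne.bl_all
    print(f"BL egress full {100 * cycles_full / clockticks:.1f}% of cycles, "
          f"non-empty {100 * cycles_ne / clockticks:.1f}%")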
unc_m3upi_txr_horz_inserts.* [uncore interconnect] -- CMS Horizontal Egress Inserts
  Number of allocations into the Transgress buffers in the Common Mesh Stop. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh. (All = Credited + Uncredited.)
    .ad_all     event=0xa1,umask=0x11   AD - All
    .ad_crd     event=0xa1,umask=0x10   AD - Credited
    .ad_uncrd   event=0xa1,umask=0x01   AD - Uncredited
    .ak         event=0xa1,umask=0x02   AK
    .akc_uncrd  event=0xa1,umask=0x80   AKC - Uncredited
    .bl_all     event=0xa1,umask=0x44   BL - All
    .bl_crd     event=0xa1,umask=0x40   BL - Credited
    .bl_uncrd   event=0xa1,umask=0x04   BL - Uncredited
    .iv         event=0xa1,umask=0x08   IV
unc_m3upi_txr_horz_nack.* [uncore interconnect] -- CMS Horizontal Egress NACKs
  Counts the number of Egress packets NACKed onto the Horizontal Ring. (All = Credited + Uncredited.)
    .ad_all     event=0xa4,umask=0x11   AD - All
    .ad_crd     event=0xa4,umask=0x10   AD - Credited
    .ad_uncrd   event=0xa4,umask=0x01   AD - Uncredited
    .ak         event=0xa4,umask=0x02   AK
    .akc_uncrd  event=0xa4,umask=0x80   AKC - Uncredited
    .bl_all     event=0xa4,umask=0x44   BL - All
    .bl_crd     event=0xa4,umask=0x40   BL - Credited
    .bl_uncrd   event=0xa4,umask=0x04   BL - Uncredited
    .iv         event=0xa4,umask=0x08   IV
unc_m3upi_txr_horz_occupancy.* [uncore interconnect] -- CMS Horizontal Egress Occupancy
  Occupancy event for the Transgress buffers in the Common Mesh Stop. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh. (All = Credited + Uncredited.)
    .ad_all     event=0xa0,umask=0x11   AD - All
    .ad_crd     event=0xa0,umask=0x10   AD - Credited
    .ad_uncrd   event=0xa0,umask=0x01   AD - Uncredited
    .ak         event=0xa0,umask=0x02   AK
    .akc_uncrd  event=0xa0,umask=0x80   AKC - Uncredited
    .bl_all     event=0xa0,umask=0x44   BL - All
    .bl_crd     event=0xa0,umask=0x40   BL - Credited
    .bl_uncrd   event=0xa0,umask=0x04   BL - Uncredited
    .iv         event=0xa0,umask=0x08   IV
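The occupancy events accumulate queue depth every cycle, so together with the matching inserts and clocktick counts they give average depth and, via Little's law, average residency. A sketch with assumed counter values:

    # Sketch (Little's law), with assumed counter values for one window:
    clockticks = 1_200_000_000   # uncore cycles in the window
    occupancy  = 240_000_000     # unc_m3upi_txr_horz_occupancy.ad_all
    inserts    = 60_000_000      # unc_m3upi_txr_horz_inserts.ad_all
    print(f"avg queue depth {occupancy / clockticks:.2f} entries; "
          f"avg residency {occupancy / inserts:.1f} cycles per packet")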
unc_m3upi_txr_horz_starved.* [uncore interconnect] -- CMS Horizontal Egress Injection Starvation
  Counts injection starvation, triggered when the CMS Transgress buffer cannot send a transaction onto the Horizontal ring for a long period of time. (All = Credited + Uncredited.)
    .ad_all     event=0xa5,umask=0x01   AD - All
    .ad_uncrd   event=0xa5,umask=0x01   AD - Uncredited
    .ak         event=0xa5,umask=0x02   AK
    .akc_uncrd  event=0xa5,umask=0x80   AKC - Uncredited
    .bl_all     event=0xa5,umask=0x04   BL - All
    .bl_uncrd   event=0xa5,umask=0x04   BL - Uncredited
    .iv         event=0xa5,umask=0x08   IV

unc_m3upi_txr_vert_ads_used.* [uncore interconnect] -- CMS Vertical ADS Used
  Number of packets using the Vertical Anti-Deadlock Slot, broken down by ring type and CMS Agent.
    .ad_ag0     event=0x9c,umask=0x01   AD - Agent 0
    .ad_ag1     event=0x9c,umask=0x10   AD - Agent 1
    .bl_ag0     event=0x9c,umask=0x04   BL - Agent 0
    .bl_ag1     event=0x9c,umask=0x40   BL - Agent 1
unc_m3upi_txr_vert_bypass.* [uncore interconnect] -- CMS Vertical Bypass Used
  Number of packets bypassing the Vertical Egress, broken down by ring type and CMS Agent.
    .ad_ag0     event=0x9d,umask=0x01   AD - Agent 0
    .ad_ag1     event=0x9d,umask=0x10   AD - Agent 1
    .ak_ag0     event=0x9d,umask=0x02   AK - Agent 0
    .ak_ag1     event=0x9d,umask=0x20   AK - Agent 1
    .bl_ag0     event=0x9d,umask=0x04   BL - Agent 0
    .bl_ag1     event=0x9d,umask=0x40   BL - Agent 1
    .iv_ag1     event=0x9d,umask=0x08   IV - Agent 1

unc_m3upi_txr_vert_bypass_1.* [uncore interconnect] -- CMS Vertical Bypass Used
  Number of packets bypassing the Vertical Egress, broken down by ring type and CMS Agent.
    .akc_ag0    event=0x9e,umask=0x01   AKC - Agent 0
    .akc_ag1    event=0x9e,umask=0x02   AKC - Agent 1
unc_m3upi_txr_vert_cycles_full0.* [uncore interconnect] -- Cycles CMS Vertical Egress Queue Is Full
  Number of cycles the Common Mesh Stop Egress was Full. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh.
    .ad_ag0     event=0x94,umask=0x01   AD - Agent 0 (e.g. outbound requests, snoop requests, and snoop responses)
    .ad_ag1     event=0x94,umask=0x10   AD - Agent 1 (commonly outbound requests)
    .ak_ag0     event=0x94,umask=0x02   AK - Agent 0 (commonly credit returns and GO responses)
    .ak_ag1     event=0x94,umask=0x20   AK - Agent 1
    .bl_ag0     event=0x94,umask=0x04   BL - Agent 0 (commonly data sent from the cache to various destinations)
    .bl_ag1     event=0x94,umask=0x40   BL - Agent 1 (commonly writeback data to the cache)
    .iv_ag0     event=0x94,umask=0x08   IV - Agent 0 (commonly snoops to the cores)

unc_m3upi_txr_vert_cycles_full1.* [uncore interconnect] -- Cycles CMS Vertical Egress Queue Is Full
  Number of cycles the Common Mesh Stop Egress was Full. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh.
    .akc_ag0    event=0x95,umask=0x01   AKC - Agent 0
    .akc_ag1    event=0x95,umask=0x02   AKC - Agent 1
unc_m3upi_txr_vert_cycles_ne0.* [uncore interconnect] -- Cycles CMS Vertical Egress Queue Is Not Empty
  Number of cycles the Common Mesh Stop Egress was Not Empty. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh.
    .ad_ag0     event=0x96,umask=0x01   AD - Agent 0 (e.g. outbound requests, snoop requests, and snoop responses)
    .ad_ag1     event=0x96,umask=0x10   AD - Agent 1 (commonly outbound requests)
    .ak_ag0     event=0x96,umask=0x02   AK - Agent 0 (commonly credit returns and GO responses)
    .ak_ag1     event=0x96,umask=0x20   AK - Agent 1
    .bl_ag0     event=0x96,umask=0x04   BL - Agent 0 (commonly data sent from the cache to various destinations)
    .bl_ag1     event=0x96,umask=0x40   BL - Agent 1 (commonly writeback data to the cache)
    .iv_ag0     event=0x96,umask=0x08   IV - Agent 0 (commonly snoops to the cores)

unc_m3upi_txr_vert_cycles_ne1.* [uncore interconnect] -- Cycles CMS Vertical Egress Queue Is Not Empty
  Number of cycles the Common Mesh Stop Egress was Not Empty. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh.
    .akc_ag0    event=0x97,umask=0x01   AKC - Agent 0
    .akc_ag1    event=0x97,umask=0x02   AKC - Agent 1
unc_m3upi_txr_vert_inserts0.* [uncore interconnect] -- CMS Vert Egress Allocations
  Number of allocations into the Common Mesh Stop Egress. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh.
    .ad_ag0     event=0x92,umask=0x01   AD - Agent 0 (e.g. outbound requests, snoop requests, and snoop responses)
    .ad_ag1     event=0x92,umask=0x10   AD - Agent 1 (commonly outbound requests)
    .ak_ag0     event=0x92,umask=0x02   AK - Agent 0 (commonly credit returns and GO responses)
    .ak_ag1     event=0x92,umask=0x20   AK - Agent 1
    .bl_ag0     event=0x92,umask=0x04   BL - Agent 0 (commonly data sent from the cache to various destinations)
    .bl_ag1     event=0x92,umask=0x40   BL - Agent 1 (commonly writeback data to the cache)
    .iv_ag0     event=0x92,umask=0x08   IV - Agent 0 (commonly snoops to the cores)

unc_m3upi_txr_vert_inserts1.* [uncore interconnect] -- CMS Vert Egress Allocations
  Number of allocations into the Common Mesh Stop Egress. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh.
    .akc_ag0    event=0x93,umask=0x01   AKC - Agent 0
    .akc_ag1    event=0x93,umask=0x02   AKC - Agent 1
unc_m3upi_txr_vert_nack0.* [uncore interconnect] -- CMS Vertical Egress NACKs
  Counts the number of Egress packets NACKed onto the Vertical Ring.
    .ad_ag0     event=0x98,umask=0x01   AD - Agent 0
    .ad_ag1     event=0x98,umask=0x10   AD - Agent 1
    .ak_ag0     event=0x98,umask=0x02   AK - Agent 0
    .ak_ag1     event=0x98,umask=0x20   AK - Agent 1
    .bl_ag0     event=0x98,umask=0x04   BL - Agent 0
    .bl_ag1     event=0x98,umask=0x40   BL - Agent 1
    .iv_ag0     event=0x98,umask=0x08   IV

unc_m3upi_txr_vert_nack1.* [uncore interconnect] -- CMS Vertical Egress NACKs
  Counts the number of Egress packets NACKed onto the Vertical Ring.
    .akc_ag0    event=0x99,umask=0x01   AKC - Agent 0
    .akc_ag1    event=0x99,umask=0x02   AKC - Agent 1
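A NACKed packet has to retry injection, so the interesting derived number is usually the NACK rate relative to allocations on the same ring and agent. A sketch with assumed counts:

    # Sketch with assumed counts: NACKs per allocation on the vertical AD ring
    # for Agent 0 (unc_m3upi_txr_vert_nack0.ad_ag0 over
    # unc_m3upi_txr_vert_inserts0.ad_ag0).
    nacks   = 1_500_000
    inserts = 60_000_000
    print(f"AD Agent 0 vertical NACK rate: {100 * nacks / inserts:.2f}% of inserts")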
unc_m3upi_txr_vert_occupancy0.* [uncore interconnect] -- CMS Vert Egress Occupancy
  Occupancy event for the Egress buffers in the Common Mesh Stop. The egress is used to queue up requests destined for the Vertical Ring on the Mesh.
    .ad_ag0     event=0x90,umask=0x01   AD - Agent 0 (e.g. outbound requests, snoop requests, and snoop responses)
    .ad_ag1     event=0x90,umask=0x10   AD - Agent 1 (commonly outbound requests)
    .ak_ag0     event=0x90,umask=0x02   AK - Agent 0 (commonly credit returns and GO responses)
    .ak_ag1     event=0x90,umask=0x20   AK - Agent 1
    .bl_ag0     event=0x90,umask=0x04   BL - Agent 0 (commonly data sent from the cache to various destinations)
    .bl_ag1     event=0x90,umask=0x40   BL - Agent 1 (commonly writeback data to the cache)
    .iv_ag0     event=0x90,umask=0x08   IV - Agent 0 (commonly snoops to the cores)

unc_m3upi_txr_vert_occupancy1.* [uncore interconnect] -- CMS Vert Egress Occupancy
  Occupancy event for the Egress buffers in the Common Mesh Stop. The egress is used to queue up requests destined for the Vertical Ring on the Mesh.
    .akc_ag0    event=0x91,umask=0x01   AKC - Agent 0
    .akc_ag1    event=0x91,umask=0x02   AKC - Agent 1
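These counters can also be opened directly with perf_event_open(2), bypassing the perf tool. A sketch via ctypes: the sysfs path and PMU instance name are assumptions about the target platform, 298 is the Linux x86_64 syscall number, and uncore events must be opened system-wide (pid=-1, one CPU per box) with sufficient privileges (root or a permissive perf_event_paranoid):

    import ctypes, os, struct, time

    # PMU type for the first M3UPI box; the path is an assumption.
    PMU_TYPE = int(open(
        "/sys/bus/event_source/devices/uncore_m3upi_0/type").read())

    # unc_m3upi_txr_vert_occupancy0.ad_ag0: event=0x90, umask=0x01,
    # composed per the usual Intel uncore layout (umask in bits 8-15).
    config = (0x01 << 8) | 0x90

    # Zeroed 128-byte perf_event_attr; only type, size and config are filled
    # in, so the counter starts enabled with default settings.
    attr = bytearray(128)
    struct.pack_into("IIQ", attr, 0, PMU_TYPE, len(attr), config)

    libc = ctypes.CDLL(None, use_errno=True)
    cattr = (ctypes.c_char * len(attr)).from_buffer(attr)
    # perf_event_open(attr, pid, cpu, group_fd, flags)
    fd = libc.syscall(298, ctypes.byref(cattr), -1, 0, -1, 0)
    if fd < 0:
        err = ctypes.get_errno()
        raise OSError(err, os.strerror(err))

    time.sleep(1)                                   # let the counter run
    print("count:", struct.unpack("Q", os.read(fd, 8))[0])
    os.close(fd)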
unc_m3upi_txr_vert_starved0.* [uncore interconnect] -- CMS Vertical Egress Injection Starvation
  Counts injection starvation, triggered when the CMS Egress cannot send a transaction onto the Vertical ring for a long period of time.
    .ad_ag0     event=0x9a,umask=0x01   AD - Agent 0
    .ad_ag1     event=0x9a,umask=0x10   AD - Agent 1
    .ak_ag0     event=0x9a,umask=0x02   AK - Agent 0
    .ak_ag1     event=0x9a,umask=0x20   AK - Agent 1
    .bl_ag0     event=0x9a,umask=0x04   BL - Agent 0
    .bl_ag1     event=0x9a,umask=0x40   BL - Agent 1
    .iv_ag0     event=0x9a,umask=0x08   IV

unc_m3upi_txr_vert_starved1.* [uncore interconnect] -- CMS Vertical Egress Injection Starvation
  Counts injection starvation, triggered when the CMS Egress cannot send a transaction onto the Vertical ring for a long period of time.
    .akc_ag0    event=0x9b,umask=0x01   AKC - Agent 0
    .akc_ag1    event=0x9b,umask=0x02   AKC - Agent 1
    .tgc        event=0x9b,umask=0x04   AKC - Agent 0
unc_m3upi_vert_ring_ad_in_use.* [uncore interconnect] -- Vertical AD Ring In Use
  Counts the number of cycles that the Vertical AD ring is being used at this ring stop. This includes cycles when packets are passing by and when packets are being sunk, but not cycles when packets are being sent from the ring stop. There are really two rings: a clockwise ring and a counter-clockwise ring. On the left side of the ring the UP direction is on the clockwise ring and DN is on the counter-clockwise ring; on the right side this is reversed. The first half of the CBos are on the left side of the ring and the second half are on the right side, so (for example) in a 4-core part, CBo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.
    .dn_even    event=0xb0,umask=0x04   Down and Even
    .dn_odd     event=0xb0,umask=0x08   Down and Odd
    .up_even    event=0xb0,umask=0x01   Up and Even
    .up_odd     event=0xb0,umask=0x02   Up and Odd
unc_m3upi_vert_ring_akc_in_use (uncore interconnect) -- Vertical AKC Ring In Use
  Counts the number of cycles that the Vertical AKC ring is being used at
  this ring stop, under the same pass-by/sink accounting and two-ring
  topology described for the AD ring above.
  .up_even  Up and Even    event=0xb4,umask=0x1
  .up_odd   Up and Odd     event=0xb4,umask=0x2
  .dn_even  Down and Even  event=0xb4,umask=0x4
  .dn_odd   Down and Odd   event=0xb4,umask=0x8

unc_m3upi_vert_ring_ak_in_use (uncore interconnect) -- Vertical AK Ring In Use
  Counts the number of cycles that the Vertical AK ring is being used at this
  ring stop; same accounting and topology as above.
  .up_even  Up and Even    event=0xb1,umask=0x1
  .up_odd   Up and Odd     event=0xb1,umask=0x2
  .dn_even  Down and Even  event=0xb1,umask=0x4
  .dn_odd   Down and Odd   event=0xb1,umask=0x8

unc_m3upi_vert_ring_bl_in_use (uncore interconnect) -- Vertical BL Ring in Use
  Counts the number of cycles that the Vertical BL ring is being used at this
  ring stop; same accounting and topology as above.
  .up_even  Up and Even    event=0xb2,umask=0x1
  .up_odd   Up and Odd     event=0xb2,umask=0x2
  .dn_even  Down and Even  event=0xb2,umask=0x4
  .dn_odd   Down and Odd   event=0xb2,umask=0x8
unc_m3upi_vert_ring_iv_in_use (uncore interconnect) -- Vertical IV Ring in Use
  Counts the number of cycles that the Vertical IV ring is being used at this
  ring stop (pass-by and sink cycles, not send cycles).  There is only one IV
  ring; to monitor the Even ring, select both UP_EVEN and DN_EVEN, and to
  monitor the Odd ring, select both UP_ODD and DN_ODD.
  .up  Up    event=0xb3,umask=0x1
  .dn  Down  event=0xb3,umask=0x4

unc_m3upi_vert_ring_tgc_in_use (uncore interconnect) -- Vertical TGC Ring In Use
  Counts the number of cycles that the Vertical TGC ring is being used at
  this ring stop; same accounting and topology as above.
  .up_even  Up and Even    event=0xb5,umask=0x1
  .up_odd   Up and Odd     event=0xb5,umask=0x2
  .dn_even  Down and Even  event=0xb5,umask=0x4
  .dn_odd   Down and Odd   event=0xb5,umask=0x8

unc_m3upi_xpt_pftch.lost_qfull (uncore interconnect) -- UNC_M3UPI_XPT_PFTCH.LOST_QFULL
  event=0x61,umask=0x20
  An XPT prefetch message was dropped because it was overwritten by a new
  message while the prefetch queue was full.

unc_upi_clockticks (uncore interconnect) -- Number of kfclks
  event=0x1
  Counts the number of clocks in the UPI LL.  This clock runs at 1/8th the
  GT/s speed of the UPI link; for example, an 8 GT/s link has a qfclk of
  1 GHz.  Current products do not support dynamic link speeds, so this
  frequency is fixed.
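A worked example of the clocktick-to-time conversion described above; the 16 GT/s link rate and the sampled counter value are illustrative assumptions:

# The UPI link-layer clock (kfclk/qfclk) runs at 1/8th of the link's GT/s
# rate, so clocktick counts convert directly to elapsed time on the link.
link_gts = 16e9            # link rate in transfers/second (assumed)
kfclk_hz = link_gts / 8    # e.g. 8 GT/s -> 1 GHz, 16 GT/s -> 2 GHz

clockticks = 4_000_000_000 # hypothetical UNC_UPI_CLOCKTICKS delta
elapsed_s = clockticks / kfclk_hz
print(f"kfclk = {kfclk_hz/1e9:.1f} GHz, sample window = {elapsed_s:.2f} s")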
unc_upi_rxl_basic_hdr_match (uncore interconnect) -- Matches on Receive path of a UPI Port
  Matches on the Receive path of a UPI port, based on UMask-specific bits:
  Z: Message Class (3-bit), Y: Message Class Enable, W: Opcode (4-bit),
  V: Opcode Enable, U: Local Enable, T: Remote Enable, S: Data Hdr Enable,
  R: Non-Data Hdr Enable, Q: Dual Slot Hdr Enable, P: Single Slot Hdr Enable.
  Link Layer control types (LL CTRL, slot NULL, LLCRD) are excluded even
  under specific opcode match_en cases.  Note: if Message Class is disabled,
  opcode is expected to be disabled as well.
  .req             Request                           event=0x5,umask=0x8
  .req_opc         Request, Match Opcode             event=0x5,umask=0x108
  .rspcnflt        Response - Conflict               event=0x5,umask=0x1aa
  .rspi            Response - Invalid                event=0x5,umask=0x12a
  .rsp_data        Response - Data                   event=0x5,umask=0xc
  .rsp_data_opc    Response - Data, Match Opcode     event=0x5,umask=0x10c
  .rsp_nodata      Response - No Data                event=0x5,umask=0xa
  .rsp_nodata_opc  Response - No Data, Match Opcode  event=0x5,umask=0x10a
  .snp             Snoop                             event=0x5,umask=0x9
  .snp_opc         Snoop, Match Opcode               event=0x5,umask=0x109
  .wb              Writeback                         event=0x5,umask=0xd
  .wb_opc          Writeback, Match Opcode           event=0x5,umask=0x10d
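The encodings above show a regular structure: every "Match Opcode" variant is its base umask with bit 0x100 set (req 0x8 becomes req_opc 0x108, snp 0x9 becomes snp_opc 0x109, and so on). A minimal sketch under that inferred assumption; the listing names the enable bits but does not document their positions:

# Treating 0x100 as the opcode-enable bit is inferred from the umask pairs
# above, not from a documented bit layout.
OPCODE_EN = 0x100

BASE = {"req": 0x8, "snp": 0x9, "rsp_nodata": 0xa, "rsp_data": 0xc, "wb": 0xd}

def hdr_match_umask(msg_class: str, match_opcode: bool = False) -> int:
    """Compose a basic_hdr_match umask from a base message-class pattern."""
    umask = BASE[msg_class]
    if match_opcode:
        umask |= OPCODE_EN
    return umask

assert hdr_match_umask("req", True) == 0x108
assert hdr_match_umask("wb", True) == 0x10d
print(f"event=0x5,umask={hdr_match_umask('snp', True):#x}")  # receive path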
unc_upi_rxl_crc_llr_req_transmit (uncore interconnect) -- LLR Requests Sent
  event=0x8
  Number of LLR Requests transmitted.  This should generally be <= the
  number of CRC errors detected: if multiple errors are detected before the
  Rx side receives an LLC_REQ_ACK from the Tx side, there is no need to send
  more LLR_REQ_NACKs.

unc_upi_rxl_credits_consumed_vna (uncore interconnect) -- VNA Credit Consumed
  event=0x38
  Counts the number of times an RxQ VNA credit was consumed (i.e. a message
  uses a VNA credit for the Rx Buffer).  This includes packets that went
  through the RxQ and those that were bypassed.

unc_upi_rxl_flits (uncore interconnect) -- Valid Flits Received
  Shows legal flit time (hides the impact of L0p and L0c).
  .all_null  Null FLITs received from any slot  event=0x3,umask=0x27
  .idle      Idle FLITs received                event=0x3,umask=0x47

unc_upi_txl_basic_hdr_match (uncore interconnect) -- Matches on Transmit path of a UPI Port
  Matches on the Transmit path of a UPI port; same UMask bit definitions,
  exclusions, and Message Class note as the Receive-path event above.
  .req             Request                           event=0x4,umask=0x8
  .req_opc         Request, Match Opcode             event=0x4,umask=0x108
  .rspcnflt        Response - Conflict               event=0x4,umask=0x1aa
  .rspi            Response - Invalid                event=0x4,umask=0x12a
  .rsp_data        Response - Data                   event=0x4,umask=0xc
  .rsp_data_opc    Response - Data, Match Opcode     event=0x4,umask=0x10c
  .rsp_nodata      Response - No Data                event=0x4,umask=0xa
  .rsp_nodata_opc  Response - No Data, Match Opcode  event=0x4,umask=0x10a
  .snp             Snoop                             event=0x4,umask=0x9
  .snp_opc         Snoop, Match Opcode               event=0x4,umask=0x109
  .wb              Writeback                         event=0x4,umask=0xd
  .wb_opc          Writeback, Match Opcode           event=0x4,umask=0x10d

unc_upi_txl_flits (uncore interconnect) -- Valid Flits Sent
  Shows legal flit time (hides the impact of L0p and L0c).
  .all_data  All Data                            event=0x2,umask=0xf
  .all_null  Null FLITs transmitted to any slot  event=0x2,umask=0x27
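A hedged sketch of estimating UPI transmit data bandwidth from the txl_flits.all_data count above. The 64/9 bytes-per-flit factor follows the convention used by published uncore metrics for this event family (nine flits carry a 64-byte cache line); treat both the factor and the sample counter values as assumptions rather than something stated in this listing:

# Estimate UPI transmit data bandwidth from UNC_UPI_TXL_FLITS.ALL_DATA.
BYTES_PER_DATA_FLIT = 64 / 9.0  # assumed convention: 9 flits per 64B line

def upi_tx_bandwidth_gbs(all_data_flits: int, window_s: float) -> float:
    """Convert an ALL_DATA flit count over a time window to GB/s."""
    return all_data_flits * BYTES_PER_DATA_FLIT / window_s / 1e9

print(f"{upi_tx_bandwidth_gbs(2_500_000_000, 1.0):.2f} GB/s")  # sample numbers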
unc_u_lock_cycles (uncore interconnect) -- IDI Lock/SplitLock Cycles
  event=0x44
  Number of times an IDI Lock/SplitLock sequence was started.

unc_iio_clockticks (uncore io) -- Clockticks of the integrated IO (IIO) traffic controller
  event=0x1
  Increments once every Traffic Controller clock, the LSCLK (500 MHz).

unc_iio_clockticks_freerun (uncore io) -- Free running counter that increments for IIO clockticks
  event=0xff,umask=0x10
  Free running counter that increments for integrated IO (IIO) traffic
  controller clockticks.

unc_iio_comp_buf_inserts.cmpd (uncore io) -- PCIe Completion Buffer Inserts of completions with data
  event=0xc2, fc_mask=0x4, umask=0x3; the part is selected by ch_mask.  For
  each part N: x16 card plugged in to Lane 0/1/2/3, or x8 card plugged in to
  Lane 0/1, or x4 card plugged in to slot N.
  .all        All Ports  ch_mask=0xff
  .all_parts  Part 0-7   ch_mask=0xff
  .part0      Part 0     ch_mask=0x1
  .part1      Part 1     ch_mask=0x2
  .part2      Part 2     ch_mask=0x4
  .part3      Part 3     ch_mask=0x8
  .part4      Part 4     ch_mask=0x10
  .part5      Part 5     ch_mask=0x20
  .part6      Part 6     ch_mask=0x40
  .part7      Part 7     ch_mask=0x80

unc_iio_comp_buf_occupancy.cmpd (uncore io) -- PCIe Completion Buffer Occupancy of completions with data
  event=0xd5, fc_mask=0x4; here the part is selected by umask rather than
  ch_mask.  Same per-part lane/slot mapping as the inserts event above.
  .all        Part 0-7  umask=0xff
  .all_parts  Part 0-7  umask=0xff
  .part0      Part 0    umask=0x1
  .part1      Part 1    umask=0x2
  .part2      Part 2    umask=0x4
  .part3      Part 3    umask=0x8
  .part4      Part 4    umask=0x10
  .part5      Part 5    umask=0x20
  .part6      Part 6    umask=0x40
  .part7      Part 7    umask=0x80
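The inserts/occupancy pair above supports a Little's-law style estimate of how long, on average, a completion-with-data sits in the PCIe completion buffer: occupancy accumulates entries per cycle, so occupancy divided by inserts gives average residency in IIO clocks, and the 500 MHz LSCLK (from unc_iio_clockticks above) converts that to time. A sketch with hypothetical counter values:

# Average completion-buffer residency from occupancy and insert counts.
LSCLK_HZ = 500e6  # IIO traffic controller clock, per the clockticks event

def avg_completion_latency_ns(occupancy: int, inserts: int) -> float:
    """Little's law: average cycles in buffer = occupancy / inserts."""
    avg_cycles = occupancy / inserts
    return avg_cycles / LSCLK_HZ * 1e9

print(f"{avg_completion_latency_ns(90_000_000, 1_200_000):.0f} ns")  # sample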
unc_iio_data_req_by_cpu.* (uncore io) -- Data requested by the CPU
  Every variant below counts the number of DWs (4 bytes) requested by the
  main die and includes all requests initiated by the main die, both reads
  and writes (event=0xc0, fc_mask=0x7).  The channel is selected by ch_mask:
    part0   ch_mask=0x1    x16 card plugged in to Lane 0/1/2/3, or x8 card
                           plugged in to Lane 0/1, or x4 card plugged in to slot 0
    part1   ch_mask=0x2    x4 card plugged in to slot 1
    part2   ch_mask=0x4    x8 card plugged in to Lane 2/3, or x4 card plugged
                           in to slot 2
    part3   ch_mask=0x8    x4 card plugged in to slot 3
    part4   ch_mask=0x10   x16 card plugged in to Lane 4/5/6/7, or x8 card
                           plugged in to Lane 4/5, or x4 card plugged in to slot 4
    part5   ch_mask=0x20   x4 card plugged in to slot 5
    part6   ch_mask=0x40   x8 card plugged in to Lane 6/7, or x4 card plugged
                           in to slot 6
    part7   ch_mask=0x80   x4 card plugged in to slot 7
    iommu0  ch_mask=0x100  IOMMU - Type 0
    iommu1  ch_mask=0x200  IOMMU - Type 1

unc_iio_data_req_by_cpu.cfg_read.{iommu0,iommu1,part0-part7} (uncore io)
  Core reading from Card's PCICFG space.  umask=0x40

unc_iio_data_req_by_cpu.cfg_write.{iommu0,iommu1,part0-part7} (uncore io)
  Core writing to Card's PCICFG space.  umask=0x10

unc_iio_data_req_by_cpu.io_read.{iommu0,iommu1,part0-part7} (uncore io)
  Core reading from Card's IO space.  umask=0x80
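The ch_mask values in the table above follow a one-hot pattern: parts 0-7 use bits 0x1 through 0x80 (ch_mask = 1 << part) and the two IOMMU channels use 0x100 and 0x200. A small helper built on that pattern; the event-string format targets perf-style tooling and is an assumption, while the bit assignments come from the listing:

# Map a channel name to its ch_mask bit and compose an event specifier.
def ch_mask(channel: str) -> int:
    if channel.startswith("part"):
        return 1 << int(channel[4:])          # part0 -> 0x1 ... part7 -> 0x80
    return {"iommu0": 0x100, "iommu1": 0x200}[channel]

def io_read_event(channel: str) -> str:
    """Spec for unc_iio_data_req_by_cpu.io_read on one channel (umask=0x80)."""
    return f"event=0xc0,ch_mask={ch_mask(channel):#x},fc_mask=0x7,umask=0x80"

assert ch_mask("part6") == 0x40
print(io_read_event("iommu1"))  # event=0xc0,ch_mask=0x200,fc_mask=0x7,umask=0x80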
Includes all requests initiated by the main die, including reads and writes. : x16 card plugged in to Lane 0/1/2/3, or x8 card plugged in to Lane 0/1, or x4 card plugged in to slot 0 (unc_iio_data_req_by_cpu.io_write.part0, ch_mask=1)

Every unc_iio event below takes a ch_mask qualifier that selects which port or agent of the IIO stack is observed. The mapping is common to all of them:

  ch_mask=0x001  .part0   x16 card in Lane 0/1/2/3, x8 card in Lane 0/1, or x4 card in slot 0
  ch_mask=0x002  .part1   x4 card in slot 1
  ch_mask=0x004  .part2   x8 card in Lane 2/3, or x4 card in slot 2
  ch_mask=0x008  .part3   x4 card in slot 3
  ch_mask=0x010  .part4   x16 card in Lane 4/5/6/7, x8 card in Lane 4/5, or x4 card in slot 4
  ch_mask=0x020  .part5   x4 card in slot 5
  ch_mask=0x040  .part6   x8 card in Lane 6/7, or x4 card in slot 6
  ch_mask=0x080  .part7   x4 card in slot 7
  ch_mask=0x100  .iommu0  IOMMU - Type 0
  ch_mask=0x200  .iommu1  IOMMU - Type 1

unc_iio_data_req_by_cpu.io_write.part1-7  [uncore io]
  event=0xc0,ch_mask=<port>,fc_mask=7,umask=0x20
  Data requested by the CPU : Core writing to Card's IO space. Number of DWs
  (4 bytes) requested by the main die. Includes all requests initiated by the
  main die, including reads and writes.

unc_iio_data_req_by_cpu.mem_read.iommu0-1,part0-7  [uncore io]
  event=0xc0,ch_mask=<port>,fc_mask=7,umask=0x04
  Data requested by the CPU : Core reporting completion of Card read from
  Core DRAM. Number of DWs (4 bytes) requested by the main die. Includes all
  requests initiated by the main die, including reads and writes.
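These ch_mask/fc_mask/umask encodings can be passed straight to perf through its raw-PMU event syntax. A minimal sketch of assembling such a string, assuming the IIO PMU instance is exposed as uncore_iio_0 (instance names and counts vary by platform, so check /sys/bus/event_source/devices/ on the target):

    # Minimal sketch: build a raw perf event string for an uncore IIO counter.
    # "uncore_iio_0" is an assumed PMU instance name, not taken from this file.
    def iio_event(event, ch_mask, umask, fc_mask=0x7, pmu="uncore_iio_0"):
        return (f"{pmu}/event={event:#x},ch_mask={ch_mask:#x},"
                f"fc_mask={fc_mask:#x},umask={umask:#x}/")

    # unc_iio_data_req_by_cpu.mem_read.part0 from the listing above:
    print(iio_event(0xc0, 0x1, 0x4))
    # -> uncore_iio_0/event=0xc0,ch_mask=0x1,fc_mask=0x7,umask=0x4/

The printed string is the form accepted by perf stat -e for a single IIO stack.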
unc_iio_data_req_by_cpu.mem_write.iommu0-1,part0-7  [uncore io]
  event=0xc0,ch_mask=<port>,fc_mask=7,umask=0x01
  Data requested by the CPU : Core writing to Card's MMIO space. Number of
  DWs (4 bytes) requested by the main die. Includes all requests initiated by
  the main die, including reads and writes.

unc_iio_data_req_by_cpu.peer_read.iommu0-1,part0-7  [uncore io]
  event=0xc0,ch_mask=<port>,fc_mask=7,umask=0x08
  Data requested by the CPU : Another card (different IIO stack) reading from
  this card. Number of DWs (4 bytes) requested by the main die. Includes all
  requests initiated by the main die, including reads and writes.

unc_iio_data_req_by_cpu.peer_write.iommu0-1,part0-7  [uncore io]
  event=0xc0,ch_mask=<port>,fc_mask=7,umask=0x02
  Data requested by the CPU : Another card (different IIO stack) writing to
  this card. Number of DWs (4 bytes) requested by the main die. Includes all
  requests initiated by the main die, including reads and writes.
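All of the unc_iio_data_req_by_cpu.* events above count DWs, i.e. 4-byte units, so a counter delta converts to bytes with a multiply by 4. A small sketch of the arithmetic; the sample delta is hypothetical:

    # Sketch: turn a DW (4-byte) counter delta into bandwidth.
    DW_BYTES = 4

    def dw_bandwidth(dw_delta, seconds):
        """Bytes/s implied by a DW-granular IIO counter delta."""
        return dw_delta * DW_BYTES / seconds

    # Hypothetical: 2.5e9 DWs of mem_write.part0 observed over 10 s.
    print(f"{dw_bandwidth(2.5e9, 10.0) / 1e9:.2f} GB/s of CPU->card MMIO writes")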
unc_iio_data_req_of_cpu.atomic.iommu0-1,part0-7  [uncore io]
  event=0x83,ch_mask=<port>,fc_mask=7,umask=0x10
  Data requested of the CPU : Atomic requests targeting DRAM. Number of DWs
  (4 bytes) the card requests of the main die. Includes all requests
  initiated by the Card, including reads and writes.

unc_iio_data_req_of_cpu.cmpd.iommu0-1  [uncore io]
  event=0x83,ch_mask=<port>,fc_mask=7,umask=0x80
  Data requested of the CPU : CmpD - device sending completion to CPU
  request. Number of DWs (4 bytes) the card requests of the main die.
  Includes all requests initiated by the Card, including reads and writes.

unc_iio_data_req_of_cpu.mem_read.iommu0-1,part0-7  [uncore io]
  event=0x83,ch_mask=<port>,fc_mask=7,umask=0x04
  Four byte data request of the CPU : Card reading from DRAM. Number of DWs
  (4 bytes) the card requests of the main die. Includes all requests
  initiated by the Card, including reads and writes.
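Because the .partN variants split the count by port, the same DW arithmetic attributes card-initiated DRAM read traffic to individual slots. A sketch with hypothetical per-part deltas of unc_iio_data_req_of_cpu.mem_read:

    # Sketch: per-port inbound DMA read bandwidth from per-part DW deltas.
    # The deltas below are hypothetical sample values.
    DW_BYTES = 4
    interval_s = 5.0
    deltas = {"part0": 8.0e9, "part2": 1.2e8, "part4": 0.0}  # DWs per interval

    for part, dws in sorted(deltas.items()):
        gbps = dws * DW_BYTES / interval_s / 1e9
        print(f"{part}: {gbps:.2f} GB/s read from DRAM by the card")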
unc_iio_data_req_of_cpu.mem_write.iommu0-1,part0-7  [uncore io]
  event=0x83,ch_mask=<port>,fc_mask=7,umask=0x01
  Four byte data request of the CPU : Card writing to DRAM. Number of DWs
  (4 bytes) the card requests of the main die. Includes all requests
  initiated by the Card, including reads and writes.

unc_iio_data_req_of_cpu.msg.iommu0-1,part0-7  [uncore io]
  event=0x83,ch_mask=<port>,fc_mask=7,umask=0x40
  Data requested of the CPU : Messages. Number of DWs (4 bytes) the card
  requests of the main die. Includes all requests initiated by the Card,
  including reads and writes.

unc_iio_data_req_of_cpu.peer_read.iommu0-1,part0-7  [uncore io]
  event=0x83,ch_mask=<port>,fc_mask=7,umask=0x08
  Data requested of the CPU : Card reading from another Card (same or
  different stack). Number of DWs (4 bytes) the card requests of the main
  die. Includes all requests initiated by the Card, including reads and
  writes.

unc_iio_data_req_of_cpu.peer_write.iommu0-1,part0-7  [uncore io]
  event=0x83,ch_mask=<port>,fc_mask=7,umask=0x02
  Data requested of the CPU : Card writing to another Card (same or
  different stack). Number of DWs (4 bytes) the card requests of the main
  die. Includes all requests initiated by the Card, including reads and
  writes.
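Comparing the mem_* and peer_* variants of the same event shows how much card-initiated traffic is peer-to-peer rather than DRAM-bound. A sketch with hypothetical deltas of the four unc_iio_data_req_of_cpu umasks above:

    # Sketch: peer-to-peer share of card-initiated DW traffic.
    # All counts are hypothetical deltas over the same interval.
    mem = {"read": 6.0e9, "write": 2.0e9}    # mem_read, mem_write
    peer = {"read": 1.0e9, "write": 5.0e8}   # peer_read, peer_write

    total = sum(mem.values()) + sum(peer.values())
    p2p_share = sum(peer.values()) / total if total else 0.0
    print(f"peer-to-peer share of inbound DWs: {p2p_share:.1%}")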
unc_iio_inbound_arb_req.*  [uncore io]
  event=0x86,ch_mask=0xff,fc_mask=7
  Incoming arbitration requests : how often different queues (e.g. channel /
  fc) ask to send a request into the pipeline.
    .data     umask=0x20  Passing data to be written (posted requests only)
    .req_own  umask=0x04  Request Ownership (posted requests only)
    .wr       umask=0x10  Writing line (posted requests only)

unc_iio_inbound_arb_won.*  [uncore io]
  event=0x87,ch_mask=0xff,fc_mask=7
  Incoming arbitration requests granted : how often different queues (e.g.
  channel / fc) are allowed to send a request into the pipeline.
    .data         umask=0x20  Passing data to be written (posted requests only)
    .final_rd_wr  umask=0x08  Issuing final read or write of line
    .iommu_hit    umask=0x02  Processing response from IOMMU
    .iommu_req    umask=0x01  Issuing to IOMMU
    .req_own      umask=0x04  Request Ownership (posted requests only)
    .wr           umask=0x10  Writing line (posted requests only)

unc_iio_iommu0.all_lookups  [uncore io]  event=0x40,umask=0x02
  IOTLB lookups, all. Some transactions have to look up the IOTLB multiple
  times; counts every time a request looks up the IOTLB.

unc_iio_iommu0.misses  [uncore io]  event=0x40,umask=0x20
  IOTLB fills (same as IOTLB miss). When a transaction misses the IOTLB it
  does a page walk to look up memory and bring in the relevant page
  translation; counts when this page translation is written to the IOTLB.
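Since unc_iio_iommu0.misses counts IOTLB fills, an IOTLB hit rate follows directly from the two unc_iio_iommu0 counters. A sketch with hypothetical counts:

    # Sketch: IOTLB hit rate. A miss is counted as a fill, so
    # hits = all_lookups - misses.
    all_lookups = 4.0e7   # unc_iio_iommu0.all_lookups (hypothetical)
    fills = 2.5e5         # unc_iio_iommu0.misses (hypothetical)

    hit_rate = 1.0 - fills / all_lookups if all_lookups else 0.0
    print(f"IOTLB hit rate: {hit_rate:.3%}")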
unc_iio_iommu1.cyc_pwt_full  [uncore io]  event=0x41,umask=0x80
  Cycles PWT full. Counts cycles in which the IOMMU has reached its maximum
  limit for outstanding page walks.

unc_iio_iommu1.num_mem_accesses  [uncore io]  event=0x41,umask=0x40
  IOMMU memory access. The IOMMU sends out memory fetches when it misses the
  cache lookup, which is indicated by this signal. M2IOSF only uses the low
  priority channel.

unc_iio_iommu1.pwc_4k_hits    [uncore io]  event=0x41,umask=0x02  PWC hit to a 4K page
unc_iio_iommu1.pwc_2m_hits    [uncore io]  event=0x41,umask=0x04  PWC hit to a 2M page
unc_iio_iommu1.pwc_1g_hits    [uncore io]  event=0x41,umask=0x08  PWC hit to a 1G page
unc_iio_iommu1.pwc_512g_hits  [uncore io]  event=0x41,umask=0x10  PWC hit to a 512G page
  Each counts the times a transaction's first lookup hits the SLPWC at that
  page level.

unc_iio_iommu3.*  [uncore io]  event=0x43
    .int_cache_hits              umask=0x80  Interrupt Entry cache hit: counts each time a transaction's first lookup hits the IEC
    .int_cache_lookups           umask=0x40  Interrupt Entry cache lookup: counts transaction lookups of the interrupt remapping cache
    .num_ctxt_cache_inval_device umask=0x20  Device-selective context cache invalidation events
    .num_ctxt_cache_inval_domain umask=0x10  Domain-selective context cache invalidation events
    .num_ctxt_cache_inval_gbl    umask=0x08  Context cache global invalidation events
    .num_inval_domain            umask=0x02  Domain-selective IOTLB invalidation events
    .num_inval_gbl               umask=0x01  Global IOTLB invalidation (IOMMU doing a global invalidation)
    .num_inval_page              umask=0x04  Page-selective (within domain) IOTLB invalidation events

unc_iio_nothing  [uncore io]  event=0x80
  Counting disabled.

unc_iio_num_oustanding_req_from_cpu.to_io  [uncore io]
  event=0xc5,ch_mask=0xff,fc_mask=7,umask=0x08
  Occupancy of outbound request queue : To device. Counts the number of
  outbound requests/completions the IIO is currently processing.

unc_iio_num_outstanding_req_of_cpu.*  [uncore io]
  event=0x88,ch_mask=0xff,fc_mask=7
    .data         umask=0x20  Passing data to be written (posted requests only)
    .final_rd_wr  umask=0x08  Issuing final read or write of line
    .iommu_hit    umask=0x02  Processing response from IOMMU
    .iommu_req    umask=0x01  Issuing to IOMMU
    .req_own      umask=0x04  Request Ownership (posted requests only)
    .wr           umask=0x10  Writing line (posted requests only)
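The four pwc_*_hits counters above split first-lookup page-walk-cache hits by page level; normalizing them gives a hit distribution. A sketch with hypothetical deltas:

    # Sketch: distribution of SLPWC first-lookup hits by page level,
    # from hypothetical unc_iio_iommu1.pwc_*_hits deltas.
    pwc = {"4K": 1.0e5, "2M": 8.0e5, "1G": 3.0e4, "512G": 1.0e3}

    total = sum(pwc.values())
    for level, hits in pwc.items():
        print(f"SLPWC {level} level: {hits / total:.1%} of first-lookup hits")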
unc_iio_num_req_from_cpu.itc  [uncore io]
  event=0xc2,ch_mask=0xff,fc_mask=7,umask=0x02
  Number of requests sent to PCIe from the main die : From ITC (Confined
  P2P).

unc_iio_num_req_from_cpu.prealloc  [uncore io]
  event=0xc2,ch_mask=0xff,fc_mask=7,umask=0x04
  Number of requests sent to PCIe from the main die : Completion allocations.

unc_iio_num_req_of_cpu.all.drop  [uncore io]
  event=0x85,ch_mask=0xff,fc_mask=7,umask=0x02
  Number of requests PCIe makes of the main die : Drop request. Counts full
  PCIe requests before they are broken into a series of cacheline-sized
  requests, as measured by DATA_REQ_OF_CPU and TXN_REQ_OF_CPU. A packet error
  was detected and the request must be dropped.

unc_iio_num_req_of_cpu.commit.all  [uncore io]
  event=0x85,ch_mask=0xff,fc_mask=7,umask=0x01
  Number of requests PCIe makes of the main die : All. Counts full PCIe
  requests before they are broken into a series of cacheline-sized requests,
  as measured by DATA_REQ_OF_CPU and TXN_REQ_OF_CPU.

unc_iio_num_tgt_matched_req_of_cpu  [uncore io]  event=0x8f
  ITC address map 1.

unc_iio_pwt_occupancy  [uncore io]  event=0x42
  PWT occupancy. Indicates how many page walks are outstanding at any point
  in time.

unc_iio_req_from_pcie_cl_cmpl.*  [uncore io]
  event=0x91,ch_mask=0xff,fc_mask=7
  PCIe Request - cacheline complete. Each PCIe request is broken down into a
  series of cacheline-granular requests, and each cacheline-sized request may
  need to make multiple passes through the pipeline (e.g. for posted
  interrupts or multi-cast). This event advances each time a cacheline
  completes all of its passes (e.g. finishes posting writes to all multi-cast
  targets).
    .data         umask=0x20  Passing data to be written (posted requests only)
    .final_rd_wr  umask=0x08  Issuing final read or write of line
    .req_own      umask=0x04  Request Ownership (posted requests only)
    .wr           umask=0x10  Writing line (posted requests only)

unc_iio_req_from_pcie_cmpl.*  [uncore io]
  event=0x92,ch_mask=0xff,fc_mask=7
  PCIe Request complete. Same cacheline breakdown as above; this event
  advances each time a single PCIe request completes all of its
  cacheline-granular requests.
    .data         umask=0x20  Passing data to be written (posted requests only)
    .final_rd_wr  umask=0x08  Issuing final read or write of line
    .iommu_hit    umask=0x02  Processing response from IOMMU
    .iommu_req    umask=0x01  Issuing to IOMMU
    .req_own      umask=0x04  Request Ownership (posted requests only)
    .wr           umask=0x10  Writing line (posted requests only)
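Dividing the cacheline-complete count by the request-complete count for the same umask gives the average number of cachelines per full PCIe request. A sketch with hypothetical deltas of the two events above:

    # Sketch: average cacheline-granular requests per full PCIe request.
    cl_cmpl = 9.0e8    # unc_iio_req_from_pcie_cl_cmpl.wr (hypothetical)
    req_cmpl = 2.4e8   # unc_iio_req_from_pcie_cmpl.wr (hypothetical)

    if req_cmpl:
        print(f"avg cachelines per PCIe write request: {cl_cmpl / req_cmpl:.2f}")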
unc_iio_req_from_pcie_pass_cmpl.*  [uncore io]
  event=0x90,ch_mask=0xff,fc_mask=7
  PCIe Request - pass complete. Same cacheline breakdown as above; this event
  advances each time a cacheline completes a single pass (e.g. posts a write
  to a single multi-cast target). All variants are for posted requests only.
    .data     umask=0x20  Passing data to be written
    .req_own  umask=0x04  Request Ownership
    .wr       umask=0x10  Writing line

unc_iio_symbol_times  [uncore io]  event=0x82
  Symbol Times on Link. Gen1 increments once every 4 ns, Gen2 once every
  2 ns, Gen3 once every 1 ns.
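Per the description, one unc_iio_symbol_times increment corresponds to 4 ns at Gen1, 2 ns at Gen2 and 1 ns at Gen3, so a counter delta converts directly to elapsed link symbol time. A sketch:

    # Sketch: convert a unc_iio_symbol_times delta to link symbol time.
    NS_PER_INCREMENT = {"gen1": 4.0, "gen2": 2.0, "gen3": 1.0}

    count = 3.0e9   # hypothetical counter delta
    gen = "gen3"
    seconds = count * NS_PER_INCREMENT[gen] * 1e-9
    print(f"{seconds:.3f} s of link symbol time at {gen}")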
: x8 card plugged in to Lane 2/3, Or x4 card is plugged in to slot 2unc_iio_txn_req_by_cpu.cfg_read.part3uncore ioNumber Transactions requested by the CPU : Core reading from Card's PCICFG spaceevent=0xc1,ch_mask=8,fc_mask=7,umask=0x4001Number Transactions requested by the CPU : Core reading from Card's PCICFG space : Also known as Outbound.  Number of requests initiated by the main die, including reads and writes. : x4 card is plugged in to slot 3unc_iio_txn_req_by_cpu.cfg_read.part4uncore ioNumber Transactions requested by the CPU : Core reading from Card's PCICFG spaceevent=0xc1,ch_mask=0x10,fc_mask=7,umask=0x4001Number Transactions requested by the CPU : Core reading from Card's PCICFG space : Also known as Outbound.  Number of requests initiated by the main die, including reads and writes. : x16 card plugged in to Lane 4/5/6/7, Or x8 card plugged in to Lane 4/5, Or x4 card is plugged in to slot 4unc_iio_txn_req_by_cpu.cfg_read.part5uncore ioNumber Transactions requested by the CPU : Core reading from Card's PCICFG spaceevent=0xc1,ch_mask=0x20,fc_mask=7,umask=0x4001Number Transactions requested by the CPU : Core reading from Card's PCICFG space : Also known as Outbound.  Number of requests initiated by the main die, including reads and writes. : x4 card is plugged in to slot 5unc_iio_txn_req_by_cpu.cfg_read.part6uncore ioNumber Transactions requested by the CPU : Core reading from Card's PCICFG spaceevent=0xc1,ch_mask=0x40,fc_mask=7,umask=0x4001Number Transactions requested by the CPU : Core reading from Card's PCICFG space : Also known as Outbound.  Number of requests initiated by the main die, including reads and writes. : x8 card plugged in to Lane 6/7, Or x4 card is plugged in to slot 6unc_iio_txn_req_by_cpu.cfg_read.part7uncore ioNumber Transactions requested by the CPU : Core reading from Card's PCICFG spaceevent=0xc1,ch_mask=0x80,fc_mask=7,umask=0x4001Number Transactions requested by the CPU : Core reading from Card's PCICFG space : Also known as Outbound.  Number of requests initiated by the main die, including reads and writes. : x4 card is plugged in to slot 7unc_iio_txn_req_by_cpu.cfg_write.iommu0uncore ioNumber Transactions requested by the CPU : Core writing to Card's PCICFG spaceevent=0xc1,ch_mask=0x100,fc_mask=7,umask=0x1001Number Transactions requested by the CPU : Core writing to Card's PCICFG space : Also known as Outbound.  Number of requests initiated by the main die, including reads and writes. : IOMMU - Type 0unc_iio_txn_req_by_cpu.cfg_write.iommu1uncore ioNumber Transactions requested by the CPU : Core writing to Card's PCICFG spaceevent=0xc1,ch_mask=0x200,fc_mask=7,umask=0x1001Number Transactions requested by the CPU : Core writing to Card's PCICFG space : Also known as Outbound.  Number of requests initiated by the main die, including reads and writes. : IOMMU - Type 1unc_iio_txn_req_by_cpu.cfg_write.part0uncore ioNumber Transactions requested by the CPU : Core writing to Card's PCICFG spaceevent=0xc1,ch_mask=1,fc_mask=7,umask=0x1001Number Transactions requested by the CPU : Core writing to Card's PCICFG space : Also known as Outbound.  Number of requests initiated by the main die, including reads and writes. 
: x16 card plugged in to Lane 0/1/2/3, Or x8 card plugged in to Lane 0/1, Or x4 card is plugged in to slot 0unc_iio_txn_req_by_cpu.cfg_write.part1uncore ioNumber Transactions requested by the CPU : Core writing to Card's PCICFG spaceevent=0xc1,ch_mask=2,fc_mask=7,umask=0x1001Number Transactions requested by the CPU : Core writing to Card's PCICFG space : Also known as Outbound.  Number of requests initiated by the main die, including reads and writes. : x4 card is plugged in to slot 1unc_iio_txn_req_by_cpu.cfg_write.part2uncore ioNumber Transactions requested by the CPU : Core writing to Card's PCICFG spaceevent=0xc1,ch_mask=4,fc_mask=7,umask=0x1001Number Transactions requested by the CPU : Core writing to Card's PCICFG space : Also known as Outbound.  Number of requests initiated by the main die, including reads and writes. : x8 card plugged in to Lane 2/3, Or x4 card is plugged in to slot 2unc_iio_txn_req_by_cpu.cfg_write.part3uncore ioNumber Transactions requested by the CPU : Core writing to Card's PCICFG spaceevent=0xc1,ch_mask=8,fc_mask=7,umask=0x1001Number Transactions requested by the CPU : Core writing to Card's PCICFG space : Also known as Outbound.  Number of requests initiated by the main die, including reads and writes. : x4 card is plugged in to slot 3unc_iio_txn_req_by_cpu.cfg_write.part4uncore ioNumber Transactions requested by the CPU : Core writing to Card's PCICFG spaceevent=0xc1,ch_mask=0x10,fc_mask=7,umask=0x1001Number Transactions requested by the CPU : Core writing to Card's PCICFG space : Also known as Outbound.  Number of requests initiated by the main die, including reads and writes. : x16 card plugged in to Lane 4/5/6/7, Or x8 card plugged in to Lane 4/5, Or x4 card is plugged in to slot 4unc_iio_txn_req_by_cpu.cfg_write.part5uncore ioNumber Transactions requested by the CPU : Core writing to Card's PCICFG spaceevent=0xc1,ch_mask=0x20,fc_mask=7,umask=0x1001Number Transactions requested by the CPU : Core writing to Card's PCICFG space : Also known as Outbound.  Number of requests initiated by the main die, including reads and writes. : x4 card is plugged in to slot 5unc_iio_txn_req_by_cpu.cfg_write.part6uncore ioNumber Transactions requested by the CPU : Core writing to Card's PCICFG spaceevent=0xc1,ch_mask=0x40,fc_mask=7,umask=0x1001Number Transactions requested by the CPU : Core writing to Card's PCICFG space : Also known as Outbound.  Number of requests initiated by the main die, including reads and writes. : x8 card plugged in to Lane 6/7, Or x4 card is plugged in to slot 6unc_iio_txn_req_by_cpu.cfg_write.part7uncore ioNumber Transactions requested by the CPU : Core writing to Card's PCICFG spaceevent=0xc1,ch_mask=0x80,fc_mask=7,umask=0x1001Number Transactions requested by the CPU : Core writing to Card's PCICFG space : Also known as Outbound.  Number of requests initiated by the main die, including reads and writes. : x4 card is plugged in to slot 7unc_iio_txn_req_by_cpu.io_read.iommu0uncore ioNumber Transactions requested by the CPU : Core reading from Card's IO spaceevent=0xc1,ch_mask=0x100,fc_mask=7,umask=0x8001Number Transactions requested by the CPU : Core reading from Card's IO space : Also known as Outbound.  Number of requests initiated by the main die, including reads and writes. 
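As a worked example of the unc_iio_symbol_times rates listed above (Gen1 ticks every 4 ns, Gen2 every 2 ns, Gen3 every 1 ns), a small sketch in Python converts a raw count into link time. The helper name and the sample count are illustrative only, not part of the event list:

    # Hypothetical helper: convert a UNC_IIO_SYMBOL_TIMES count into
    # link time, using the documented per-generation tick rates.
    NS_PER_TICK = {1: 4.0, 2: 2.0, 3: 1.0}  # PCIe generation -> ns per count

    def symbol_times_to_seconds(count: int, gen: int) -> float:
        """Return seconds of link time represented by `count` ticks."""
        return count * NS_PER_TICK[gen] * 1e-9

    # Example: 2.5e9 counts on a Gen3 link ~= 2.5 s of symbol time.
    print(symbol_times_to_seconds(2_500_000_000, 3))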
unc_iio_txn_req_by_cpu.io_read.* (uncore io)
  Number Transactions requested by the CPU : Core reading from Card's IO space
  Also known as Outbound: requests initiated by the main die, including reads and writes.
  event=0xc1,fc_mask=7,umask=0x80; ch_mask variants (iommu0/iommu1, part0-part7) as in cfg_read above.

unc_iio_txn_req_by_cpu.io_write.* (uncore io)
  Number Transactions requested by the CPU : Core writing to Card's IO space
  Also known as Outbound: requests initiated by the main die, including reads and writes.
  event=0xc1,fc_mask=7,umask=0x20; ch_mask variants as in cfg_read above.
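The ch_mask bit assignments repeat unchanged across every group in this family, so they reduce to one lookup. A minimal sketch (the dict and function names are illustrative; the cfg_read table above is the source of truth):

    # ch_mask bit per variant suffix, as listed in the cfg_read table.
    CH_MASK = {
        "part0": 0x01, "part1": 0x02, "part2": 0x04, "part3": 0x08,
        "part4": 0x10, "part5": 0x20, "part6": 0x40, "part7": 0x80,
        "iommu0": 0x100, "iommu1": 0x200,
    }

    def ch_mask_for(*variants: str) -> int:
        """OR together the ch_mask bits for the named variants."""
        mask = 0
        for v in variants:
            mask |= CH_MASK[v]
        return mask

    assert ch_mask_for("part0", "part1") == 0x03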
unc_iio_txn_req_by_cpu.mem_read.* (uncore io)
  Number Transactions requested by the CPU : Core reading from Card's MMIO space
  Also known as Outbound: requests initiated by the main die, including reads and writes.
  event=0xc1,fc_mask=7,umask=0x04; ch_mask variants as in cfg_read above.

unc_iio_txn_req_by_cpu.mem_write.* (uncore io)
  Number Transactions requested by the CPU : Core writing to Card's MMIO space
  Also known as Outbound: requests initiated by the main die, including reads and writes.
  event=0xc1,fc_mask=7,umask=0x01; ch_mask variants as in cfg_read above.
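Assuming a kernel that exposes these counters through an uncore_iio PMU with event/ch_mask/fc_mask/umask format fields (an assumption; this listing only gives the raw encodings, and the device number 0 below is hypothetical), the fields compose into a perf event string like so:

    def perf_event_string(iio: int, event: int, ch_mask: int,
                          fc_mask: int, umask: int) -> str:
        """Format an uncore IIO event for use with `perf stat -e` (sketch)."""
        return (f"uncore_iio_{iio}/event={event:#x},ch_mask={ch_mask:#x},"
                f"fc_mask={fc_mask:#x},umask={umask:#x}/")

    # Core writing to a card's MMIO space behind stack 0, slot 0:
    print(perf_event_string(0, 0xC1, 0x01, 0x07, 0x01))
    # -> uncore_iio_0/event=0xc1,ch_mask=0x1,fc_mask=0x7,umask=0x1/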
unc_iio_txn_req_by_cpu.peer_read.* (uncore io)
  Number Transactions requested by the CPU : Another card (different IIO stack) reading from this card
  Also known as Outbound: requests initiated by the main die, including reads and writes.
  event=0xc1,fc_mask=7,umask=0x08; ch_mask variants as in cfg_read above.

unc_iio_txn_req_by_cpu.peer_write.* (uncore io)
  Number Transactions requested by the CPU : Another card (different IIO stack) writing to this card
  Also known as Outbound: requests initiated by the main die, including reads and writes.
  event=0xc1,fc_mask=7,umask=0x02; ch_mask variants (iommu0 ch_mask=0x100 IOMMU - Type 0, iommu1 ch_mask=0x200 IOMMU - Type 1, part0-part7) as in cfg_read above.
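Collecting the umask values listed across the groups above gives a single decode table for event 0xc1 (outbound, CPU to card). The dict and function names are illustrative:

    TXN_REQ_BY_CPU_UMASK = {  # event=0xc1, outbound (CPU -> card)
        "mem_write": 0x01, "peer_write": 0x02, "mem_read": 0x04,
        "peer_read": 0x08, "cfg_write": 0x10, "io_write": 0x20,
        "cfg_read": 0x40, "io_read": 0x80,
    }

    def outbound_kind(umask: int) -> str:
        """Reverse-map a umask bit to its access type (sketch)."""
        for name, bit in TXN_REQ_BY_CPU_UMASK.items():
            if bit == umask:
                return name
        raise ValueError(f"unknown umask {umask:#x}")

    assert outbound_kind(0x40) == "cfg_read"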
unc_iio_txn_req_of_cpu.atomic.* (uncore io)
  Number Transactions requested of the CPU : Atomic requests targeting DRAM
  Also known as Inbound: 64B cache line requests initiated by the Card, including reads and writes.
  event=0x84,fc_mask=7,umask=0x10; ch_mask variants (iommu0/iommu1, part0-part7) as in cfg_read above.

unc_iio_txn_req_of_cpu.cmpd.* (uncore io)
  Number Transactions requested of the CPU : CmpD - device sending completion to CPU request
  Also known as Inbound: 64B cache line requests initiated by the Card, including reads and writes.
  event=0x84,fc_mask=7,umask=0x80; ch_mask variants: iommu0 (ch_mask=0x100, IOMMU - Type 0) and iommu1 (ch_mask=0x200, IOMMU - Type 1) only.
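Because every inbound count in this family represents one 64B cache-line request, a count delta converts directly into approximate DMA bandwidth. A minimal sketch (helper name and sample numbers are illustrative):

    CACHE_LINE_BYTES = 64  # each inbound count is one 64B cache line request

    def inbound_bandwidth_mib_s(count_delta: int, interval_s: float) -> float:
        """Approximate DMA bandwidth from a TXN_REQ_OF_CPU count delta
        taken over `interval_s` seconds (sketch; ignores partial lines)."""
        return count_delta * CACHE_LINE_BYTES / interval_s / (1 << 20)

    # 4,000,000 card-to-DRAM writes in 1 s ~= 244 MiB/s
    print(inbound_bandwidth_mib_s(4_000_000, 1.0))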
unc_iio_txn_req_of_cpu.mem_read.* (uncore io)
  Number Transactions requested of the CPU : Card reading from DRAM
  Also known as Inbound: 64B cache line requests initiated by the Card, including reads and writes.
  event=0x84,fc_mask=7,umask=0x04; ch_mask variants as in cfg_read above.

unc_iio_txn_req_of_cpu.mem_write.* (uncore io)
  Number Transactions requested of the CPU : Card writing to DRAM
  Also known as Inbound: 64B cache line requests initiated by the Card, including reads and writes.
  event=0x84,fc_mask=7,umask=0x01; ch_mask variants as in cfg_read above.
unc_iio_txn_req_of_cpu.msg.* (uncore io)
  Number Transactions requested of the CPU : Messages
  Also known as Inbound: 64B cache line requests initiated by the Card, including reads and writes.
  event=0x84,fc_mask=7,umask=0x40; ch_mask variants as in cfg_read above.

unc_iio_txn_req_of_cpu.peer_read.* (uncore io)
  Number Transactions requested of the CPU : Card reading from another Card (same or different stack)
  Also known as Inbound: 64B cache line requests initiated by the Card, including reads and writes.
  event=0x84,fc_mask=7,umask=0x08; ch_mask variants as in cfg_read above.

unc_iio_txn_req_of_cpu.peer_write.* (uncore io)
  Number Transactions requested of the CPU : Card writing to another Card (same or different stack)
  Also known as Inbound: 64B cache line requests initiated by the Card, including reads and writes.
  event=0x84,fc_mask=7,umask=0x02; ch_mask variants: iommu0 (ch_mask=0x100, IOMMU - Type 0) and iommu1 (ch_mask=0x200, IOMMU - Type 1).
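Assuming a Linux perf binary and an uncore_iio_0 PMU that accepts these format fields (both assumptions, not guaranteed by this listing), the inbound DRAM read/write events above could be sampled system-wide from Python:

    import subprocess

    # Hypothetical one-shot measurement: count card-to-DRAM reads and
    # writes (event=0x84, umask=0x04/0x01) on IIO stack 0, all channels
    # (ch_mask=0xff), for one second, system-wide.
    EVENTS = [
        "uncore_iio_0/event=0x84,ch_mask=0xff,fc_mask=0x7,umask=0x4/",  # mem_read
        "uncore_iio_0/event=0x84,ch_mask=0xff,fc_mask=0x7,umask=0x1/",  # mem_write
    ]

    result = subprocess.run(
        ["perf", "stat", "-a", "-x", ",", "-e", ",".join(EVENTS),
         "--", "sleep", "1"],
        capture_output=True, text=True,
    )
    print(result.stderr)  # perf stat emits its CSV counts on stderr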
unc_m2p_ag0_ad_crd_acquired0.tgr0-tgr7 / unc_m2p_ag0_ad_crd_acquired1.tgr8-tgr10 (uncore io)
  CMS Agent0 AD Credits Acquired : For Transgress N
  Number of CMS Agent 0 AD credits acquired in a given cycle, per transgress.
  acquired0: event=0x80, umask=1<<N for transgress 0-7
  acquired1: event=0x81, umask=1<<(N-8) for transgress 8-10

unc_m2p_ag0_ad_crd_occupancy0.tgr0-tgr7 / unc_m2p_ag0_ad_crd_occupancy1.tgr8-tgr10 (uncore io)
  CMS Agent0 AD Credits Occupancy : For Transgress N
  Number of CMS Agent 0 AD credits in use in a given cycle, per transgress.
  occupancy0: event=0x82, umask=1<<N for transgress 0-7
  occupancy1: event=0x83, umask=1<<(N-8) for transgress 8-10
unc_m2p_ag0_bl_crd_acquired0.tgr0-tgr7 / unc_m2p_ag0_bl_crd_acquired1.tgr8-tgr10 (uncore io)
  CMS Agent0 BL Credits Acquired : For Transgress N
  Number of CMS Agent 0 BL credits acquired in a given cycle, per transgress.
  acquired0: event=0x88, umask=1<<N for transgress 0-7
  acquired1: event=0x89, umask=1<<(N-8) for transgress 8-10

unc_m2p_ag0_bl_crd_occupancy0.tgr0-tgr7 / unc_m2p_ag0_bl_crd_occupancy1.tgr8-tgr10 (uncore io)
  CMS Agent0 BL Credits Occupancy : For Transgress N
  Number of CMS Agent 0 BL credits in use in a given cycle, per transgress.
  occupancy0: event=0x8a, umask=1<<N for transgress 0-7
  occupancy1: event=0x8b, umask=1<<(N-8) for transgress 8-10

unc_m2p_ag1_ad_crd_acquired0.tgr0-tgr7 / unc_m2p_ag1_ad_crd_acquired1.tgr8-tgr10 (uncore io)
  CMS Agent1 AD Credits Acquired : For Transgress N
  Number of CMS Agent 1 AD credits acquired in a given cycle, per transgress.
  acquired0: event=0x84, umask=1<<N for transgress 0-7
  acquired1: event=0x85, umask=1<<(N-8) for transgress 8-10

unc_m2p_ag1_ad_crd_occupancy0.tgr0-tgr7 / unc_m2p_ag1_ad_crd_occupancy1.tgr8-tgr10 (uncore io)
  CMS Agent1 AD Credits Occupancy : For Transgress N
  Number of CMS Agent 1 AD credits in use in a given cycle, per transgress.
  occupancy0: event=0x86, umask=1<<N for transgress 0-7
  occupancy1: event=0x87, umask=1<<(N-8) for transgress 8-10

unc_m2p_ag1_bl_crd_acquired0.tgr0-tgr7 (uncore io)
  CMS Agent1 BL Credits Acquired : For Transgress N
  Number of CMS Agent 1 BL credits acquired in a given cycle, per transgress.
  acquired0: event=0x8c, umask=1<<N for transgress 0-7
credits acquired in a given cycle, per transgressunc_m2p_ag1_bl_crd_acquired1.tgr10uncore ioCMS Agent1 BL Credits Acquired : For Transgress 10event=0x8d,umask=401CMS Agent1 BL Credits Acquired : For Transgress 10 : Number of CMS Agent 1 BL credits acquired in a given cycle, per transgressunc_m2p_ag1_bl_crd_acquired1.tgr8uncore ioCMS Agent1 BL Credits Acquired : For Transgress 8event=0x8d,umask=101CMS Agent1 BL Credits Acquired : For Transgress 8 : Number of CMS Agent 1 BL credits acquired in a given cycle, per transgressunc_m2p_ag1_bl_crd_acquired1.tgr9uncore ioCMS Agent1 BL Credits Acquired : For Transgress 9event=0x8d,umask=201CMS Agent1 BL Credits Acquired : For Transgress 9 : Number of CMS Agent 1 BL credits acquired in a given cycle, per transgressunc_m2p_ag1_bl_crd_occupancy0.tgr0uncore ioCMS Agent1 BL Credits Occupancy : For Transgress 0event=0x8e,umask=101CMS Agent1 BL Credits Occupancy : For Transgress 0 : Number of CMS Agent 1 BL credits in use in a given cycle, per transgressunc_m2p_ag1_bl_crd_occupancy0.tgr1uncore ioCMS Agent1 BL Credits Occupancy : For Transgress 1event=0x8e,umask=201CMS Agent1 BL Credits Occupancy : For Transgress 1 : Number of CMS Agent 1 BL credits in use in a given cycle, per transgressunc_m2p_ag1_bl_crd_occupancy0.tgr2uncore ioCMS Agent1 BL Credits Occupancy : For Transgress 2event=0x8e,umask=401CMS Agent1 BL Credits Occupancy : For Transgress 2 : Number of CMS Agent 1 BL credits in use in a given cycle, per transgressunc_m2p_ag1_bl_crd_occupancy0.tgr3uncore ioCMS Agent1 BL Credits Occupancy : For Transgress 3event=0x8e,umask=801CMS Agent1 BL Credits Occupancy : For Transgress 3 : Number of CMS Agent 1 BL credits in use in a given cycle, per transgressunc_m2p_ag1_bl_crd_occupancy0.tgr4uncore ioCMS Agent1 BL Credits Occupancy : For Transgress 4event=0x8e,umask=0x1001CMS Agent1 BL Credits Occupancy : For Transgress 4 : Number of CMS Agent 1 BL credits in use in a given cycle, per transgressunc_m2p_ag1_bl_crd_occupancy0.tgr5uncore ioCMS Agent1 BL Credits Occupancy : For Transgress 5event=0x8e,umask=0x2001CMS Agent1 BL Credits Occupancy : For Transgress 5 : Number of CMS Agent 1 BL credits in use in a given cycle, per transgressunc_m2p_ag1_bl_crd_occupancy0.tgr6uncore ioCMS Agent1 BL Credits Occupancy : For Transgress 6event=0x8e,umask=0x4001CMS Agent1 BL Credits Occupancy : For Transgress 6 : Number of CMS Agent 1 BL credits in use in a given cycle, per transgressunc_m2p_ag1_bl_crd_occupancy0.tgr7uncore ioCMS Agent1 BL Credits Occupancy : For Transgress 7event=0x8e,umask=0x8001CMS Agent1 BL Credits Occupancy : For Transgress 7 : Number of CMS Agent 1 BL credits in use in a given cycle, per transgressunc_m2p_ag1_bl_crd_occupancy1.tgr10uncore ioCMS Agent1 BL Credits Occupancy : For Transgress 10event=0x8f,umask=401CMS Agent1 BL Credits Occupancy : For Transgress 10 : Number of CMS Agent 1 BL credits in use in a given cycle, per transgressunc_m2p_ag1_bl_crd_occupancy1.tgr8uncore ioCMS Agent1 BL Credits Occupancy : For Transgress 8event=0x8f,umask=101CMS Agent1 BL Credits Occupancy : For Transgress 8 : Number of CMS Agent 1 BL credits in use in a given cycle, per transgressunc_m2p_ag1_bl_crd_occupancy1.tgr9uncore ioCMS Agent1 BL Credits Occupancy : For Transgress 9event=0x8f,umask=201CMS Agent1 BL Credits Occupancy : For Transgress 9 : Number of CMS Agent 1 BL credits in use in a given cycle, per transgressunc_m2p_clockticksuncore ioClockticks of the mesh to PCI (M2P)event=101Clockticks of the mesh to PCI (M2P) : Counts the number of uclks in the M3 uclk domain. 
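The event=/umask= pairs above are raw control-register fields; on Linux they combine into the config word passed to perf_event_open(2). Below is a minimal sketch (not this module's own API) of driving one of these counters directly, assuming the usual Intel uncore sysfs format (event in config bits 0-7, umask in bits 8-15) and a PMU exposed as uncore_m2pcie_0; the PMU path and helper names are illustrative assumptions, and the call typically needs root or a permissive perf_event_paranoid.

import ctypes, os, struct, time

PMU = "/sys/bus/event_source/devices/uncore_m2pcie_0"  # assumed PMU name

class PerfEventAttr(ctypes.Structure):
    # First 64 bytes of struct perf_event_attr (PERF_ATTR_SIZE_VER0);
    # the kernel accepts this truncated layout as long as .size matches.
    _fields_ = [
        ("type",          ctypes.c_uint32),
        ("size",          ctypes.c_uint32),
        ("config",        ctypes.c_uint64),
        ("sample_period", ctypes.c_uint64),
        ("sample_type",   ctypes.c_uint64),
        ("read_format",   ctypes.c_uint64),
        ("flags",         ctypes.c_uint64),  # bitfield block; 0 = counting, enabled
        ("wakeup_events", ctypes.c_uint32),
        ("bp_type",       ctypes.c_uint32),
        ("config1",       ctypes.c_uint64),
    ]

def open_uncore_counter(event, umask, cpu=0):
    attr = PerfEventAttr()
    with open(PMU + "/type") as f:           # dynamic PMU type id from sysfs
        attr.type = int(f.read())
    attr.size = ctypes.sizeof(PerfEventAttr)
    attr.config = event | (umask << 8)       # e.g. 0x89 | (0x04 << 8)
    libc = ctypes.CDLL(None, use_errno=True)
    # perf_event_open(attr, pid, cpu, group_fd, flags); syscall 298 on x86_64.
    # Uncore PMUs are socket-wide, so open system-wide (pid=-1) on one CPU.
    fd = libc.syscall(298, ctypes.byref(attr), -1, cpu, -1, 0)
    if fd < 0:
        err = ctypes.get_errno()
        raise OSError(err, os.strerror(err))
    return fd

fd = open_uncore_counter(0x89, 0x04)         # AG0 BL credits acquired, TGR10
time.sleep(1)
print(struct.unpack("q", os.read(fd, 8))[0], "credits acquired in ~1s")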
unc_m2p_clockticks (uncore io, event=0x1) : Clockticks of the mesh to PCI (M2P) : counts the number of uclks in the M3 uclk domain. This count can differ slightly from the count in the Ubox because of enable/freeze delays, but since the M3 is close to the Ubox the two generally should not diverge by more than a handful of cycles.

unc_m2p_distress_asserted (uncore io, event=0xaf) : Distress signal asserted : counts the number of cycles either the local or incoming distress signals are asserted.
  .vert (umask=0x01) : Vertical : if the IRQ egress is full, agents throttle outgoing AD IDI transactions.
  .horz (umask=0x02) : Horizontal : if the TGR egress is full, agents throttle outgoing AD IDI transactions.
  .dpt_local (umask=0x04) : DPT Local : Dynamic Prefetch Throttle triggered by this tile.
  .dpt_nonlocal (umask=0x08) : DPT Remote : Dynamic Prefetch Throttle received by this tile.
  .pmm_local (umask=0x10) : PMM Local : if the CHA TOR has too many PMM transactions, this signal throttles outgoing MS2IDI traffic.
  .pmm_nonlocal (umask=0x20) : PMM Remote : if another CHA TOR has too many PMM transactions, this signal throttles outgoing MS2IDI traffic.
  .dpt_stall_iv (umask=0x40) : DPT Stalled - IV : DPT occurred while regular IVs were received, causing DPT to be stalled.
  .dpt_stall_nocrd (umask=0x80) : DPT Stalled - No Credit : DPT occurred while no credit was available, causing DPT to be stalled.

The horizontal ring-in-use events below share one description: they count the number of cycles the named horizontal ring is in use at this ring stop, including cycles when packets pass by or are sunk, but not cycles when packets are sent from the ring stop. There are really two rings, a clockwise ring and a counter-clockwise ring: on the left side of the ring the UP direction is on the clockwise ring and DN on the counter-clockwise ring, and on the right side this is reversed. The first half of the CBos sit on the left side of the ring and the second half on the right, so (for example) in a 4c part, CBo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring. umask selects the slot: .left_even=0x01, .left_odd=0x02, .right_even=0x04, .right_odd=0x08.

unc_m2p_horz_ring_ad_in_use (uncore io, event=0xb6) : Horizontal AD Ring In Use.
unc_m2p_horz_ring_akc_in_use (uncore io, event=0xbb) : Horizontal AKC Ring In Use.
unc_m2p_horz_ring_ak_in_use (uncore io, event=0xb7) : Horizontal AK Ring In Use.
unc_m2p_horz_ring_bl_in_use (uncore io, event=0xb8) : Horizontal BL Ring in Use.

unc_m2p_horz_ring_iv_in_use (uncore io, event=0xb9) : Horizontal IV Ring in Use : counts the number of cycles the horizontal IV ring is in use at this ring stop, with the same pass-by/sink rules as above. There is only one IV ring, so to monitor the Even ring select both UP_EVEN and DN_EVEN, and to monitor the Odd ring select both UP_ODD and DN_ODD. umask: .left=0x01, .right=0x04.
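Since the ring-in-use events count busy cycles at this ring stop, dividing a counter delta by the unc_m2p_clockticks delta over the same interval gives a utilization fraction per direction. A small reduction sketch with made-up counter deltas; the OR-combined umasks assume single-bit umasks can be combined, which is the common convention for these uncore events but is an assumption here.

def ring_utilization(busy_cycles, clockticks):
    """Fraction of M2P uclk cycles the ring slot was occupied."""
    return busy_cycles / clockticks

samples = {                       # invented deltas over one sampling interval
    "horz AD left":  41_000_000,  # event=0xb6, umask=0x01|0x02 (even+odd)
    "horz AD right": 12_500_000,  # event=0xb6, umask=0x04|0x08
}
clk = 2_400_000_000               # unc_m2p_clockticks delta (event=0x1)
for name, busy in samples.items():
    print(f"{name}: {100 * ring_utilization(busy, clk):.2f}% utilized")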
unc_m2p_local_p2p_ded_returned_0 (uncore io, event=0x19) : Local P2P Dedicated Credits Returned - 0 : .ms2iosf3_ncb (umask=0x10) for M2IOSF3 - NCB, .ms2iosf3_ncs (umask=0x20) for M2IOSF3 - NCS.

unc_m2p_misc_external (uncore io, event=0xe6) : Miscellaneous Events (mostly from MS2IDI) : .mbe_inst0 (umask=0x01) counts cycles MBE is high for MS2IDI0, .mbe_inst1 (umask=0x02) counts cycles MBE is high for MS2IDI1.

unc_m2p_ring_bounces_horz (uncore io, event=0xac) : Messages that bounced on the Horizontal Ring : number of cycles incoming messages from the horizontal ring were bounced, by ring type: .ad=0x01, .ak=0x02, .bl=0x04, .iv=0x08.

unc_m2p_ring_bounces_vert (uncore io, event=0xaa) : Messages that bounced on the Vertical Ring : number of cycles incoming messages from the vertical ring were bounced, by ring type: .ad=0x01 (AD), .ak=0x02 (acknowledgements to core), .bl=0x04 (data responses to core), .iv=0x08 (snoops of the processor's cache), .akc=0x10 (AKC).

unc_m2p_ring_sink_starved_horz (uncore io, event=0xad) : Sink Starvation on Horizontal Ring : .ad=0x01, .ak=0x02, .bl=0x04, .iv=0x08, .ak_ag1=0x20 (acknowledgements to Agent 1).

unc_m2p_ring_sink_starved_vert (uncore io, event=0xab) : Sink Starvation on Vertical Ring : .ad=0x01, .ak=0x02 (acknowledgements to core), .bl=0x04 (data responses to core), .iv=0x08 (snoops of the processor's cache), .akc=0x10 (AKC).

unc_m2p_ring_src_thrtl (uncore io, event=0xae) : Source Throttle.

unc_m2p_rxr_busy_starved (uncore io, event=0xe5) : Transgress Injection Starvation : counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time, in this case because a message from the other queue has higher priority. umask: .ad_uncrd=0x01, .bl_uncrd=0x04, .ad_crd=0x10, .bl_crd=0x40, .ad_all=0x11, .bl_all=0x44 (All == Credited + Uncredited).

unc_m2p_rxr_bypass (uncore io, event=0xe2) : Transgress Ingress Bypass : number of packets bypassing the CMS Ingress. umask: .ad_uncrd=0x01, .ak=0x02, .bl_uncrd=0x04, .iv=0x08, .ad_crd=0x10, .bl_crd=0x40, .akc_uncrd=0x80, .ad_all=0x11, .bl_all=0x44 (All == Credited + Uncredited).

unc_m2p_rxr_crd_starved (uncore io, event=0xe3) : Transgress Injection Starvation : counts cycles under injection starvation mode, triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time; in this case the Ingress is unable to forward to the Egress due to a lack of credit. umask: .ad_uncrd=0x01, .ak=0x02, .bl_uncrd=0x04, .iv=0x08, .ad_crd=0x10, .bl_crd=0x40, .ifv=0x80 (IFV - Credited), .ad_all=0x11, .bl_all=0x44 (All == Credited + Uncredited).

unc_m2p_rxr_crd_starved_1 (uncore io, event=0xe4) : Transgress Injection Starvation : the same credit-starvation count, without a umask breakdown.

unc_m2p_rxr_inserts (uncore io, event=0xe1) : Transgress Ingress Allocations : number of allocations into the CMS Ingress, which queues up requests received from the mesh. umask: .ad_uncrd=0x01, .ak=0x02, .bl_uncrd=0x04, .iv=0x08, .ad_crd=0x10, .bl_crd=0x40, .akc_uncrd=0x80, .ad_all=0x11, .bl_all=0x44 (All == Credited + Uncredited).
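The occupancy events below accumulate the Ingress queue depth once per uclk while the inserts events count allocations, which is exactly the shape Little's law wants. A sketch, with invented counter deltas, of turning the three counts into mean depth, mean residency, and arrival rate:

def ingress_queue_stats(occupancy_sum, inserts, clockticks):
    avg_depth = occupancy_sum / clockticks     # entries resident per cycle
    avg_residency = occupancy_sum / inserts    # cycles each entry waits
    insert_rate = inserts / clockticks         # lambda, entries per cycle
    return avg_depth, avg_residency, insert_rate

depth, residency, rate = ingress_queue_stats(
    occupancy_sum=1_200_000_000,   # unc_m2p_rxr_occupancy.ad_all delta
    inserts=300_000_000,           # unc_m2p_rxr_inserts.ad_all delta
    clockticks=2_400_000_000)      # unc_m2p_clockticks delta
assert abs(depth - rate * residency) < 1e-9    # Little's law: L = lambda * W
print(f"avg depth {depth:.2f} entries, avg residency {residency:.1f} cycles")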
unc_m2p_rxr_occupancy (uncore io, event=0xe0) : Transgress Ingress Occupancy : occupancy event for the Ingress buffers in the CMS, which queue up requests received from the mesh. umask: .ad_uncrd=0x01, .ak=0x02, .bl_uncrd=0x04, .iv=0x08, .ad_crd=0x10, .bl_crd=0x20, .akc_uncrd=0x80, .ad_all=0x11, .bl_all=0x44 (All == Credited + Uncredited).

The stall events below all count the number of cycles the named Egress Buffer is stalled waiting for a TGR credit to become available, per transgress. As with the credit events earlier, the "stall0" events select Transgress 0-7 with one umask bit per transgress (.tgr0=0x01 through .tgr7=0x80) and the "stall1" events select Transgress 8-10 (.tgr8=0x01, .tgr9=0x02, .tgr10=0x04). All are "uncore io" events.

unc_m2p_stall0_no_txr_horz_crd_ad_ag0 (event=0xd0) / unc_m2p_stall1_no_txr_horz_crd_ad_ag0 (event=0xd1) : Stall on No AD Agent0 Transgress Credits.
unc_m2p_stall0_no_txr_horz_crd_ad_ag1 (event=0xd2) / unc_m2p_stall1_no_txr_horz_crd_ad_ag1_1 (event=0xd3) : Stall on No AD Agent1 Transgress Credits.
unc_m2p_stall0_no_txr_horz_crd_bl_ag0 (event=0xd4) / unc_m2p_stall1_no_txr_horz_crd_bl_ag0_1 (event=0xd5) : Stall on No BL Agent0 Transgress Credits.
unc_m2p_stall0_no_txr_horz_crd_bl_ag1 (event=0xd6) / unc_m2p_stall1_no_txr_horz_crd_bl_ag1_1 (event=0xd7) : Stall on No BL Agent1 Transgress Credits.
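Because each stall/credit event pair encodes the transgress as a one-hot umask split across two event codes, decoding an (event-pair, umask) sample back to a TGR index is a short bit trick. A hypothetical decoder (the function name is illustrative), assuming single-bit umasks as in every record above:

def tgr_from_umask(event_pair_index, umask):
    """Map a one-hot umask to its transgress number.

    event_pair_index is 0 for the "*0" event (TGR0-7) and 1 for the
    companion "*1" event (TGR8-10)."""
    bit = umask.bit_length() - 1
    assert umask == 1 << bit, "expected a single-bit umask"
    return bit + 8 * event_pair_index

assert tgr_from_umask(0, 0x80) == 7    # e.g. event=0xd0, umask=0x80 -> TGR7
assert tgr_from_umask(1, 0x04) == 10   # e.g. event=0xd1, umask=0x04 -> TGR10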
unc_m2p_txc_cycles_full (uncore io, event=0x25)
  Egress (to CMS) Cycles Full: counts the number of cycles when the M2PCIe Egress is full.  Each umask tracks messages for one of the two CMS ports used by the M2PCIe agent.
    .ad_0  umask=0x01
    .ad_1  umask=0x10
    .ak_0  umask=0x02
    .ak_1  umask=0x20
    .bl_0  umask=0x04
    .bl_1  umask=0x40

unc_m2p_txc_cycles_ne (uncore io, event=0x23)
  Egress (to CMS) Cycles Not Empty: counts the number of cycles when the M2PCIe Egress is not empty.  Each umask tracks messages for one of the two CMS ports used by the M2PCIe agent.  This can be used in conjunction with the M2PCIe Ingress Occupancy Accumulator event to calculate average queue occupancy; multiple egress buffers can be tracked at a given time using multiple counters.
    .ad_0  umask=0x01
    .ad_1  umask=0x10
    .ak_0  umask=0x02
    .ak_1  umask=0x20
    .bl_0  umask=0x04
    .bl_1  umask=0x40
unc_m2p_txc_inserts (uncore io, event=0x24)
  Egress (to CMS) Ingress: counts the number of messages inserted into the M2PCIe Egress queue.  Each umask tracks messages for one of the two CMS ports used by the M2PCIe agent.  This can be used in conjunction with the M2PCIe Ingress Occupancy Accumulator event to calculate average queue occupancy.
    .ad_0      umask=0x01
    .ad_1      umask=0x10
    .ak_crd_0  umask=0x08
    .ak_crd_1  umask=0x80
    .bl_0      umask=0x04
    .bl_1      umask=0x40
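The pairing suggested above (not-empty cycles, inserts, and the occupancy accumulator, which is listed elsewhere in this table) gives the standard queueing derivations. A sketch of the arithmetic, with invented sample counts:

    # All three values are made-up examples, not measurements.
    occupancy_sum = 1_200_000  # occupancy accumulator: sum of queue depth over cycles
    cycles_ne     =   400_000  # unc_m2p_txc_cycles_ne.*: cycles the queue was not empty
    inserts       =   150_000  # unc_m2p_txc_inserts.*: messages enqueued

    avg_depth_while_busy = occupancy_sum / cycles_ne  # average entries when non-empty
    avg_latency_cycles   = occupancy_sum / inserts    # Little's law: avg time in queue
    print(f"avg depth while busy: {avg_depth_while_busy:.2f}")
    print(f"avg queue latency:    {avg_latency_cycles:.2f} cycles")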
unc_m2p_txr_horz_ads_used (uncore io, event=0xa6)
  CMS Horizontal ADS Used: number of packets using the Horizontal Anti-Deadlock Slot, broken down by ring type and CMS Agent.  All == Credited + Uncredited.
    .ad_all    umask=0x11  AD - All
    .ad_crd    umask=0x10  AD - Credited
    .ad_uncrd  umask=0x01  AD - Uncredited
    .bl_all    umask=0x44  BL - All
    .bl_crd    umask=0x40  BL - Credited
    .bl_uncrd  umask=0x04  BL - Uncredited

unc_m2p_txr_horz_bypass (uncore io, event=0xa7)
  CMS Horizontal Bypass Used: number of packets bypassing the Horizontal Egress, broken down by ring type and CMS Agent.  All == Credited + Uncredited.
    .ad_all     umask=0x11  AD - All
    .ad_crd     umask=0x10  AD - Credited
    .ad_uncrd   umask=0x01  AD - Uncredited
    .ak         umask=0x02  AK
    .akc_uncrd  umask=0x80  AKC - Uncredited
    .bl_all     umask=0x44  BL - All
    .bl_crd     umask=0x40  BL - Credited
    .bl_uncrd   umask=0x04  BL - Uncredited
    .iv         umask=0x08  IV
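Note that the .*_all umasks are literally the OR of the credited and uncredited bits, which is why "All == Credited + Uncredited" holds at the bit level as well as in the counts. A one-line check:

    # AD: uncredited=0x01, credited=0x10, all=0x11; BL: 0x04, 0x40, 0x44.
    assert 0x11 == 0x10 | 0x01 and 0x44 == 0x40 | 0x04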
unc_m2p_txr_horz_cycles_full (uncore io, event=0xa2)
  Cycles CMS Horizontal Egress Queue is Full: cycles the Transgress buffers in the Common Mesh Stop are full.  The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.  All == Credited + Uncredited.
    .ad_all     umask=0x11  AD - All
    .ad_crd     umask=0x10  AD - Credited
    .ad_uncrd   umask=0x01  AD - Uncredited
    .ak         umask=0x02  AK
    .akc_uncrd  umask=0x80  AKC - Uncredited
    .bl_all     umask=0x44  BL - All
    .bl_crd     umask=0x40  BL - Credited
    .bl_uncrd   umask=0x04  BL - Uncredited
    .iv         umask=0x08  IV
unc_m2p_txr_horz_cycles_ne (uncore io, event=0xa3)
  Cycles CMS Horizontal Egress Queue is Not Empty: cycles the Transgress buffers in the Common Mesh Stop are not empty.  The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.  All == Credited + Uncredited.
    .ad_all     umask=0x11  AD - All
    .ad_crd     umask=0x10  AD - Credited
    .ad_uncrd   umask=0x01  AD - Uncredited
    .ak         umask=0x02  AK
    .akc_uncrd  umask=0x80  AKC - Uncredited
    .bl_all     umask=0x44  BL - All
    .bl_crd     umask=0x40  BL - Credited
    .bl_uncrd   umask=0x04  BL - Uncredited
    .iv         umask=0x08  IV
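Full cycles and not-empty cycles combine naturally into a backpressure ratio: of the cycles the horizontal egress had anything queued, how often was it also completely full? A sketch with invented values:

    full_cycles      =  25_000   # unc_m2p_txr_horz_cycles_full.ad_all (example value)
    not_empty_cycles = 300_000   # unc_m2p_txr_horz_cycles_ne.ad_all (example value)
    full_fraction = full_cycles / not_empty_cycles if not_empty_cycles else 0.0
    print(f"egress full for {full_fraction:.1%} of its busy cycles")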
unc_m2p_txr_horz_inserts (uncore io, event=0xa1)
  CMS Horizontal Egress Inserts: number of allocations into the Transgress buffers in the Common Mesh Stop.  The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.  All == Credited + Uncredited.
    .ad_all     umask=0x11  AD - All
    .ad_crd     umask=0x10  AD - Credited
    .ad_uncrd   umask=0x01  AD - Uncredited
    .ak         umask=0x02  AK
    .akc_uncrd  umask=0x80  AKC - Uncredited
    .bl_all     umask=0x44  BL - All
    .bl_crd     umask=0x40  BL - Credited
    .bl_uncrd   umask=0x04  BL - Uncredited
    .iv         umask=0x08  IV
unc_m2p_txr_horz_nack (uncore io, event=0xa4)
  CMS Horizontal Egress NACKs: counts the number of Egress packets NACK'ed onto the Horizontal Ring.  All == Credited + Uncredited.
    .ad_all     umask=0x11  AD - All
    .ad_crd     umask=0x10  AD - Credited
    .ad_uncrd   umask=0x01  AD - Uncredited
    .ak         umask=0x02  AK
    .akc_uncrd  umask=0x80  AKC - Uncredited
    .bl_all     umask=0x44  BL - All
    .bl_crd     umask=0x40  BL - Credited
    .bl_uncrd   umask=0x04  BL - Uncredited
    .iv         umask=0x08  IV

unc_m2p_txr_horz_occupancy (uncore io, event=0xa0)
  CMS Horizontal Egress Occupancy: occupancy event for the Transgress buffers in the Common Mesh Stop.  The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.  All == Credited + Uncredited.
    .ad_all     umask=0x11  AD - All
    .ad_crd     umask=0x10  AD - Credited
    .ad_uncrd   umask=0x01  AD - Uncredited
    .ak         umask=0x02  AK
    .akc_uncrd  umask=0x80  AKC - Uncredited
    .bl_all     umask=0x44  BL - All
    .bl_crd     umask=0x40  BL - Credited
    .bl_uncrd   umask=0x04  BL - Uncredited
    .iv         umask=0x08  IV
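NACKs divided by inserts gives a rough retry rate for the horizontal egress; a persistently high ratio would point at ring contention. This is a derived metric suggested here, not one defined by the event list:

    nacks   =   1_200   # unc_m2p_txr_horz_nack.bl_all (example value)
    inserts = 480_000   # unc_m2p_txr_horz_inserts.bl_all (example value)
    print(f"NACKs per insert: {nacks / inserts:.4%}")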
unc_m2p_txr_horz_starved (uncore io, event=0xa5)
  CMS Horizontal Egress Injection Starvation: counts injection starvation, triggered when the CMS Transgress buffer cannot send a transaction onto the Horizontal ring for a long period of time.  All == Credited + Uncredited.
    .ad_all     umask=0x01  AD - All
    .ad_uncrd   umask=0x01  AD - Uncredited
    .ak         umask=0x02  AK
    .akc_uncrd  umask=0x80  AKC - Uncredited
    .bl_all     umask=0x04  BL - All
    .bl_uncrd   umask=0x04  BL - Uncredited
    .iv         umask=0x08  IV
unc_m2p_txr_vert_ads_used (uncore io, event=0x9c)
  CMS Vertical ADS Used: number of packets using the Vertical Anti-Deadlock Slot, broken down by ring type and CMS Agent.
    .ad_ag0  umask=0x01  AD - Agent 0
    .ad_ag1  umask=0x10  AD - Agent 1
    .bl_ag0  umask=0x04  BL - Agent 0
    .bl_ag1  umask=0x40  BL - Agent 1

unc_m2p_txr_vert_bypass (uncore io, event=0x9d)
  CMS Vertical Bypass Used: number of packets bypassing the Vertical Egress, broken down by ring type and CMS Agent.
    .ad_ag0  umask=0x01  AD - Agent 0
    .ad_ag1  umask=0x10  AD - Agent 1
    .ak_ag0  umask=0x02  AK - Agent 0
    .ak_ag1  umask=0x20  AK - Agent 1
    .bl_ag0  umask=0x04  BL - Agent 0
    .bl_ag1  umask=0x40  BL - Agent 1
    .iv_ag1  umask=0x08  IV - Agent 1

unc_m2p_txr_vert_bypass_1 (uncore io, event=0x9e)
  CMS Vertical Bypass Used: number of packets bypassing the Vertical Egress, broken down by ring type and CMS Agent.
    .akc_ag0  umask=0x01  AKC - Agent 0
    .akc_ag1  umask=0x02  AKC - Agent 1
unc_m2p_txr_vert_cycles_full0 (uncore io, event=0x94)
  Cycles CMS Vertical Egress Queue Is Full: number of cycles the Common Mesh Stop Egress was full.  The Egress is used to queue up requests destined for the Vertical Ring on the Mesh.  Each umask selects ring transactions from one agent destined for one ring.
    .ad_ag0  umask=0x01  AD - Agent 0 (e.g. outbound requests, snoop requests, and snoop responses)
    .ad_ag1  umask=0x10  AD - Agent 1 (commonly outbound requests)
    .ak_ag0  umask=0x02  AK - Agent 0 (commonly credit returns and GO responses)
    .ak_ag1  umask=0x20  AK - Agent 1
    .bl_ag0  umask=0x04  BL - Agent 0 (commonly data sent from the cache to various destinations)
    .bl_ag1  umask=0x40  BL - Agent 1 (commonly writeback data transferred to the cache)
    .iv_ag0  umask=0x08  IV - Agent 0 (commonly snoops to the cores)

unc_m2p_txr_vert_cycles_full1 (uncore io, event=0x95)
  Cycles CMS Vertical Egress Queue Is Full: number of cycles the Common Mesh Stop Egress was full, for the AKC ring.
    .akc_ag0  umask=0x01  AKC - Agent 0
    .akc_ag1  umask=0x02  AKC - Agent 1
unc_m2p_txr_vert_cycles_ne0 (uncore io, event=0x96)
  Cycles CMS Vertical Egress Queue Is Not Empty: number of cycles the Common Mesh Stop Egress was not empty.  The Egress is used to queue up requests destined for the Vertical Ring on the Mesh.  Each umask selects ring transactions from one agent destined for one ring.
    .ad_ag0  umask=0x01  AD - Agent 0 (e.g. outbound requests, snoop requests, and snoop responses)
    .ad_ag1  umask=0x10  AD - Agent 1 (commonly outbound requests)
    .ak_ag0  umask=0x02  AK - Agent 0 (commonly credit returns and GO responses)
    .ak_ag1  umask=0x20  AK - Agent 1
    .bl_ag0  umask=0x04  BL - Agent 0 (commonly data sent from the cache to various destinations)
    .bl_ag1  umask=0x40  BL - Agent 1 (commonly writeback data transferred to the cache)
    .iv_ag0  umask=0x08  IV - Agent 0 (commonly snoops to the cores)

unc_m2p_txr_vert_cycles_ne1 (uncore io, event=0x97)
  Cycles CMS Vertical Egress Queue Is Not Empty: number of cycles the Common Mesh Stop Egress was not empty, for the AKC ring.
    .akc_ag0  umask=0x01  AKC - Agent 0
    .akc_ag1  umask=0x02  AKC - Agent 1
unc_m2p_txr_vert_inserts0 (uncore io, event=0x92)
  CMS Vert Egress Allocations: number of allocations into the Common Mesh Stop Egress.  The Egress is used to queue up requests destined for the Vertical Ring on the Mesh.  Each umask selects ring transactions from one agent destined for one ring.
    .ad_ag0  umask=0x01  AD - Agent 0 (e.g. outbound requests, snoop requests, and snoop responses)
    .ad_ag1  umask=0x10  AD - Agent 1 (commonly outbound requests)
    .ak_ag0  umask=0x02  AK - Agent 0 (commonly credit returns and GO responses)
    .ak_ag1  umask=0x20  AK - Agent 1
    .bl_ag0  umask=0x04  BL - Agent 0 (commonly data sent from the cache to various destinations)
    .bl_ag1  umask=0x40  BL - Agent 1 (commonly writeback data transferred to the cache)
    .iv_ag0  umask=0x08  IV - Agent 0 (commonly snoops to the cores)

unc_m2p_txr_vert_inserts1 (uncore io, event=0x93)
  CMS Vert Egress Allocations: number of allocations into the Common Mesh Stop Egress, for the AKC ring.
    .akc_ag0  umask=0x01  AKC - Agent 0
    .akc_ag1  umask=0x02  AKC - Agent 1
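Because every family here shares one event code and distinguishes sub-events purely by umask, a whole family can be expanded into perf event specs mechanically. A sketch reusing the assumed PMU instance name from earlier:

    VERT_INSERTS0 = {  # event=0x92, umasks from the table above
        "ad_ag0": 0x01, "ak_ag0": 0x02, "bl_ag0": 0x04, "iv_ag0": 0x08,
        "ad_ag1": 0x10, "ak_ag1": 0x20, "bl_ag1": 0x40,
    }

    def family_specs(event, umasks, pmu="uncore_m2pcie_0"):
        # One spec per umask, e.g. "uncore_m2pcie_0/event=0x92,umask=0x1/".
        return [f"{pmu}/event={event:#x},umask={u:#x}/" for u in umasks.values()]

    # Comma-joined, ready for: perf stat -a -e <specs> -- sleep 1
    print(",".join(family_specs(0x92, VERT_INSERTS0)))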
unc_m2p_txr_vert_nack0 (uncore io, event=0x98)
  CMS Vertical Egress NACKs: counts the number of Egress packets NACK'ed onto the Vertical Ring.
    .ad_ag0  umask=0x01  AD - Agent 0
    .ad_ag1  umask=0x10  AD - Agent 1
    .ak_ag0  umask=0x02  AK - Agent 0
    .ak_ag1  umask=0x20  AK - Agent 1
    .bl_ag0  umask=0x04  BL - Agent 0
    .bl_ag1  umask=0x40  BL - Agent 1
    .iv_ag0  umask=0x08  IV

unc_m2p_txr_vert_nack1 (uncore io, event=0x99)
  CMS Vertical Egress NACKs: counts the number of Egress packets NACK'ed onto the Vertical Ring.
    .akc_ag0  umask=0x01  AKC - Agent 0
    .akc_ag1  umask=0x02  AKC - Agent 1

unc_m2p_txr_vert_occupancy0 (uncore io, event=0x90)
  CMS Vert Egress Occupancy: occupancy event for the Egress buffers in the Common Mesh Stop.  The egress is used to queue up requests destined for the Vertical Ring on the Mesh.  Each umask selects ring transactions from one agent destined for one ring.
    .ad_ag0  umask=0x01  AD - Agent 0 (e.g. outbound requests, snoop requests, and snoop responses)
    .ad_ag1  umask=0x10  AD - Agent 1 (commonly outbound requests)
    .ak_ag0  umask=0x02  AK - Agent 0 (commonly credit returns and GO responses)
    .ak_ag1  umask=0x20  AK - Agent 1
    .bl_ag0  umask=0x04  BL - Agent 0 (commonly data sent from the cache to various destinations)
    .bl_ag1  umask=0x40  BL - Agent 1 (commonly writeback data transferred to the cache)
    .iv_ag0  umask=0x08  IV - Agent 0 (commonly snoops to the cores)

unc_m2p_txr_vert_occupancy1 (uncore io, event=0x91)
  CMS Vert Egress Occupancy: occupancy event for the Egress buffers in the Common Mesh Stop, for the AKC ring.
    .akc_ag0  umask=0x01  AKC - Agent 0
    .akc_ag1  umask=0x02  AKC - Agent 1
unc_m2p_txr_vert_starved0 (uncore io, event=0x9a)
  CMS Vertical Egress Injection Starvation: counts injection starvation, triggered when the CMS Egress cannot send a transaction onto the Vertical ring for a long period of time.
    .ad_ag0  umask=0x01  AD - Agent 0
    .ad_ag1  umask=0x10  AD - Agent 1
    .ak_ag0  umask=0x02  AK - Agent 0
    .ak_ag1  umask=0x20  AK - Agent 1
    .bl_ag0  umask=0x04  BL - Agent 0
    .bl_ag1  umask=0x40  BL - Agent 1
    .iv_ag0  umask=0x08  IV
unc_m2p_txr_vert_starved1 (uncore io, event=0x9b)
  CMS Vertical Egress Injection Starvation: counts injection starvation, triggered when the CMS Egress cannot send a transaction onto the Vertical ring for a long period of time.
    .akc_ag0  umask=0x01  AKC - Agent 0
    .akc_ag1  umask=0x02  AKC - Agent 1
    .tgc      umask=0x04  TGC

unc_m2p_vert_ring_ad_in_use (uncore io, event=0xb0)
  Vertical AD Ring In Use: counts the number of cycles that the Vertical AD ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.  We really have two rings -- a clockwise ring and a counter-clockwise ring.  On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring; on the right side of the ring, this is reversed.  The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side, so (for example) in a 4c part, CBo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.
    .up_even  umask=0x01  Up and Even
    .up_odd   umask=0x02  Up and Odd
    .dn_even  umask=0x04  Down and Even
    .dn_odd   umask=0x08  Down and Odd
unc_m2p_vert_ring_akc_in_use (uncore io, event=0xb4)
  Vertical AKC Ring In Use: counts the number of cycles that the Vertical AKC ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.  We really have two rings in JKT -- a clockwise ring and a counter-clockwise ring.  On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring; on the right side of the ring, this is reversed.  The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side, so (for example) in a 4c part, CBo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.
    .up_even  umask=0x01  Up and Even
    .up_odd   umask=0x02  Up and Odd
    .dn_even  umask=0x04  Down and Even
    .dn_odd   umask=0x08  Down and Odd
unc_m2p_vert_ring_ak_in_use (uncore io, event=0xb1)
  Vertical AK Ring In Use: counts the number of cycles that the Vertical AK ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.  We really have two rings -- a clockwise ring and a counter-clockwise ring.  On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring; on the right side of the ring, this is reversed.  The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side, so (for example) in a 4c part, CBo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.
    .up_even  umask=0x01  Up and Even
    .up_odd   umask=0x02  Up and Odd
    .dn_even  umask=0x04  Down and Even
    .dn_odd   umask=0x08  Down and Odd
The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring.  In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ringunc_m2p_vert_ring_ak_in_use.up_evenuncore ioVertical AK Ring In Use : Up and Evenevent=0xb1,umask=101Vertical AK Ring In Use : Up and Even : Counts the number of cycles that the Vertical AK ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings in -- a clockwise ring and a counter-clockwise ring.  On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring.  On the right side of the ring, this is reversed.  The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring.  In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ringunc_m2p_vert_ring_ak_in_use.up_odduncore ioVertical AK Ring In Use : Up and Oddevent=0xb1,umask=201Vertical AK Ring In Use : Up and Odd : Counts the number of cycles that the Vertical AK ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings in -- a clockwise ring and a counter-clockwise ring.  On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring.  On the right side of the ring, this is reversed.  The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring.  In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ringunc_m2p_vert_ring_bl_in_use.dn_evenuncore ioVertical BL Ring in Use : Down and Evenevent=0xb2,umask=401Vertical BL Ring in Use : Down and Even : Counts the number of cycles that the Vertical BL ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from  the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring.  On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring.  On the right side of the ring, this is reversed.  The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring.  In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ringunc_m2p_vert_ring_bl_in_use.dn_odduncore ioVertical BL Ring in Use : Down and Oddevent=0xb2,umask=801Vertical BL Ring in Use : Down and Odd : Counts the number of cycles that the Vertical BL ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from  the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring.  On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring.  On the right side of the ring, this is reversed.  
The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring.  In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ringunc_m2p_vert_ring_bl_in_use.up_evenuncore ioVertical BL Ring in Use : Up and Evenevent=0xb2,umask=101Vertical BL Ring in Use : Up and Even : Counts the number of cycles that the Vertical BL ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from  the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring.  On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring.  On the right side of the ring, this is reversed.  The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring.  In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ringunc_m2p_vert_ring_bl_in_use.up_odduncore ioVertical BL Ring in Use : Up and Oddevent=0xb2,umask=201Vertical BL Ring in Use : Up and Odd : Counts the number of cycles that the Vertical BL ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from  the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring.  On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring.  On the right side of the ring, this is reversed.  The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring.  In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ringunc_m2p_vert_ring_iv_in_use.dnuncore ioVertical IV Ring in Use : Downevent=0xb3,umask=401Vertical IV Ring in Use : Down : Counts the number of cycles that the Vertical IV ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.  There is only 1 IV ring.  Therefore, if one wants to monitor the Even ring, they should select both UP_EVEN and DN_EVEN.  To monitor the Odd ring, they should select both UP_ODD and DN_ODDunc_m2p_vert_ring_iv_in_use.upuncore ioVertical IV Ring in Use : Upevent=0xb3,umask=101Vertical IV Ring in Use : Up : Counts the number of cycles that the Vertical IV ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.  There is only 1 IV ring.  Therefore, if one wants to monitor the Even ring, they should select both UP_EVEN and DN_EVEN.  To monitor the Odd ring, they should select both UP_ODD and DN_ODDunc_m2p_vert_ring_tgc_in_use.dn_evenuncore ioVertical TGC Ring In Use : Down and Evenevent=0xb5,umask=401Vertical TGC Ring In Use : Down and Even : Counts the number of cycles that the Vertical TGC ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings in JKT -- a clockwise ring and a counter-clockwise ring.  
On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring.  On the right side of the ring, this is reversed.  The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring.  In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ringunc_m2p_vert_ring_tgc_in_use.dn_odduncore ioVertical TGC Ring In Use : Down and Oddevent=0xb5,umask=801Vertical TGC Ring In Use : Down and Odd : Counts the number of cycles that the Vertical TGC ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings in JKT -- a clockwise ring and a counter-clockwise ring.  On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring.  On the right side of the ring, this is reversed.  The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring.  In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ringunc_m2p_vert_ring_tgc_in_use.up_evenuncore ioVertical TGC Ring In Use : Up and Evenevent=0xb5,umask=101Vertical TGC Ring In Use : Up and Even : Counts the number of cycles that the Vertical TGC ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings in JKT -- a clockwise ring and a counter-clockwise ring.  On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring.  On the right side of the ring, this is reversed.  The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring.  In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ringunc_m2p_vert_ring_tgc_in_use.up_odduncore ioVertical TGC Ring In Use : Up and Oddevent=0xb5,umask=201Vertical TGC Ring In Use : Up and Odd : Counts the number of cycles that the Vertical TGC ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings in JKT -- a clockwise ring and a counter-clockwise ring.  On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring.  On the right side of the ring, this is reversed.  The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring.  In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ringunc_m_act_count.alluncore memoryDRAM Activate Count : All Activatesevent=1,umask=0xb01DRAM Activate Count : All Activates : Counts the number of DRAM Activate commands sent on this channel.  Activate commands are issued to open up a page on the DRAM devices so that it can be read or written to with a CAS.  
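As a practical note, these event=/umask= pairs map directly onto a raw perf_event config value on PMUs that use the common Intel layout (event select in config bits 0-7, umask in bits 8-15). A minimal Python sketch, assuming that layout; the authoritative definition for any given PMU lives in /sys/bus/event_source/devices/<pmu>/format/:

    # Minimal sketch: parse an "event=0xb0,umask=1" style encoding into a
    # raw perf config value, assuming event select in config[7:0] and
    # umask in config[15:8] (the common Intel PMU layout).
    def encode(spec: str) -> int:
        fields = dict(kv.split("=", 1) for kv in spec.split(","))
        event = int(fields.get("event", "0"), 0)
        umask = int(fields.get("umask", "0"), 0)
        return event | (umask << 8)

    print(hex(encode("event=0xb0,umask=1")))  # -> 0x1b0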
iMC (uncore memory) events follow.

DRAM activate and CAS events. Activate commands are issued to open a page on the DRAM devices so that it can be read or written with a CAS; one can calculate the number of Page Misses by subtracting the number of Page Miss precharges from the number of Activates.

unc_m_act_count.all   event=1,umask=0xb  DRAM Activate Count : All Activates
unc_m_act_count.byp   event=1,umask=8    DRAM Activate Count : Activate due to Bypass
unc_m_cas_count.all              event=4,umask=0x3f  All DRAM CAS commands issued on this channel
unc_m_cas_count.rd               event=4,umask=0xf   All DRAM read CAS commands issued, w/ and w/o auto-pre, including underfills
unc_m_cas_count.rd_pre_reg       event=4,umask=2     DRAM RD_CAS commands w/auto-pre. Counts the total number of DRAM Read CAS commands issued on this channel, both regular RD CAS and those with explicit Precharge. AutoPre is only used in systems with a closed page policy. Not filtered by major mode, since RD_CAS is not issued during WMM (with the exception of underfills).
unc_m_cas_count.rd_pre_underfill event=4,umask=8     DRAM RD_CAS and WR_CAS commands (underfill w/auto-pre)
unc_m_cas_count.rd_reg           event=4,umask=1     All DRAM read CAS commands issued, not including underfills
unc_m_cas_count.rd_underfill     event=4,umask=4     DRAM underfill read CAS commands issued (reads due to a partial write)
unc_m_cas_count.wr               event=4,umask=0x30  All DRAM write CAS commands issued, w/ and w/o auto-pre
unc_m_cas_count.wr_nonpre        event=4,umask=0x10  DRAM WR_CAS commands w/o auto-pre
unc_m_cas_count.wr_pre           event=4,umask=0x20  DRAM WR_CAS commands w/ auto-pre
unc_m_clockticks_freerun  event=0xff,umask=0x10  Free running counter that increments for the Memory Controller
unc_m_dram_pre_all        event=0x44              DRAM Precharge All Commands (times the precharge-all command was sent)
unc_m_dram_refresh.high           event=0x45,umask=4  Number of DRAM Refreshes Issued (high)
unc_m_dram_refresh.opportunistic  event=0x45,umask=1  Number of DRAM Refreshes Issued (opportunistic)
unc_m_dram_refresh.panic          event=0x45,umask=2  Number of DRAM Refreshes Issued (panic)
unc_m_hclockticks     event=0xff  Half clockticks for IMC
unc_m_parity_errors   event=0x2c  UNC_M_PARITY_ERRORS
unc_m_pcls.rd     event=0xa0,umask=1  UNC_M_PCLS.RD
unc_m_pcls.total  event=0xa0,umask=4  UNC_M_PCLS.TOTAL
unc_m_pcls.wr     event=0xa0,umask=2  UNC_M_PCLS.WR

PMM (persistent memory) command events:
unc_m_pmm_cmd1.all       event=0xea,umask=1     All commands issued to PMM
unc_m_pmm_cmd1.misc      event=0xea,umask=0x80  Misc commands (error, flow ACKs)
unc_m_pmm_cmd1.misc_gnt  event=0xea,umask=0x40  Misc GNTs
unc_m_pmm_cmd1.rd        event=0xea,umask=2     Read requests issued to the PMM RPQ
unc_m_pmm_cmd1.rpq_gnts  event=0xea,umask=0x10  RPQ GNTs
unc_m_pmm_cmd1.ufill_rd  event=0xea,umask=8     Underfill read commands, due to a partial write, issued to PMM
unc_m_pmm_cmd1.wpq_gnts  event=0xea,umask=0x20  Underfill GNTs
unc_m_pmm_cmd1.wr        event=0xea,umask=4     Write commands issued to PMM
unc_m_pmm_cmd2.nodata_exp       event=0xeb,umask=2     Expected no-data packet (ERID matched NDP encoding)
unc_m_pmm_cmd2.nodata_unexp     event=0xeb,umask=4     Unexpected no-data packet (ERID matched a read, but data was an NDP)
unc_m_pmm_cmd2.opp_rd           event=0xeb,umask=1     Opportunistic reads
unc_m_pmm_cmd2.pmm_ecc_error    event=0xeb,umask=0x20  ECC errors
unc_m_pmm_cmd2.pmm_erid_error   event=0xeb,umask=0x40  ERID detectable parity error
unc_m_pmm_cmd2.pmm_erid_starved event=0xeb,umask=0x80  PMM Commands - Part 2 (ERID starved)
unc_m_pmm_cmd2.reqs_slot0       event=0xeb,umask=8     Read requests - slot 0
unc_m_pmm_cmd2.reqs_slot1       event=0xeb,umask=0x10  Read requests - slot 1
unc_m_pmm_rpq_inserts  event=0xe3  Read requests allocated in the PMM Read Pending Queue, both ISOCH and non-ISOCH
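Since each read CAS moves one cache line, the CAS counts above convert directly into bandwidth. A sketch, assuming 64-byte lines and that perf on the target machine exposes the unc_m_cas_count.rd alias (counting uncore events needs appropriate privileges):

    # Sketch: estimate DRAM read bandwidth from CAS counts, assuming each
    # read CAS transfers one 64-byte cache line.
    import subprocess

    def dram_read_mb_per_s(seconds: float = 1.0) -> float:
        result = subprocess.run(
            ["perf", "stat", "-a", "-x,", "-e", "unc_m_cas_count.rd",
             "sleep", str(seconds)],
            capture_output=True, text=True,
        )
        # perf stat -x, writes CSV to stderr; field 1 is the count,
        # summed here across all memory-controller instances.
        cas = sum(int(line.split(",")[0])
                  for line in result.stderr.splitlines()
                  if line.split(",")[0].isdigit())
        return cas * 64 / seconds / 1e6

    print(f"{dram_read_mb_per_s():.0f} MB/s")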
unc_m_pmm_rpq_occupancy.all       event=0xe0,umask=1  Accumulates the per-cycle occupancy of the PMM Read Pending Queue
unc_m_pmm_rpq_occupancy.gnt_wait  event=0xe0,umask=4  PMM Read Pending Queue occupancy (gnt_wait)
unc_m_pmm_rpq_occupancy.no_gnt    event=0xe0,umask=2  PMM Read Pending Queue occupancy (no_gnt)
unc_m_pmm_wpq_flush      event=0xe8  UNC_M_PMM_WPQ_FLUSH
unc_m_pmm_wpq_flush_cyc  event=0xe9  UNC_M_PMM_WPQ_FLUSH_CYC
unc_m_pmm_wpq_inserts    event=0xe7  Write requests allocated in the PMM Write Pending Queue
unc_m_pmm_wpq_occupancy.all  event=0xe4,umask=1  Accumulates the per-cycle occupancy of the PMM Write Pending Queue
unc_m_pmm_wpq_occupancy.cas  event=0xe4,umask=2  PMM Write Pending Queue occupancy (cas)
unc_m_pmm_wpq_occupancy.pwr  event=0xe4,umask=4  PMM Write Pending Queue occupancy (pwr)

unc_m_power_throttle_cycles.slot0  event=0x46,umask=1  Throttle Cycles for Rank 0
unc_m_power_throttle_cycles.slot1  event=0x46,umask=2  Throttle Cycles for Rank 0
  Both count cycles while the iMC is being throttled, by either thermal constraints or PCU throttling; it is not possible to distinguish between the two. Can be filtered by rank; if multiple selected ranks are being throttled at the same time, the counter only increments by 1. Thermal throttling is performed per DIMM (3 DIMMs per channel are supported), and this ID allows filtering by DIMM.

DRAM precharge commands sent on this channel:
unc_m_pre_count.all        event=2,umask=0x1c  All DRAM Precharge commands
unc_m_pre_count.page_miss  event=2,umask=0xc   Precharge due to page miss (precharges from the bank scheduler for rd/wr requests)
unc_m_pre_count.pgt        event=2,umask=0x10  Precharge due to page table (precharges from the Page Table)
unc_m_pre_count.rd         event=2,umask=4     Precharge from the read bank scheduler
unc_m_pre_count.wr         event=2,umask=8     Precharge from the write bank scheduler

unc_m_rdb_full       event=0x19  Read Data Buffer Full
unc_m_rdb_inserts    event=0x17  Read Data Buffer Inserts
unc_m_rdb_not_empty  event=0x18  Read Data Buffer Not Empty
unc_m_rdb_occupancy  event=0x1a  Read Data Buffer Occupancy

unc_m_rpq_cycles_full_pch0  event=0x12  Read Pending Queue Full Cycles
unc_m_rpq_cycles_full_pch1  event=0x15  Read Pending Queue Full Cycles
  Count cycles when the Read Pending Queue is full. When the RPQ is full, the HA cannot issue any additional read requests into the iMC. This count should be similar to the count in the HA that tracks cycles with no RPQ credits, just somewhat smaller to account for the credit-return overhead. The RPQ is generally not expected to become full except possibly during Write Major Mode or while running with slow DRAM. These events only track non-ISOC queue entries.
unc_m_rpq_cycles_ne.pch0  event=0x11,umask=1  Read Pending Queue Not Empty
unc_m_rpq_cycles_ne.pch1  event=0x11,umask=2  Read Pending Queue Not Empty
  Count cycles the Read Pending Queue is not empty, which can be used with the RPQ occupancy count to calculate the average occupancy. The RPQ is used to schedule reads out to the memory controller and to track the requests. Requests allocate into the RPQ soon after they enter the memory controller (an entry needs credits before the HA can send to the iMC) and deallocate after the CAS command has been issued to memory. Use this filter in conjunction with the occupancy filter to correctly track average occupancies for schedulable entries and scheduled requests.
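The occupancy accumulators and not-empty cycle counters above combine into the usual derived queue statistics, and the activate/precharge relationship gives the page-miss figure described earlier. A sketch of the arithmetic, with hypothetical counter readings as inputs:

    # Sketch of the derived statistics these queue events support; the
    # arguments are hypothetical counter readings taken over one interval.
    def avg_occupancy(occupancy_sum: int, not_empty_cycles: int) -> float:
        """Average queue depth while the queue was non-empty."""
        return occupancy_sum / not_empty_cycles if not_empty_cycles else 0.0

    # Page misses per the activate-count description above:
    # activates minus page-miss precharges.
    def page_misses(act_count_all: int, pre_count_page_miss: int) -> int:
        return act_count_all - pre_count_page_miss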
Scoreboard (SB) events:
unc_m_sb_accesses.accepts  event=0xd2,umask=5    Scoreboard accesses accepted
unc_m_sb_accesses.rejects  event=0xd2,umask=0xa  Scoreboard accesses rejected
unc_m_sb_pref_inserts.pmm  event=0xda,umask=4    Scoreboard prefetch inserts : persistent memory

Deprecated scoreboard events (where given, refer to the new event named after the arrow):
unc_m_sb_accesses.fmrd_cmps  event=0xd2,umask=0x40
unc_m_sb_accesses.fmwr_cmps  event=0xd2,umask=0x80
unc_m_sb_accesses.nmrd_cmps  event=0xd2,umask=0x10
unc_m_sb_accesses.nmwr_cmps  event=0xd2,umask=0x20
unc_m_sb_canary.fmrd_starved     event=0xd9,umask=0x20  -> UNC_M_SB_CANARY.FM_RD_STARVED
unc_m_sb_canary.fmtgrwr_starved  event=0xd9,umask=0x80  -> UNC_M_SB_CANARY.FM_TGR_WR_STARVED
unc_m_sb_canary.fmwr_starved     event=0xd9,umask=0x40  -> UNC_M_SB_CANARY.FM_WR_STARVED
unc_m_sb_canary.nmrd_starved     event=0xd9,umask=8     -> UNC_M_SB_CANARY.NM_RD_STARVED
unc_m_sb_canary.nmwr_starved     event=0xd9,umask=0x10  -> UNC_M_SB_CANARY.NM_WR_STARVED
unc_m_sb_pref_occupancy.pmem     event=0xdb,umask=4     -> UNC_M_SB_PREF_OCCUPANCY.PMM
unc_m_sb_strv_alloc.fmrd    event=0xd7,umask=2     -> UNC_M_SB_STRV_ALLOC.FM_RD
unc_m_sb_strv_alloc.fmtgr   event=0xd7,umask=0x10  -> UNC_M_SB_STRV_ALLOC.FM_TGR
unc_m_sb_strv_alloc.fmwr    event=0xd7,umask=8     -> UNC_M_SB_STRV_ALLOC.FM_WR
unc_m_sb_strv_alloc.nmrd    event=0xd7,umask=1     -> UNC_M_SB_STRV_ALLOC.NM_RD
unc_m_sb_strv_alloc.nmwr    event=0xd7,umask=4     -> UNC_M_SB_STRV_ALLOC.NM_WR
unc_m_sb_strv_dealloc.fmrd  event=0xde,umask=2     -> UNC_M_SB_STRV_DEALLOC.FM_RD
unc_m_sb_strv_dealloc.fmtgr event=0xde,umask=0x10  -> UNC_M_SB_STRV_DEALLOC.FM_TGR
unc_m_sb_strv_dealloc.fmwr  event=0xde,umask=8     -> UNC_M_SB_STRV_DEALLOC.FM_WR
unc_m_sb_strv_dealloc.nmrd  event=0xde,umask=1     -> UNC_M_SB_STRV_DEALLOC.NM_RD
unc_m_sb_strv_dealloc.nmwr  event=0xde,umask=4     -> UNC_M_SB_STRV_DEALLOC.NM_WR
unc_m_sb_strv_occ.fmrd      event=0xd8,umask=2     -> UNC_M_SB_STRV_OCC.FM_RD
unc_m_sb_strv_occ.fmtgr     event=0xd8,umask=0x10  -> UNC_M_SB_STRV_OCC.FM_TGR
unc_m_sb_strv_occ.fmwr      event=0xd8,umask=8     -> UNC_M_SB_STRV_OCC.FM_WR
unc_m_sb_strv_occ.nmrd      event=0xd8,umask=1     -> UNC_M_SB_STRV_OCC.NM_RD
unc_m_sb_strv_occ.nmwr      event=0xd8,umask=4     -> UNC_M_SB_STRV_OCC.NM_WR

2LM tag check events (near-memory cache):
unc_m_tagchk.hit         event=0xd3,umask=1     Hit in Near Memory Cache
unc_m_tagchk.miss_clean  event=0xd3,umask=2     Miss, no data in this line
unc_m_tagchk.miss_dirty  event=0xd3,umask=4     Miss, existing data may be evicted to Far Memory
unc_m_tagchk.nm_rd_hit   event=0xd3,umask=8     Read Hit in Near Memory Cache
unc_m_tagchk.nm_wr_hit   event=0xd3,umask=0x10  Write Hit in Near Memory Cache

unc_m_wpq_cycles_full_pch0  event=0x22  Write Pending Queue Full Cycles
unc_m_wpq_cycles_full_pch1  event=0x16  Write Pending Queue Full Cycles
  Count cycles when the Write Pending Queue is full. When the WPQ is full, the HA cannot issue any additional write requests into the iMC. This count should be similar to the count in the CHA that tracks cycles with no WPQ credits, just somewhat smaller to account for the credit-return overhead.
unc_m_wpq_cycles_ne.pch0  event=0x21,umask=1  Write Pending Queue Not Empty
unc_m_wpq_cycles_ne.pch1  event=0x21,umask=2  Write Pending Queue Not Empty
  Count cycles the Write Pending Queue is not empty, which can be used with the WPQ occupancy accumulation count to calculate the average queue occupancy. The WPQ is used to schedule writes out to the memory controller and to track them. Requests allocate into the WPQ soon after they enter the memory controller (an entry needs credits before the CHA can send to the iMC) and deallocate after being issued to DRAM. Write requests complete, from the perspective of the rest of the system, as soon as they have posted to the iMC, which is not to be confused with actually performing the write to DRAM; the average latency of this queue is therefore not useful for deconstructing intermediate write latencies.
unc_m_wpq_read_hit.pch0   event=0x23,umask=1  Write Pending Queue CAM Match (reads)
unc_m_wpq_read_hit.pch1   event=0x23,umask=2  Write Pending Queue CAM Match (reads)
unc_m_wpq_write_hit.pch0  event=0x24,umask=1  Write Pending Queue CAM Match (writes)
unc_m_wpq_write_hit.pch1  event=0x24,umask=2  Write Pending Queue CAM Match (writes)
  Count requests that hit in the WPQ (write-pending queue). The iMC allows writes and reads to pass up other writes to different addresses; before a read or a write is issued, it first CAMs the WPQ to check for a pending write to that address. Reads that hit pull their data directly from the WPQ instead of going to memory; writes that hit overwrite the existing data; partial writes that hit do not need underfill reads and simply update their relevant sections.

PCU (uncore power) events:
unc_p_clockticks  event=0  Clockticks of the power control unit (PCU)
  The PCU runs off a fixed 1 GHz clock. Counts pclk cycles measured while the counter is enabled; the pclk, like the Memory Controller's dclk, counts at a constant rate, making it a good measure of actual wall time.
unc_p_pkg_residency_c3_cycles  event=0x2c  Package C State Residency - C3
  Counts cycles the package was in C3. Can be used with edge detect to count C3 entrances (or exits, using invert). Residency events do not include transition times.
unc_p_power_state_occupancy.cores_c0  event=0x80,umask=0x40  Number of cores in C-State : C0 and C1
  Occupancy event tracking the number of cores in the chosen C-state. Can be used by itself to get the average number of cores in that C-state, with thresholding to generate histograms, or with other PCU events and occupancy triggering to capture other details.
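Because the pclk is a fixed 1 GHz clock, unc_p_clockticks doubles as a wall-time base for the residency and occupancy events above. A sketch of the derived figures, with hypothetical counter readings as inputs:

    # Sketch: PCU-derived residency figures, using the fixed 1 GHz pclk
    # described above as the time base.
    PCLK_HZ = 1_000_000_000

    def wall_seconds(pclk_ticks: int) -> float:
        return pclk_ticks / PCLK_HZ

    def c3_residency(c3_cycles: int, pclk_ticks: int) -> float:
        """Fraction of the interval the package spent in C3."""
        return c3_cycles / pclk_ticks

    def avg_cores_in_state(occupancy_sum: int, pclk_ticks: int) -> float:
        """Average core count from a cores_cX occupancy accumulator."""
        return occupancy_sum / pclk_ticks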
unc_p_power_state_occupancy.cores_c3  event=0x80,umask=0x80  Number of cores in C-State : C3
unc_p_power_state_occupancy.cores_c6  event=0x80,umask=0xc0  Number of cores in C-State : C6 and C7
  Same occupancy semantics as cores_c0 above.

Core PMU events follow; topic labels match the perf JSON.

virtual memory:
dtlb_load_misses.walk_completed_1g   event=8,period=100003,umask=8     Page walks completed due to a demand data load to a 1G page. The address translation missed in the DTLB and further levels of TLB; the walk can end with or without a fault.
dtlb_store_misses.walk_completed_1g  event=0x49,period=100003,umask=8  Page walks completed due to a demand data store to a 1G page (same notes as above).

cache:
l1d.replacement                   event=0x51,period=2000003,umask=1  L1D data line replacements (lines brought into the L1 data cache)
l1d_pend_miss.fb_full             event=0x48,cmask=1,period=2000003,umask=2  Cycles a demand request was blocked due to Fill Buffer unavailability
l1d_pend_miss.pending_cycles_any  event=0x48,any=1,cmask=1,period=2000003,umask=1  Cycles with L1D load misses outstanding from any thread on the physical core
l2_l1d_wb_rqsts.all    event=0x28,period=200003,umask=0xf  Not-rejected writebacks from L1D to L2 cache lines in any state
l2_l1d_wb_rqsts.hit_e  event=0x28,period=200003,umask=4    Not-rejected writebacks from L1D to L2 cache lines in E state
l2_l1d_wb_rqsts.hit_m  event=0x28,period=200003,umask=8    Not-rejected writebacks from L1D to L2 cache lines in M state
l2_l1d_wb_rqsts.miss   event=0x28,period=200003,umask=1    Modified lines evicted from L1 that missed L2 (non-rejected WBs from the DCU)
l2_lines_in.all            event=0xf1,period=100003,umask=7    L2 cache lines filling L2
l2_lines_out.demand_clean  event=0xf2,period=100003,umask=1    Clean L2 cache lines evicted by demand
l2_lines_out.demand_dirty  event=0xf2,period=100003,umask=2    Dirty L2 cache lines evicted by demand
l2_lines_out.dirty_all     event=0xf2,period=100003,umask=0xa  Dirty L2 cache lines filling the L2
l2_lines_out.pf_clean      event=0xf2,period=100003,umask=4    Clean L2 cache lines evicted by the MLC prefetcher
l2_lines_out.pf_dirty      event=0xf2,period=100003,umask=8    Dirty L2 cache lines evicted by the MLC prefetcher
l2_rqsts.all_code_rd         event=0x24,period=200003,umask=0x30  All L2 code requests
l2_rqsts.all_demand_data_rd  event=0x24,period=200003,umask=3     Any demand and L1 HW prefetch data load requests to L2
l2_rqsts.all_pf              event=0x24,period=200003,umask=0xc0  All requests from the L2 hardware prefetchers
l2_rqsts.all_rfo             event=0x24,period=200003,umask=0xc   All L2 store RFO requests
l2_rqsts.code_rd_hit         event=0x24,period=200003,umask=0x10  Instruction fetches that hit the L2 cache
l2_rqsts.code_rd_miss        event=0x24,period=200003,umask=0x20  Instruction fetches that missed the L2 cache
l2_rqsts.demand_data_rd_hit  event=0x24,period=200003,umask=1     Demand data read requests that hit L2
l2_rqsts.pf_hit              event=0x24,period=200003,umask=0x40  L2 hardware prefetcher requests that hit L2
l2_rqsts.pf_miss             event=0x24,period=200003,umask=0x80  L2 hardware prefetcher requests that missed L2
l2_rqsts.rfo_hit             event=0x24,period=200003,umask=4     RFO requests that hit L2
l2_rqsts.rfo_miss            event=0x24,period=200003,umask=8     Store RFO requests that miss L2
l2_store_lock_rqsts.all      event=0x27,period=200003,umask=0xf   RFOs that access cache lines in any state
l2_store_lock_rqsts.hit_m    event=0x27,period=200003,umask=8     RFOs that hit cache lines in M state
l2_store_lock_rqsts.miss     event=0x27,period=200003,umask=1     RFOs that miss cache lines
l2_trans.all_pf              event=0xf0,period=200003,umask=8     Any MLC or LLC hardware prefetch accessing L2, including rejects
l2_trans.demand_data_rd      event=0xf0,period=200003,umask=1     Demand data read requests that access L2
longest_lat_cache.miss       event=0x2e,period=100003,umask=0x41  Core-originated cacheable demand requests that missed the last level cache
longest_lat_cache.reference  event=0x2e,period=100003,umask=0x4f  Core-originated cacheable demand requests that reference a cache line in the last level cache
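The l2_rqsts umask groups pair naturally into hit rates. A sketch, with hypothetical perf counts as inputs (note that all_demand_data_rd also includes L1 hardware-prefetch loads):

    # Sketch: L2 hit fractions from the l2_rqsts umask groups above.
    def l2_demand_read_hit_rate(demand_rd_hit: int, all_demand_rd: int) -> float:
        # all_demand_data_rd (umask=3) covers demand and L1 HW prefetch loads.
        return demand_rd_hit / all_demand_rd if all_demand_rd else 0.0

    def l2_code_hit_rate(code_rd_hit: int, code_rd_miss: int) -> float:
        total = code_rd_hit + code_rd_miss
        return code_rd_hit / total if total else 0.0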
mem_load_uops_llc_hit_retired.xsnp_hit     event=0xd2,period=20011,umask=2   Retired load uops whose data source was LLC with a cross-core snoop hit in an on-pkg core cache (Precise event)
mem_load_uops_llc_hit_retired.xsnp_hitm    event=0xd2,period=20011,umask=4   Retired load uops whose data source was a HitM response from shared LLC (Precise event)
mem_load_uops_llc_hit_retired.xsnp_miss    event=0xd2,period=20011,umask=1   Retired load uops with an LLC hit where the cross-core snoop missed in on-pkg core caches (Precise event)
mem_load_uops_llc_hit_retired.xsnp_none    event=0xd2,period=100003,umask=8  Retired load uops that hit in LLC with no snoops required (Precise event)
mem_load_uops_llc_miss_retired.local_dram  event=0xd3,period=100007,umask=1  Retired load uops that missed LLC and were serviced from local DRAM (cross-socket snoop not needed or missed)
mem_load_uops_retired.hit_lfb   event=0xd1,period=100003,umask=0x40  Retired load uops that missed L1 but hit a fill buffer allocated by a preceding miss to the same cache line, data not ready (Precise event)
mem_load_uops_retired.l1_hit    event=0xd1,period=2000003,umask=1    Retired load uops with L1 cache hits as data sources (Precise event)
mem_load_uops_retired.l1_miss   event=0xd1,period=100003,umask=8     Retired load uops whose data source followed an L1 data-cache miss (Precise event)
mem_load_uops_retired.l2_hit    event=0xd1,period=100003,umask=2     Retired load uops with L2 cache hits as data sources (Precise event)
mem_load_uops_retired.l2_miss   event=0xd1,period=50021,umask=0x10   Retired load uops with L2 cache misses as data sources (Precise event)
mem_load_uops_retired.llc_hit   event=0xd1,period=50021,umask=4      Retired load uops that hit in LLC with no snoops required (Precise event)
mem_load_uops_retired.llc_miss  event=0xd1,period=100007,umask=0x20  Misses in last-level (L3) cache; excludes unknown data source (Precise event)
mem_uops_retired.all_loads         event=0xd0,period=2000003,umask=0x81  All retired load uops (Precise event)
mem_uops_retired.all_stores        event=0xd0,period=2000003,umask=0x82  All retired store uops (Precise event)
mem_uops_retired.lock_loads        event=0xd0,period=100007,umask=0x21   Retired load uops with locked access (Precise event)
mem_uops_retired.split_loads       event=0xd0,period=100003,umask=0x41   Retired load uops that split across a cacheline boundary (Precise event)
mem_uops_retired.split_stores      event=0xd0,period=100003,umask=0x42   Retired store uops that split across a cacheline boundary (Precise event)
mem_uops_retired.stlb_miss_loads   event=0xd0,period=100003,umask=0x11   Retired load uops that miss the STLB (Precise event)
mem_uops_retired.stlb_miss_stores  event=0xd0,period=100003,umask=0x12   Retired store uops that miss the STLB (Precise event)
offcore_requests.demand_data_rd  event=0xb0,period=100003,umask=1   Demand data read requests sent to uncore
offcore_requests_buffer.sq_full  event=0xb2,period=2000003,umask=1  Cases when the offcore requests buffer cannot take more entries for this core
offcore_requests_outstanding.all_data_rd                 event=0x60,period=2000003,umask=8          Offcore outstanding cacheable Core Data Read transactions in the SuperQueue (SQ) to uncore; set cmask=1 to count cycles
offcore_requests_outstanding.cycles_with_data_rd         event=0x60,cmask=1,period=2000003,umask=8  Cycles when such transactions are present in the SQ
offcore_requests_outstanding.cycles_with_demand_code_rd  event=0x60,cmask=1,period=2000003,umask=2  Cycles with outstanding demand code reads in the SQ
offcore_requests_outstanding.cycles_with_demand_data_rd  event=0x60,cmask=1,period=2000003,umask=1  Cycles with outstanding demand data reads in the SQ
offcore_requests_outstanding.cycles_with_demand_rfo      event=0x60,cmask=1,period=2000003,umask=4  Cycles with outstanding demand RFOs in the SQ
offcore_requests_outstanding.demand_code_rd       event=0x60,period=2000003,umask=2          Offcore outstanding demand code read transactions in the SQ; set cmask=1 to count cycles
offcore_requests_outstanding.demand_data_rd       event=0x60,period=2000003,umask=1          Offcore outstanding demand data read transactions in the SQ; set cmask=1 to count cycles
offcore_requests_outstanding.demand_data_rd_ge_6  event=0x60,cmask=6,period=2000003,umask=1  Cycles with at least 6 outstanding demand data reads in the uncore queue
offcore_requests_outstanding.demand_rfo           event=0x60,period=2000003,umask=4          Offcore outstanding RFO store transactions in the SQ; set cmask=1 to count cycles
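A classic derived metric from the last two groups: average demand data read latency, in core cycles, as the outstanding-transaction accumulator divided by the number of completed requests. A sketch with hypothetical counter readings:

    # Sketch: average demand data read latency in core cycles, from
    # offcore_requests_outstanding.demand_data_rd (an occupancy
    # accumulator) over offcore_requests.demand_data_rd (a request count).
    def avg_demand_read_latency(outstanding_sum: int, requests: int) -> float:
        return outstanding_sum / requests if requests else 0.0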
All offcore_response.* events below use event=0xb7,period=100003,umask=1 with the listed offcore_rsp match value:

offcore_response.all_code_rd.llc_hit.any_response        offcore_rsp=0x3f803c0244  All demand & prefetch code reads that hit in the LLC
offcore_response.all_code_rd.llc_hit.no_snoop_needed     offcore_rsp=0x1003c0244   As above, with sibling-core snoops not needed (the core-valid bit is not set, or the shared line is present in multiple cores)
offcore_response.all_data_rd.any_response                offcore_rsp=0x000105b3    All demand & prefetch data reads
offcore_response.all_data_rd.llc_hit.any_response        offcore_rsp=0x3f803c0091  All demand & prefetch data reads that hit in the LLC
offcore_response.all_data_rd.llc_hit.hitm_other_core     offcore_rsp=0x10003c0091  As above, where the snoop to one of the sibling cores hit the line in M state and the line was forwarded
offcore_response.all_data_rd.llc_hit.hit_other_core_no_fwd  offcore_rsp=0x4003c0091  As above, where the snoops to sibling cores hit in E/S state and the line was not forwarded
offcore_response.all_data_rd.llc_hit.no_snoop_needed     offcore_rsp=0x1003c0091   As above, with sibling-core snoops not needed
offcore_response.all_reads.any_response                  offcore_rsp=0x000107f7    All data/code/rfo references (demand & prefetch)
offcore_response.all_rfo.any_response                    offcore_rsp=0x00010122    All demand & prefetch RFOs
offcore_response.all_rfo.llc_hit.any_response            offcore_rsp=0x3f803c0122  All demand & prefetch RFOs that hit in the LLC
offcore_response.all_rfo.llc_hit.no_snoop_needed         offcore_rsp=0x1003c0122   As above, with sibling-core snoops not needed
offcore_response.corewb.any_response                     offcore_rsp=0x10008       All writebacks from the core to the LLC
offcore_response.demand_code_rd.any_response             offcore_rsp=0x00010004    All demand code reads
offcore_response.demand_code_rd.llc_hit.any_response     offcore_rsp=0x3f803c0004  All demand code reads that hit in the LLC
offcore_response.demand_code_rd.llc_hit.no_snoop_needed  offcore_rsp=0x1003c0004   As above, with sibling-core snoops not needed
offcore_response.demand_data_rd.any_response             offcore_rsp=0x00010001    All demand data reads
offcore_response.demand_data_rd.llc_hit.any_response     offcore_rsp=0x3f803c0001  All demand data reads that hit in the LLC
offcore_response.demand_data_rd.llc_hit.hitm_other_core  offcore_rsp=0x10003c0001  As above, where the snoop to a sibling core hit the line in M state and the line was forwarded
offcore_response.demand_data_rd.llc_hit.hit_other_core_no_fwd  offcore_rsp=0x4003c0001  As above, where the snoops to sibling cores hit in E/S state and the line was not forwarded
offcore_response.demand_data_rd.llc_hit.no_snoop_needed  offcore_rsp=0x1003c0001   As above, with sibling-core snoops not needed
offcore_response.demand_rfo.any_response                 offcore_rsp=0x00010002    All demand RFOs
offcore_response.demand_rfo.llc_hit.any_response         offcore_rsp=0x3f803c0002  All demand data writes (RFOs) that hit in the LLC
offcore_response.demand_rfo.llc_hit.hitm_other_core      offcore_rsp=0x10003c0002  As above, where the snoop to a sibling core hit the line in M state and the line was forwarded
offcore_response.demand_rfo.llc_hit.no_snoop_needed      offcore_rsp=0x1003c0002   As above, with sibling-core snoops not needed
offcore_response.other.any_response                      offcore_rsp=0x18000       Miscellaneous accesses: port I/O, MMIO and uncacheable memory accesses, plus L2 hints sent to the LLC to keep a line from being evicted out of the core caches
offcore_response.split_lock_uc_lock.any_response         offcore_rsp=0x10400       Requests where an atomic lock spans a cache-line boundary or the lock is executed on an uncacheable address
offcore_response.streaming_stores.any_response           offcore_rsp=0x10800       Non-temporal stores

floating point:
fp_comp_ops_exe.sse_packed_double  event=0x10,period=2000003,umask=0x10  SSE* or AVX-128 FP computational packed double-precision uops issued this cycle
fp_comp_ops_exe.sse_packed_single  event=0x10,period=2000003,umask=0x40  SSE* or AVX-128 FP computational packed single-precision uops issued this cycle
fp_comp_ops_exe.sse_scalar_double  event=0x10,period=2000003,umask=0x80  SSE* or AVX-128 FP computational scalar double-precision uops executed
fp_comp_ops_exe.sse_scalar_single  event=0x10,period=2000003,umask=0x20  SSE* or AVX-128 FP computational scalar single-precision uops issued this cycle
fp_comp_ops_exe.x87                event=0x10,period=2000003,umask=1     X87 computational uops executed: FADD, FSUB, FCOM, FMUL, integer MUL and IMUL, FDIV, FPREM, FSQRT, integer DIV and IDIV. Does not distinguish an FADD used in the middle of a transcendental flow from a separate FADD instruction.
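perf also accepts these as raw events on the cpu PMU using the offcore_rsp format shown above. A sketch that builds the event string (assuming the CPU exposes an offcore_rsp format file, as this family does):

    # Sketch: build a raw offcore_response event string for perf, e.g.
    # for demand_data_rd.llc_hit.hitm_other_core as listed above.
    def offcore_event(rsp: int) -> str:
        return f"cpu/event=0xb7,umask=0x1,offcore_rsp={rsp:#x}/"

    print(offcore_event(0x10003C0001))
    # -> cpu/event=0xb7,umask=0x1,offcore_rsp=0x10003c0001/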
other_assists.avx_store   event=0xc1,period=100003,umask=8     Assists for 256-bit AVX store operations; the GSSE microcode assist is invoked whenever the hardware cannot properly handle GSSE-256b operations
other_assists.avx_to_sse  event=0xc1,period=100003,umask=0x10  Transitions from AVX-256 to legacy SSE when a penalty applies
other_assists.sse_to_avx  event=0xc1,period=100003,umask=0x20  Transitions from SSE to AVX-256 when a penalty applies
simd_fp_256.packed_double  event=0x11,period=2000003,umask=2  256-bit packed double-precision FP computational uops issued this cycle
simd_fp_256.packed_single  event=0x11,period=2000003,umask=1  256-bit packed single-precision FP computational uops issued this cycle

frontend:
dsb2mite_switches.count           event=0xab,period=2000003,umask=1  Decode Stream Buffer (DSB)-to-MITE switches
dsb2mite_switches.penalty_cycles  event=0xab,period=2000003,umask=2  Cycles DSB-to-MITE switches caused delay (true penalty cycles)
dsb_fill.exceed_dsb_lines  event=0xac,period=2000003,umask=8  DSB fills that encountered more than 3 DSB lines
icache.hit           event=0x80,period=2000003,umask=1  Instruction cache, streaming buffer and victim cache reads, both cacheable and noncacheable, including UC fetches
icache.ifetch_stall  event=0x80,period=2000003,umask=4  Cycles a code fetch stalled due to an L1 instruction-cache miss or an iTLB miss
icache.misses        event=0x80,period=200003,umask=2   Instruction cache, streaming buffer and victim cache misses, including UC accesses
idq.all_mite_cycles_any_uops  event=0x79,cmask=1,period=2000003,umask=0x24  Cycles MITE delivered at least one uop
idq.dsb_cycles    event=0x79,cmask=1,period=2000003,umask=8           Cycles uops are delivered to the Instruction Decode Queue (IDQ) from the DSB path
idq.empty         event=0x79,period=2000003,umask=2                   Cycles the IDQ is empty
idq.mite_cycles   event=0x79,cmask=1,period=2000003,umask=4           Cycles uops are delivered to the IDQ from the MITE path
idq.ms_cycles     event=0x79,cmask=1,period=2000003,umask=0x30        Cycles uops are delivered to the IDQ while the Microcode Sequencer (MS) is busy
idq.ms_dsb_cycles event=0x79,cmask=1,period=2000003,umask=0x10        Cycles DSB-initiated uops are delivered to the IDQ while the MS is busy
idq.ms_dsb_occur  event=0x79,cmask=1,edge=1,period=2000003,umask=0x10 Deliveries to the IDQ initiated by the DSB while the MS is busy
idq.ms_uops       event=0x79,period=2000003,umask=0x30                Uops delivered to the IDQ from the MS, by either DSB or MITE; set cmask=1 to count cycles
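The fp_comp_ops_exe and simd_fp_256 events above support an approximate FLOP count using the usual per-uop element widths (1 for scalar, 2 and 4 for 128-bit packed double/single, 4 and 8 for 256-bit packed double/single). A sketch with hypothetical perf readings as inputs; note these events count uops issued, so assists and replays can inflate the result slightly:

    # Sketch: approximate FLOPs from the FP uop events, weighting each
    # uop by the number of FP elements it operates on.
    def approx_flops(scalar_single, scalar_double,
                     packed128_single, packed128_double,
                     packed256_single, packed256_double):
        return (scalar_single + scalar_double
                + 4 * packed128_single + 2 * packed128_double
                + 8 * packed256_single + 4 * packed256_double)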
idq_uops_not_delivered.core | frontend | event=0x9c,period=2000003,umask=1 | Uops not delivered to the Resource Allocation Table (RAT) per thread when the backend of the machine is not stalled; counts issue-pipeline slots where no uop was delivered from the front end to the back end when there is no back-end stall
idq_uops_not_delivered.cycles_0_uops_deliv.core | frontend | event=0x9c,cmask=4,period=2000003,umask=1 | Cycles per thread when 4 or more uops are not delivered to the RAT when the backend is not stalled
idq_uops_not_delivered.cycles_le_1_uop_deliv.core | frontend | event=0x9c,cmask=3,period=2000003,umask=1 | Cycles per thread when 3 or more uops are not delivered to the RAT when the backend is not stalled
machine_clears.memory_ordering | memory | event=0xc3,period=100003,umask=2 | Number of machine clears due to memory-ordering conflicts
mem_trans_retired.load_latency_gt_N | memory | event=0xcd,umask=1 (Must be precise) | Loads with latency value above N cycles, sampled via PEBS; per-threshold encodings:
    N=4:   ldlat=0x4,   period=100003
    N=8:   ldlat=0x8,   period=50021
    N=16:  ldlat=0x10,  period=20011
    N=32:  ldlat=0x20,  period=100007
    N=64:  ldlat=0x40,  period=2003
    N=128: ldlat=0x80,  period=1009
    N=256: ldlat=0x100, period=503
    N=512: ldlat=0x200, period=101
mem_trans_retired.precise_store | memory | event=0xcd,period=2000003,umask=2 | Sample stores and collect the precise store operation via the PEBS record. PMC3 only (Must be precise)
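A small check on the load-latency rows above: the ldlat field is simply the latency threshold in cycles written in hex, and each threshold carries its own recommended sampling period. A sketch (the dict is transcribed from the table):

    # threshold -> (ldlat, period), from the mem_trans_retired rows above
    thresholds = {
        4: (0x4, 100003),   8: (0x8, 50021),    16: (0x10, 20011),
        32: (0x20, 100007), 64: (0x40, 2003),   128: (0x80, 1009),
        256: (0x100, 503),  512: (0x200, 101),
    }
    # ldlat in hex equals the decimal threshold for every row
    assert all(ldlat == n for n, (ldlat, _) in thresholds.items())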
misalign_mem_ref.stores | memory | event=5,period=2000003,umask=2 | Speculative cache-line-split store-address (STA) uops dispatched to the L1D
The offcore_response.* events below all use event=0xb7,period=100003,umask=1; only the offcore_rsp match value differs:
offcore_response.all_code_rd.llc_miss.dram | memory | offcore_rsp=0x300400244 | All demand & prefetch code reads that miss the LLC with the data returned from DRAM
offcore_response.all_data_rd.llc_miss.dram | memory | offcore_rsp=0x300400091 | All demand & prefetch data reads that miss the LLC with the data returned from DRAM
offcore_response.all_reads.llc_miss.dram | memory | offcore_rsp=0x3004003f7 | All data/code/RFO reads (demand & prefetch) that miss the LLC with the data returned from DRAM
offcore_response.data_in_socket.llc_miss.local_dram | memory | offcore_rsp=0x6004001b3 | Counts LLC replacements
offcore_response.demand_code_rd.llc_miss.dram | memory | offcore_rsp=0x300400004 | Demand code reads that miss the LLC with the data returned from DRAM
offcore_response.demand_data_rd.llc_miss.dram | memory | offcore_rsp=0x300400001 | Demand data reads that miss the LLC with the data returned from DRAM
page_walks.llc_miss | memory | event=0xbe,period=100003,umask=1 | Number of page walks of any kind that missed in the LLC
cpl_cycles.ring0_trans | other | event=0x5c,cmask=1,edge=1,period=100007,umask=1 | Number of intervals between processor halts while the thread is in ring 0
arith.fpu_div | pipeline | event=0x14,cmask=1,edge=1,period=100003,umask=4 | Divide operations executed
arith.fpu_div_active | pipeline | event=0x14,period=2000003,umask=1 | Cycles the divider is busy executing divide operations, including INT and FP. Set edge=1, cmask=1 to count the number of divides
br_inst_exec.all_conditional | pipeline | event=0x88,period=200003,umask=0xc1 | Speculative and retired macro-conditional branches
br_inst_exec.all_direct_jmp | pipeline | event=0x88,period=200003,umask=0xc2 | Speculative and retired macro-unconditional branches, excluding calls and indirects
br_inst_exec.all_direct_near_call | pipeline | event=0x88,period=200003,umask=0xd0 | Speculative and retired direct near calls
br_inst_exec.all_indirect_jump_non_call_ret | pipeline | event=0x88,period=200003,umask=0xc4 | Speculative and retired indirect branches, excluding calls and returns
br_inst_exec.nontaken_conditional | pipeline | event=0x88,period=200003,umask=0x41 | Not-taken macro-conditional branches
br_inst_exec.taken_conditional | pipeline | event=0x88,period=200003,umask=0x81 | Taken speculative and retired macro-conditional branches
br_inst_exec.taken_direct_jump | pipeline | event=0x88,period=200003,umask=0x82 | Taken speculative and retired macro-conditional branch instructions, excluding calls and indirects
br_inst_exec.taken_direct_near_call | pipeline | event=0x88,period=200003,umask=0x90 | Taken speculative and retired direct near calls
br_inst_exec.taken_indirect_jump_non_call_ret | pipeline | event=0x88,period=200003,umask=0x84 | Taken speculative and retired indirect branches, excluding calls and returns
br_inst_exec.taken_indirect_near_call | pipeline | event=0x88,period=200003,umask=0xa0 | Taken speculative and retired indirect calls
br_inst_exec.taken_indirect_near_return | pipeline | event=0x88,period=200003,umask=0x88 | Taken speculative and retired indirect branches with return mnemonic
br_inst_retired.conditional | pipeline | event=0xc4,period=400009,umask=1 | Conditional branch instructions retired (Precise event)
br_inst_retired.far_branch | pipeline | event=0xc4,period=100007,umask=0x40 | Far branch instructions retired
br_inst_retired.near_call | pipeline | event=0xc4,period=100007,umask=2 | Direct and indirect near call instructions retired (Precise event)
br_inst_retired.near_call_r3 | pipeline | event=0xc4,period=100007,umask=2 | Direct and indirect macro near call instructions retired, captured in ring 3 (Precise event)
br_inst_retired.near_return | pipeline | event=0xc4,period=100007,umask=8 | Return instructions retired (Precise event)
br_inst_retired.near_taken | pipeline | event=0xc4,period=400009,umask=0x20 | Taken branch instructions retired (Precise event)
br_misp_exec.all_conditional | pipeline | event=0x89,period=200003,umask=0xc1 | Speculative and retired mispredicted macro-conditional branches
br_misp_exec.all_indirect_jump_non_call_ret | pipeline | event=0x89,period=200003,umask=0xc4 | Mispredicted indirect branches, excluding calls and returns
br_misp_exec.nontaken_conditional | pipeline | event=0x89,period=200003,umask=0x41 | Not-taken speculative and retired mispredicted macro-conditional branches
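An illustrative use of the retired-branch rows above (my addition, not part of the table): pairing a br_misp_retired count with the matching br_inst_retired count gives the usual misprediction rate. A sketch with made-up counts:

    # Misprediction rate from a matched pair of retired-branch counters,
    # e.g. br_misp_retired.near_taken over br_inst_retired.near_taken.
    def mispredict_rate(mispredicted: int, retired: int) -> float:
        return mispredicted / retired if retired else 0.0

    # hypothetical counts read back from perf
    print(f"{mispredict_rate(42_000, 3_100_000):.2%}")  # 1.35%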
br_misp_exec.taken_conditional | pipeline | event=0x89,period=200003,umask=0x81 | Taken speculative and retired mispredicted macro-conditional branches
br_misp_exec.taken_indirect_jump_non_call_ret | pipeline | event=0x89,period=200003,umask=0x84 | Taken speculative and retired mispredicted indirect branches, excluding calls and returns
br_misp_exec.taken_indirect_near_call | pipeline | event=0x89,period=200003,umask=0xa0 | Taken speculative and retired mispredicted indirect calls
br_misp_exec.taken_return_near | pipeline | event=0x89,period=200003,umask=0x88 | Taken speculative and retired mispredicted indirect branches with return mnemonic
br_misp_retired.all_branches_pebs | pipeline | event=0xc5,period=400009,umask=4 | Mispredicted macro branch instructions retired (Must be precise)
br_misp_retired.near_taken | pipeline | event=0xc5,period=400009,umask=0x20 | Number of near branch instructions retired that were mispredicted and taken (Precise event)
cpu_clk_thread_unhalted.one_thread_active | pipeline | event=0x3c,period=2000003,umask=2 | XClk pulses when this thread is unhalted and the other thread is halted
cpu_clk_thread_unhalted.ref_xclk | pipeline | event=0x3c,period=2000003,umask=1 | Reference cycles when the thread is unhalted; increments at the XCLK frequency (100 MHz) when not halted
cpu_clk_thread_unhalted.ref_xclk_any | pipeline | event=0x3c,any=1,period=2000003,umask=1 | Reference cycles when at least one thread on the physical core is unhalted (counts at the 100 MHz rate)
cpu_clk_unhalted.one_thread_active | pipeline | event=0x3c,period=2000003,umask=2 | XClk pulses when this thread is unhalted and the other thread is halted
cpu_clk_unhalted.ref_tsc | pipeline | event=0,period=2000003,umask=3 | Reference cycles when the core is not in halt state
cpu_clk_unhalted.ref_xclk | pipeline | event=0x3c,period=2000003,umask=1 | Reference cycles when the thread is unhalted (counts at the 100 MHz rate)
cpu_clk_unhalted.ref_xclk_any | pipeline | event=0x3c,any=1,period=2000003,umask=1 | Reference cycles when at least one thread on the physical core is unhalted (counts at the 100 MHz rate)
cpu_clk_unhalted.thread | pipeline | event=0x3c,period=2000003 | Core cycles when the thread is not in halt state
cpu_clk_unhalted.thread_any | pipeline | event=0x3c,any=1,period=2000003 | Core cycles when at least one thread on the physical core is not in halt state
cpu_clk_unhalted.thread_p_any | pipeline | event=0x3c,any=1,period=2000003 | Core cycles when at least one thread on the physical core is not in halt state
cycle_activity.cycles_l1d_pending | pipeline | event=0xa3,cmask=8,period=2000003,umask=8 | Cycles with pending L1 cache-miss loads. Set AnyThread to count per core
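A common derivation from the clock rows above (my addition): unhalted core cycles run at the actual clock while ref_tsc runs at the base clock, so their ratio scaled by the base frequency estimates the average running frequency. A sketch with hypothetical counts:

    # Average unhalted frequency from cpu_clk_unhalted.thread and
    # cpu_clk_unhalted.ref_tsc, given the part's base frequency in GHz.
    def avg_frequency_ghz(thread_cycles: int, ref_tsc_cycles: int,
                          base_ghz: float) -> float:
        return base_ghz * thread_cycles / ref_tsc_cycles if ref_tsc_cycles else 0.0

    # e.g. 3.6e9 core cycles vs 2.7e9 reference cycles on a 2.7 GHz part
    print(avg_frequency_ghz(3_600_000_000, 2_700_000_000, 2.7))  # 3.6 (turbo)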
cycle_activity.cycles_l2_miss | pipeline | event=0xa3,cmask=1,period=2000003,umask=1 | Cycles while an L2 cache-miss load is outstanding
cycle_activity.cycles_l2_pending | pipeline | event=0xa3,cmask=1,period=2000003,umask=1 | Cycles with pending L2 cache-miss loads. Set AnyThread to count per core
cycle_activity.cycles_ldm_pending | pipeline | event=0xa3,cmask=2,period=2000003,umask=2 | Cycles with pending memory loads. Set AnyThread to count per core
cycle_activity.cycles_no_execute | pipeline | event=0xa3,cmask=4,period=2000003,umask=4 | Increments by 1 for every cycle with no execute for this thread; total execution stalls
cycle_activity.stalls_l2_miss | pipeline | event=0xa3,cmask=5,period=2000003,umask=5 | Execution stalls while an L2 cache-miss load is outstanding
cycle_activity.stalls_l2_pending | pipeline | event=0xa3,cmask=5,period=2000003,umask=5 | Execution stalls due to L2 cache misses; number of loads that missed L2
cycle_activity.stalls_ldm_pending | pipeline | event=0xa3,cmask=6,period=2000003,umask=6 | Execution stalls due to the memory subsystem
ild_stall.lcp | pipeline | event=0x87,period=2000003,umask=1 | Stalls caused by a length-changing prefix of the instruction
inst_retired.any | pipeline | event=0xc0,period=2000003 | Instructions retired from execution
inst_retired.any_p | pipeline | event=0xc0,period=2000003 | Number of instructions retired; general-counter (architectural) event
inst_retired.prec_dist | pipeline | event=0xc0,period=2000003,umask=1 | Precise instruction-retired event with hardware support to reduce the effect of the PEBS shadow in the IP distribution (Must be precise)
int_misc.recovery_cycles | pipeline | event=0xd,cmask=1,period=2000003,umask=3 | Cycles waiting for the checkpoints in the Resource Allocation Table (RAT) to be recovered after a nuke, for all cases except JEClear (e.g. whenever a ucode assist is needed: SSE exception, memory disambiguation, etc.)
int_misc.recovery_stalls_count | pipeline | event=0xd,cmask=1,edge=1,period=2000003,umask=3 | Occurrences of waiting for the checkpoints in the RAT to be recovered after a nuke, for the same cases as above
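Many rows above qualify a base event with cmask (count only cycles where at least N increments occur), inv (invert the cmask comparison), edge (count transitions) and any (count for all threads on the core). A hedged sketch of how these fields pack into the raw IA32_PERFEVTSEL-style config, assuming the standard bit layout (edge bit 18, any-thread bit 21, invert bit 23, counter mask bits 24-31):

    # Assemble a raw config from the fields used in the table above.
    def perfevtsel(event, umask, cmask=0, inv=0, edge=0, any_thread=0):
        return (event | (umask << 8) | (edge << 18) | (any_thread << 21)
                | (inv << 23) | (cmask << 24))

    # uops_executed.core_cycles_ge_1: event=0xb1, umask=2, cmask=1
    print(hex(perfevtsel(0xb1, 0x2, cmask=1)))   # 0x10002b1
    # uops_executed.core_cycles_none: same event, inv=1 (cycles with no uops)
    print(hex(perfevtsel(0xb1, 0x2, inv=1)))     # 0x8002b1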
ld_blocks.no_sr | pipeline | event=3,period=100003,umask=8 | Number of times split-load operations are temporarily blocked because all resources for handling the split accesses are in use
ld_blocks.store_forward | pipeline | event=3,period=100003,umask=2 | Cases when loads get a true Block-on-Store blocking code preventing store forwarding; loads blocked by overlap with a store buffer that cannot be forwarded
ld_blocks_partial.address_alias | pipeline | event=7,period=100003,umask=1 | False dependencies in the MOB due to partial compare on address
lsd.cycles_4_uops | pipeline | event=0xa8,cmask=4,period=2000003,umask=1 | Cycles 4 uops were delivered by the LSD but did not come from the decoder
lsd.cycles_active | pipeline | event=0xa8,cmask=1,period=2000003,umask=1 | Cycles uops were delivered by the LSD but did not come from the decoder
machine_clears.maskmov | pipeline | event=0xc3,period=100003,umask=0x20 | Number of executed Intel AVX masked load operations that refer to an illegal address range with the mask bits set to 0
machine_clears.smc | pipeline | event=0xc3,period=100003,umask=4 | Self-modifying code (SMC) machine clears detected
other_assists.any_wb_assist | pipeline | event=0xc1,period=100003,umask=0x80 | Number of times any microcode assist is invoked by hardware upon uop writeback
resource_stalls.any | pipeline | event=0xa2,period=2000003,umask=1 | Resource-related stall cycles; cycles allocation is stalled for a resource-related reason
resource_stalls.sb | pipeline | event=0xa2,period=2000003,umask=8 | Cycles stalled due to no store buffers available (not including draining from sync)
rs_events.empty_cycles | pipeline | event=0x5e,period=2000003,umask=1 | Cycles when the Reservation Station (RS) is empty for the thread
uops_dispatched_port.port_0 | pipeline | event=0xa1,period=2000003,umask=1 | Cycles per thread when uops are dispatched to port 0
uops_dispatched_port.port_0_core | pipeline | event=0xa1,any=1,period=2000003,umask=1 | Cycles per core when uops are dispatched to port 0
uops_dispatched_port.port_1 | pipeline | event=0xa1,period=2000003,umask=2 | Cycles per thread when uops are dispatched to port 1
uops_dispatched_port.port_1_core | pipeline | event=0xa1,any=1,period=2000003,umask=2 | Cycles per core when uops are dispatched to port 1
uops_dispatched_port.port_2 | pipeline | event=0xa1,period=2000003,umask=0xc | Cycles per thread when load or STA uops are dispatched to port 2
uops_dispatched_port.port_2_core | pipeline | event=0xa1,any=1,period=2000003,umask=0xc | Uops dispatched to port 2, loads and stores per core (speculative and retired)
uops_dispatched_port.port_3 | pipeline | event=0xa1,period=2000003,umask=0x30 | Cycles per thread when load or STA uops are dispatched to port 3
uops_dispatched_port.port_3_core | pipeline | event=0xa1,any=1,period=2000003,umask=0x30 | Cycles per core when load or STA uops are dispatched to port 3
uops_dispatched_port.port_4 | pipeline | event=0xa1,period=2000003,umask=0x40 | Cycles per thread when uops are dispatched to port 4
uops_dispatched_port.port_4_core | pipeline | event=0xa1,any=1,period=2000003,umask=0x40 | Cycles per core when uops are dispatched to port 4
uops_dispatched_port.port_5 | pipeline | event=0xa1,period=2000003,umask=0x80 | Cycles per thread when uops are dispatched to port 5
uops_dispatched_port.port_5_core | pipeline | event=0xa1,any=1,period=2000003,umask=0x80 | Cycles per core when uops are dispatched to port 5
uops_executed.core | pipeline | event=0xb1,period=2000003,umask=2 | Number of uops executed on the core; counts the total number of uops to be executed per core each cycle
uops_executed.core_cycles_ge_1 | pipeline | event=0xb1,cmask=1,period=2000003,umask=2 | Cycles at least 1 micro-op is executed from any thread on the physical core
uops_executed.core_cycles_ge_2 | pipeline | event=0xb1,cmask=2,period=2000003,umask=2 | Cycles at least 2 micro-ops are executed from any thread on the physical core
uops_executed.core_cycles_ge_3 | pipeline | event=0xb1,cmask=3,period=2000003,umask=2 | Cycles at least 3 micro-ops are executed from any thread on the physical core
uops_executed.core_cycles_ge_4 | pipeline | event=0xb1,cmask=4,period=2000003,umask=2 | Cycles at least 4 micro-ops are executed from any thread on the physical core
uops_executed.core_cycles_none | pipeline | event=0xb1,inv=1,period=2000003,umask=2 | Cycles with no micro-ops executed from any thread on the physical core
uops_executed.stall_cycles | pipeline | event=0xb1,cmask=1,inv=1,period=2000003,umask=1 | Cycles no uops were dispatched to be executed on this thread
uops_executed.thread | pipeline | event=0xb1,period=2000003,umask=1 | Number of uops to be executed per thread each cycle. Set Cmask = 1, INV = 1 to count stall cycles
uops_issued.any | pipeline | event=0xe,period=2000003,umask=1 | Uops that the Resource Allocation Table (RAT) issues to the Reservation Station (RS); increments each cycle by the number of uops issued by the RAT to the RS. Set Cmask = 1, Inv = 1, Any = 1 to count stalled cycles of this core
uops_issued.core_stall_cycles | pipeline | event=0xe,any=1,cmask=1,inv=1,period=2000003,umask=1 | Cycles when the RAT does not issue uops to the RS for all threads
uops_issued.flags_merge | pipeline | event=0xe,period=2000003,umask=0x10 | Number of flags-merge uops allocated; such uops add delay
uops_issued.slow_lea | pipeline | event=0xe,period=2000003,umask=0x20 | Number of slow-LEA (or similar) uops allocated; a uop is generally considered SlowLea if it has 3 sources (e.g. 2 sources + immediate), whether or not it results from an LEA instruction
uops_issued.stall_cycles | pipeline | event=0xe,cmask=1,inv=1,period=2000003,umask=1 | Cycles when the RAT does not issue uops to the RS for the thread
uops_retired.all | pipeline | event=0xc2,period=2000003,umask=1 | Retired uops (Precise event)
uops_retired.retire_slots | pipeline | event=0xc2,period=2000003,umask=2 | Retirement slots used (Precise event)
uops_retired.total_cycles | pipeline | event=0xc2,cmask=10,inv=1,period=2000003,umask=1 | Cycles with fewer than 10 actually retired uops
unc_arb_coh_trk_occupancy.all | uncore interconnect | event=0x83,umask=1 | Cycles weighted by the number of requests pending in the Coherency Tracker
unc_arb_trk_occupancy.all | uncore interconnect | event=0x80,umask=1 | Cycles weighted by the number of requests waiting for data returning from the memory controller; accounts for coherent and non-coherent requests initiated by IA cores, processor graphics units, or the LLC
unc_arb_trk_occupancy.cycles_over_half_full | uncore interconnect | event=0x80,cmask=10,umask=1 | Cycles with at least half of the outstanding requests waiting for data return from the memory controller; accounts for coherent and non-coherent requests initiated by IA cores, processor graphics units, or the LLC
unc_arb_trk_requests.evictions | uncore interconnect | event=0x81,umask=0x80 | Number of LLC evictions allocated
unc_arb_trk_requests.writes | uncore interconnect | event=0x81,umask=0x20 | Number of allocated write entries, including full, partial, and LLC evictions
unc_clock.socket | uncore interconnect | event=0xff | 48-bit fixed counter counting UCLK cycles
dtlb_load_misses.large_page_walk_completed | virtual memory | event=8,period=100003,umask=0x88 | Page walk for a large page completed for a demand load
dtlb_load_misses.miss_causes_a_walk | virtual memory | event=8,period=100003,umask=0x81 | Demand-load misses in all translation lookaside buffer (TLB) levels that cause a page walk of any page size
dtlb_load_misses.stlb_hit | virtual memory | event=0x5f,period=100003,umask=4 | Load operations that miss the first DTLB level but hit the second and do not cause page walks
dtlb_load_misses.walk_completed | virtual memory | event=8,period=100003,umask=0x82 | Demand-load misses in all TLB levels that cause a completed page walk of any page size
dtlb_load_misses.walk_duration | virtual memory | event=8,period=2000003,umask=0x84 | Cycles the page miss handler (PMH) is busy with a walk due to demand loads
dtlb_store_misses.stlb_hit | virtual memory | event=0x49,period=100003,umask=0x10 | Store operations that miss the first TLB level but hit the second and do not cause page walks
dtlb_store_misses.walk_completed | virtual memory | event=0x49,period=100003,umask=2 | Store misses in all DTLB levels that cause completed page walks of any page size (4K/2M/4M/1G)
dtlb_store_misses.walk_duration | virtual memory | event=0x49,period=2000003,umask=4 | Cycles the PMH is busy with page walks for stores
ept.walk_cycles | virtual memory | event=0x4f,period=2000003,umask=0x10 | Cycle count for an Extended Page Table walk. The Extended Page Directory cache is used by virtual-machine operating systems, while guest operating systems use the standard TLB caches
itlb.itlb_flush | virtual memory | event=0xae,period=100007,umask=1 | Flushes of Instruction TLB (ITLB) pages; includes 4K/2M/4M pages
itlb_misses.large_page_walk_completed | virtual memory | event=0x85,period=100003,umask=0x80 | Completed page walks in the ITLB due to STLB load misses for large pages
itlb_misses.miss_causes_a_walk | virtual memory | event=0x85,period=100003,umask=1 | Misses at all ITLB levels that cause page walks
itlb_misses.stlb_hit | virtual memory | event=0x85,period=100003,umask=0x10 | Operations that miss the first ITLB level but hit the second and do not cause any page walk; number of cache-load STLB hits, no page walk
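An illustrative derived metric (my addition, not from the table): the walk_duration rows count cycles the page miss handler is busy, so summing them and dividing by unhalted core cycles estimates the fraction of time lost to page-table walks. A sketch with hypothetical counts:

    # Fraction of core cycles spent in hardware page walks, from the
    # *_misses.walk_duration events above plus cpu_clk_unhalted.thread.
    def page_walk_fraction(dtlb_load_walk: int, dtlb_store_walk: int,
                           itlb_walk: int, core_cycles: int) -> float:
        walks = dtlb_load_walk + dtlb_store_walk + itlb_walk
        return walks / core_cycles if core_cycles else 0.0

    print(page_walk_fraction(1_200_000, 300_000, 150_000, 90_000_000))  # ~0.018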
itlb_misses.walk_completed | virtual memory | event=0x85,period=100003,umask=2 | Misses in all ITLB levels that cause completed page walks
itlb_misses.walk_duration | virtual memory | event=0x85,period=2000003,umask=4 | Cycles the PMH is busy with a walk
tlb_flush.dtlb_thread | virtual memory | event=0xbd,period=100007,umask=1 | DTLB flush attempts of the thread-specific entries
tlb_flush.stlb_any | virtual memory | event=0xbd,period=100007,umask=0x20 | Number of STLB flush attempts
mem_load_uops_llc_miss_retired.local_dram | cache | event=0xd3,period=100007,umask=3 | Retired load uops whose data source was local DRAM (snoop not needed, snoop miss, or snoop hit with data not forwarded)
mem_load_uops_llc_miss_retired.remote_dram | cache | event=0xd3,period=100007,umask=0xc | Retired load uops whose data source was remote DRAM (snoop not needed, snoop miss, or snoop hit with data not forwarded)
mem_load_uops_llc_miss_retired.remote_fwd | cache | event=0xd3,period=100007,umask=0x20 | Data forwarded from a remote cache
mem_load_uops_llc_miss_retired.remote_hitm | cache | event=0xd3,period=100007,umask=0x10 | Remote cache HITM
The offcore_response.* events below again use event=0xb7,period=100003,umask=1; only the offcore_rsp match value differs:
offcore_response.all_data_rd.llc_hit.snoop_miss | cache | offcore_rsp=0x2003c0091 | Demand & prefetch data reads that hit in the LLC and the sibling-core snoop returned a clean response
offcore_response.all_pf_data_rd.llc_hit.any_response | cache | offcore_rsp=0x3f803c0090 | All prefetch data reads that hit the LLC
offcore_response.all_pf_data_rd.llc_hit.hitm_other_core | cache | offcore_rsp=0x10003c0090 | Prefetch data reads that hit in the LLC where the snoop to one of the sibling cores hits the line in M state and the line is forwarded
offcore_response.all_pf_data_rd.llc_hit.hit_other_core_no_fwd | cache | offcore_rsp=0x4003c0090 | Prefetch data reads that hit in the LLC where snoops to sibling cores hit in E/S state and the line is not forwarded
offcore_response.all_pf_data_rd.llc_hit.no_snoop_needed | cache | offcore_rsp=0x1003c0090 | Prefetch data reads that hit in the LLC where sibling-core snoops are not needed, as either the core-valid bit is not set or the shared line is present in multiple cores
offcore_response.all_pf_data_rd.llc_hit.snoop_miss | cache | offcore_rsp=0x2003c0090 | Prefetch data reads that hit in the LLC and the sibling-core snoop returned a clean response
offcore_response.all_reads.llc_hit.any_response | cache | offcore_rsp=0x3f803c03f7 | All data/code/RFO reads (demand & prefetch) that hit in the LLC
offcore_response.all_reads.llc_hit.hitm_other_core | cache | offcore_rsp=0x10003c03f7 | All data/code/RFO reads (demand & prefetch) that hit in the LLC where the snoop to one of the sibling cores hits the line in M state and the line is forwarded
offcore_response.all_reads.llc_hit.hit_other_core_no_fwd | cache | offcore_rsp=0x4003c03f7 | All data/code/RFO reads (demand & prefetch) that hit in the LLC where snoops to sibling cores hit in E/S state and the line is not forwarded
offcore_response.all_reads.llc_hit.no_snoop_needed | cache | offcore_rsp=0x1003c03f7 | All data/code/RFO reads (demand & prefetch) that hit in the LLC where sibling-core snoops are not needed, as either the core-valid bit is not set or the shared line is present in multiple cores
offcore_response.all_reads.llc_hit.snoop_miss | cache | offcore_rsp=0x2003c03f7 | All data/code/RFO reads (demand & prefetch) that hit in the LLC and the sibling-core snoop returned a clean response
offcore_response.demand_data_rd.llc_hit.snoop_miss | cache | offcore_rsp=0x2003c0001 | Demand data reads that hit in the LLC and the sibling-core snoop returned a clean response
offcore_response.other.lru_hints | cache | offcore_rsp=0x803c8000 | L2 hints sent to the LLC to keep a line from being evicted out of the core caches
offcore_response.other.portio_mmio_uc | cache | offcore_rsp=0x23ffc08000 | Miscellaneous accesses, including port I/O, MMIO and uncacheable memory accesses
offcore_response.pf_l2_code_rd.llc_hit.any_response | cache | offcore_rsp=0x3f803c0040 | All prefetch (bringing data to L2) code reads that hit in the LLC
offcore_response.pf_l2_data_rd.llc_hit.any_response | cache | offcore_rsp=0x3f803c0010 | Prefetch (to L2) data reads that hit in the LLC
offcore_response.pf_l2_data_rd.llc_hit.hitm_other_core | cache | offcore_rsp=0x10003c0010 | Prefetch (to L2) data reads that hit in the LLC where the snoop to one of the sibling cores hits the line in M state and the line is forwarded
offcore_response.pf_l2_data_rd.llc_hit.hit_other_core_no_fwd | cache | offcore_rsp=0x4003c0010 | Prefetch (to L2) data reads that hit in the LLC where snoops to sibling cores hit in E/S state and the line is not forwarded
offcore_response.pf_l2_data_rd.llc_hit.no_snoop_needed | cache | offcore_rsp=0x1003c0010 | Prefetch (to L2) data reads that hit in the LLC where sibling-core snoops are not needed, as either the core-valid bit is not set or the shared line is present in multiple cores
offcore_response.pf_l2_data_rd.llc_hit.snoop_miss | cache | offcore_rsp=0x2003c0010 | Prefetch (to L2) data reads that hit in the LLC and the snoops sent to sibling cores returned a clean response
offcore_response.pf_llc_code_rd.llc_hit.any_response | cache | offcore_rsp=0x3f803c0200 | All prefetch (bringing data to LLC only) code reads that hit in the LLC
offcore_response.pf_llc_data_rd.llc_hit.any_response | cache | offcore_rsp=0x3f803c0080 | Prefetch (to LLC only) data reads that hit in the LLC
offcore_response.pf_llc_data_rd.llc_hit.hitm_other_core | cache | offcore_rsp=0x10003c0080 | Prefetch (to LLC only) data reads that hit in the LLC where the snoop to one of the sibling cores hits the line in M state and the line is forwarded
offcore_response.pf_llc_data_rd.llc_hit.hit_other_core_no_fwd | cache | offcore_rsp=0x4003c0080 | Prefetch (to LLC only) data reads that hit in the LLC where snoops to sibling cores hit in E/S state and the line is not forwarded
offcore_response.pf_llc_data_rd.llc_hit.no_snoop_needed | cache | offcore_rsp=0x1003c0080 | Prefetch (to LLC only) data reads that hit in the LLC where sibling-core snoops are not needed, as either the core-valid bit is not set or the shared line is present in multiple cores
offcore_response.pf_llc_data_rd.llc_hit.snoop_miss | cache | offcore_rsp=0x2003c0080 | Prefetch (to LLC only) data reads that hit in the LLC and the snoops sent to sibling cores returned a clean response
offcore_response.all_code_rd.llc_miss.any_response | memory | offcore_rsp=0x3fffc00244 | All demand & prefetch code reads that miss the LLC
offcore_response.all_code_rd.llc_miss.remote_dram | memory | offcore_rsp=0x67f800244 | All demand & prefetch code reads that miss the LLC with the data returned from remote DRAM
offcore_response.all_code_rd.llc_miss.remote_hit_forward | memory | offcore_rsp=0x87f800244 | All demand & prefetch code reads that miss the LLC with the data forwarded from a remote cache
offcore_response.all_data_rd.llc_miss.any_response | memory | offcore_rsp=0x3fffc20091 | All demand & prefetch data reads that miss the LLC
offcore_response.all_reads.llc_miss.any_response | memory | offcore_rsp=0x3fffc203f7 | All data/code/RFO reads (demand & prefetch) that miss the LLC
offcore_response.all_reads.llc_miss.local_dram | memory | offcore_rsp=0x6004003f7 | All data/code/RFO reads (demand & prefetch) that miss the LLC with the data returned from local DRAM
offcore_response.all_reads.llc_miss.remote_hitm | memory | offcore_rsp=0x107fc003f7 | All data/code/RFO reads (demand & prefetch) that miss the LLC where the data is found in M state in a remote cache and forwarded from there
offcore_response.all_reads.llc_miss.remote_hit_forward | memory | offcore_rsp=0x87f8203f7 | All data/code/RFO reads (demand & prefetch) that miss the LLC with the data forwarded from a remote cache
offcore_response.demand_code_rd.llc_miss.any_response | memory | offcore_rsp=0x3fffc20004 | All demand code reads that miss the LLC
offcore_response.demand_code_rd.llc_miss.local_dram | memory | offcore_rsp=0x600400004 | All demand code reads that miss the LLC with the data returned from local DRAM
offcore_response.demand_code_rd.llc_miss.remote_dram | memory | offcore_rsp=0x67f800004 | All demand code reads that miss the LLC with the data returned from remote DRAM
offcore_response.demand_code_rd.llc_miss.remote_hitm | memory | offcore_rsp=0x107fc00004 | All demand code reads that miss the LLC where the data is found in M state in a remote cache and forwarded from there
offcore_response.demand_code_rd.llc_miss.remote_hit_forward | memory | offcore_rsp=0x87f820004 | All demand code reads that miss the LLC with the data forwarded from a remote cache
offcore_response.demand_data_rd.llc_miss.any_dram | memory | offcore_rsp=0x67fc00001 | Demand data reads that miss the LLC with the data returned from remote & local DRAM
offcore_response.demand_data_rd.llc_miss.any_response | memory | offcore_rsp=0x3fffc20001 | Demand data reads that miss in the LLC
offcore_response.demand_data_rd.llc_miss.local_dram | memory | offcore_rsp=0x600400001 | Demand data reads that miss the LLC with the data returned from local DRAM
offcore_response.demand_data_rd.llc_miss.remote_dram | memory | offcore_rsp=0x67f800001 | Demand data reads that miss the LLC with the data returned from remote DRAM
offcore_response.demand_data_rd.llc_miss.remote_hitm | memory | offcore_rsp=0x107fc00001 | Demand data reads that miss the LLC where the data is found in M state in a remote cache and forwarded from there
offcore_response.demand_data_rd.llc_miss.remote_hit_forward | memory | offcore_rsp=0x87f820001 | Demand data reads that miss the LLC with the data forwarded from a remote cache
offcore_response.demand_rfo.llc_miss.remote_hitm | memory | offcore_rsp=0x107fc20002 | All demand data writes (RFOs) that miss the LLC where the data is found in M state in a remote cache and forwarded from there
offcore_response.pf_l2_code_rd.llc_miss.any_response | memory | offcore_rsp=0x3fffc20040 | All prefetch (to L2) code reads that miss the LLC with the data returned from remote & local DRAM
offcore_response.pf_l2_data_rd.llc_miss.any_dram | memory | offcore_rsp=0x67fc00010 | Prefetch (to L2) data reads that miss the LLC with the data returned from remote & local DRAM
offcore_response.pf_l2_data_rd.llc_miss.any_response | memory | offcore_rsp=0x3fffc20010 | Prefetch (to L2) data reads that miss in the LLC
offcore_response.pf_l2_data_rd.llc_miss.local_dram | memory | offcore_rsp=0x600400010 | Prefetch (to L2) data reads that miss the LLC with the data returned from local DRAM
offcore_response.pf_l2_data_rd.llc_miss.remote_dram | memory | offcore_rsp=0x67f800010 | Prefetch (to L2) data reads that miss the LLC with the data returned from remote DRAM
offcore_response.pf_l2_data_rd.llc_miss.remote_hitm | memory | offcore_rsp=0x107fc00010 | Prefetch (to L2) data reads that miss the LLC where the data is found in M state in a remote cache and forwarded from there
offcore_response.pf_l2_data_rd.llc_miss.remote_hit_forward | memory | offcore_rsp=0x87f820010 | Prefetch (to L2) data reads that miss the LLC with the data forwarded from a remote cache
offcore_response.pf_llc_code_rd.llc_miss.any_response | memory | offcore_rsp=0x3fffc20200 | All prefetch (to LLC only) code reads that miss in the LLC
offcore_response.pf_llc_data_rd.llc_miss.any_response | memory | offcore_rsp=0x3fffc20080 | Prefetch (to LLC only) data reads that miss in the LLC
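A hedged sketch of how an offcore_response row above becomes a usable perf event: the offcore_rsp value is programmed into the offcore response match MSR, and perf exposes it as the 'offcore_rsp' term on the core PMU (availability of the term depends on kernel and machine, so verify with `perf list --details` on the target; the helper is mine):

    # Build a perf event string from an offcore_response row in the table.
    def offcore_event(offcore_rsp: int, event: int = 0xb7, umask: int = 0x1) -> str:
        return f"cpu/event={event:#x},umask={umask:#x},offcore_rsp={offcore_rsp:#x}/"

    # offcore_response.demand_data_rd.llc_miss.local_dram from the table:
    print(offcore_event(0x600400001))
    # -> cpu/event=0xb7,umask=0x1,offcore_rsp=0x600400001/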
unc_c_llc_lookup.* | uncore cache | event=0x34 | Counts the number of times the LLC was accessed, including code, data, prefetches and hints coming from L2. Numerous filters are available; note the non-standard filtering equation. This event will count requests that look up the cache multiple times, with multiple increments. One must ALWAYS set filter mask bit 0 and select a state or states to match; otherwise the event will count nothing. CBoGlCtrl[22:17] bits correspond to the [M'FMESI] state. Subevents:
    unc_c_llc_lookup.any (umask=0x11): any request originating from the IPQ or IRQ; does not include lookups originating from the ISMQ
    unc_c_llc_lookup.data_read (umask=3): read transactions
    unc_c_llc_lookup.nid (umask=0x41): qualify one of the other subevents by the target NID, programmed in Cn_MSR_PMON_BOX_FILTER.nid; in conjunction with STATE = I it is possible to monitor misses to specific NIDs in the system
    unc_c_llc_lookup.remote_snoop (umask=9): only snoop requests coming from the remote socket(s) through the IPQ
    unc_c_llc_lookup.write (umask=5): writeback transactions from L2 to the LLC; includes all write transactions, both cacheable and UC
unc_c_llc_victims.miss | uncore cache | event=0x37,umask=8 | Lines victimized on a fill; can be filtered by the state the line was in
unc_c_llc_victims.s_state | uncore cache | event=0x37,umask=4 | Lines victimized on a fill that were in S state
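A hedged illustration of the filtering requirement spelled out above: the state filter (CBoGlCtrl[22:17], the [M'FMESI] bits) must be non-zero or LLC_LOOKUP counts nothing. Recent kernels expose this as a 'filter_state' term on the Cbo uncore PMUs; the PMU names and term availability vary by kernel and platform, so check /sys/bus/event_source/devices/ before relying on this sketch:

    # Build an LLC_LOOKUP event string for one Cbo, with an explicit state
    # filter (0x3f matches all [M'FMESI] states). PMU name is an assumption.
    def cbox_llc_lookup(box: int, umask: int, filter_state: int) -> str:
        return (f"uncore_cbox_{box}/event=0x34,"
                f"umask={umask:#x},filter_state={filter_state:#x}/")

    # Data-read lookups (umask=3) matching any state on Cbo 0:
    print(cbox_llc_lookup(0, 0x3, 0x3f))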
unc_c_ring_ad_used.* / unc_c_ring_ak_used.* / unc_c_ring_bl_used.* | uncore cache | event=0x1b (AD) / 0x1c (AK) / 0x1d (BL) | Counts the number of cycles that the given ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. There are really two rings in JKT: a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring; on the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the second half are on the right side, so (for example) in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.
The AD (event=0x1b) and AK (event=0x1c) rings use the same umask encodings for direction and virtual-ring polarity:
    .up_vr0_even (umask=1): Up and Even ring polarity on Virtual Ring 0
    .up_vr0_odd (umask=2): Up and Odd ring polarity on Virtual Ring 0
    .up_vr1_even (umask=0x10): Up and Even ring polarity on Virtual Ring 1
    .up_vr1_odd (umask=0x20): Up and Odd ring polarity on Virtual Ring 1
    .up (umask=0x33): Up, any polarity
    .down_vr0_even (umask=4): Down and Even ring polarity on Virtual Ring 0
    .down_vr0_odd (umask=8): Down and Odd ring polarity on Virtual Ring 0
    .down_vr1_even (umask=0x40): Down and Even ring polarity on Virtual Ring 1
    .down_vr1_odd (umask=0x80): Down and Odd ring polarity on Virtual Ring 1
    .down (umask=0xcc): Down, any polarity
    .cw (umask=3): Clockwise
    .ccw (umask=0xc): Counterclockwise
unc_c_ring_bl_used.ccw | uncore cache | event=0x1d,umask=0xc | BL Ring in Use; Counterclockwise (same shared ring description as above)
unc_c_ring_bl_used.ccw | uncore cache | event=0x1d,umask=0xc | BL Ring in Use; Counterclockwise
unc_c_ring_bl_used.cw | uncore cache | event=0x1d,umask=0x3 | BL Ring in Use; Clockwise
unc_c_ring_bl_used.down | uncore cache | event=0x1d,umask=0xcc | BL Ring in Use; Down
unc_c_ring_bl_used.down_vr0_even | uncore cache | event=0x1d,umask=0x4 | BL Ring in Use; Down and Even on VRing 0
unc_c_ring_bl_used.down_vr0_odd | uncore cache | event=0x1d,umask=0x8 | BL Ring in Use; Down and Odd on VRing 0
unc_c_ring_bl_used.down_vr1_even | uncore cache | event=0x1d,umask=0x40 | BL Ring in Use; Down and Even on VRing 1
unc_c_ring_bl_used.down_vr1_odd | uncore cache | event=0x1d,umask=0x80 | BL Ring in Use; Down and Odd on VRing 1
unc_c_ring_bl_used.up | uncore cache | event=0x1d,umask=0x33 | BL Ring in Use; Up
unc_c_ring_bl_used.up_vr0_even | uncore cache | event=0x1d,umask=0x1 | BL Ring in Use; Up and Even on VRing 0
unc_c_ring_bl_used.up_vr0_odd | uncore cache | event=0x1d,umask=0x2 | BL Ring in Use; Up and Odd on VRing 0
unc_c_ring_bl_used.up_vr1_even | uncore cache | event=0x1d,umask=0x10 | BL Ring in Use; Up and Even on VRing 1
unc_c_ring_bl_used.up_vr1_odd | uncore cache | event=0x1d,umask=0x20 | BL Ring in Use; Up and Odd on VRing 1
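Since each ring-used event above counts cycles the ring is busy at one ring stop, a simple utilization figure is busy cycles divided by total uncore cycles over the same interval. A minimal sketch; the uncore cycle counter used as the denominator is an assumption (any clockticks event sampled over the same window would do), and the numbers are illustrative:

    def ring_utilization(ring_used_cycles: int, uncore_cycles: int) -> float:
        """Fraction of cycles the ring was in use at this ring stop.

        ring_used_cycles: a unc_c_ring_*_used.* reading (e.g. event=0x1b,umask=0x2)
        uncore_cycles:    uncore clock cycles over the same interval
                          (assumed companion counter, not defined above)
        """
        return ring_used_cycles / uncore_cycles if uncore_cycles else 0.0

    # 1.2e9 busy cycles out of 2.0e9 uncore cycles -> 60% utilization
    print(f"{ring_utilization(1_200_000_000, 2_000_000_000):.0%}")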
unc_c_ring_bounces.ad_irq | uncore cache | event=0x5,umask=0x2 | Number of LLC responses that bounced on the Ring
unc_c_ring_bounces.ak | uncore cache | event=0x5,umask=0x4 | Number of LLC responses that bounced on the Ring; Acknowledgements to core
unc_c_ring_bounces.ak_core | uncore cache | event=0x5,umask=0x2 | Number of LLC responses that bounced on the Ring: Acknowledgements to core
unc_c_ring_bounces.bl | uncore cache | event=0x5,umask=0x8 | Number of LLC responses that bounced on the Ring; Data Responses to core
unc_c_ring_bounces.bl_core | uncore cache | event=0x5,umask=0x4 | Number of LLC responses that bounced on the Ring: Data Responses to core
unc_c_ring_bounces.iv_core | uncore cache | event=0x5,umask=0x8 | Number of LLC responses that bounced on the Ring: Snoops of processor's cache

unc_c_ring_iv_used.any | uncore cache | event=0x1e,umask=0xf | IV Ring in Use; Any
unc_c_ring_iv_used.down | uncore cache | event=0x1e,umask=0xcc | IV Ring in Use; Down
unc_c_ring_iv_used.up | uncore cache | event=0x1e,umask=0x33 | IV Ring in Use; Up
These count the number of cycles that the IV ring is in use at this ring stop, with the same inclusion rules as the ring-used events above; the umask filters for any, Down, or Up polarity respectively.

unc_c_ring_sink_starved.ad_ipq | uncore cache | event=0x6,umask=0x2
unc_c_ring_sink_starved.ad_irq | uncore cache | event=0x6,umask=0x1
unc_c_ring_sink_starved.iv | uncore cache | event=0x6,umask=0x10
unc_c_ring_src_thrtl | uncore cache | event=0x7

unc_c_rxr_ext_starved.prq | uncore cache | event=0x12,umask=0x4 | Ingress Arbiter Blocking Cycles: cycles in which the IRQ is blocking the ingress queue and causing starvation.

unc_c_rxr_inserts.irq_rejected | uncore cache | event=0x13,umask=0x2 | Ingress Allocations: IRQ Rejected. Counts the number of allocations per cycle into the specified Ingress queue.

unc_c_rxr_inserts.vfifo | uncore cache | event=0x13,umask=0x10 | Ingress Allocations; VFIFO. Counts allocations into the IRQ Ordering FIFO. In JKT it is necessary to keep IO requests in order, so they are allocated into an ordering FIFO that sits next to the IRQ and must be satisfied from the FIFO in order (with respect to each other). This event, in conjunction with the Occupancy Accumulator event, can be used to calculate average lifetime in the FIFO. Transactions are allocated into the FIFO as soon as they enter the Cachebo (and the IRQ) and are deallocated from the FIFO as soon as they are deallocated from the IRQ.

unc_c_rxr_ismq_retry.wb_credits | uncore cache | event=0x33,umask=0x80 | ISMQ Retries; No WB Credits. Counts the number of times a transaction flowing through the ISMQ had to retry. Transactions pass through the ISMQ as responses to requests that already exist in the CBo; some examples include data returns and snoop responses coming back from the cores. This subevent counts retries of writes to local memory due to a lack of HT WB credits.
unc_c_rxr_occupancy.irq_rejected | uncore cache | event=0x11,umask=0x2 | Ingress Occupancy: IRQ Rejected. Counts the number of entries in the specified Ingress queue in each cycle.

unc_c_rxr_occupancy.vfifo | uncore cache | event=0x11,umask=0x10 | Ingress Occupancy; VFIFO. Accumulates the number of used entries in the IRQ Ordering FIFO in each cycle (see unc_c_rxr_inserts.vfifo above for how this FIFO is used). In conjunction with the Allocations event it can be used to calculate average lifetime in the FIFO, and in conjunction with a Not Empty event it can be used to calculate average queue occupancy.

unc_h_bt_bypass | uncore cache | event=0x52 | BT Bypass: number of transactions that bypass the BT (FIFO) to HT.

unc_h_bt_cycles_ne.local | uncore cache | event=0x42,umask=0x1 | BT Cycles Not Empty: Local
unc_h_bt_cycles_ne.remote | uncore cache | event=0x42,umask=0x2 | BT Cycles Not Empty: Remote
Cycles the Backup Tracker (BT) is not empty. The BT is the actual HOM tracker in IVT.

unc_h_bt_occupancy.local | uncore cache | event=0x43,umask=0x1 | BT Occupancy; Local
unc_h_bt_occupancy.reads_local | uncore cache | event=0x43,umask=0x4 | BT Occupancy; Reads Local
unc_h_bt_occupancy.reads_remote | uncore cache | event=0x43,umask=0x8 | BT Occupancy; Reads Remote
unc_h_bt_occupancy.remote | uncore cache | event=0x43,umask=0x2 | BT Occupancy; Remote
unc_h_bt_occupancy.writes_local | uncore cache | event=0x43,umask=0x10 | BT Occupancy; Writes Local
unc_h_bt_occupancy.writes_remote | uncore cache | event=0x43,umask=0x20 | BT Occupancy; Writes Remote
Each subevent accumulates the occupancy of the HA BT pool in every cycle. This can be used with the not-empty stat to calculate average queue occupancy, or with the allocations stat to calculate average queue latency. HA BTs are allocated as soon as a request enters the HA and are released after the snoop response and data return (or post, in the case of a write) and the response is returned on the ring.
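The BT occupancy description above spells out two derived metrics: average queue occupancy (accumulated occupancy divided by not-empty cycles) and average queue latency (accumulated occupancy divided by allocations). A minimal sketch with illustrative numbers; the allocations counter is the assumed companion event the description refers to:

    def avg_bt_occupancy(occupancy_sum: int, cycles_not_empty: int) -> float:
        """Average BT entries in use while the pool is non-empty.

        occupancy_sum:    a unc_h_bt_occupancy.* reading (event=0x43)
        cycles_not_empty: a unc_h_bt_cycles_ne.* reading (event=0x42)
        """
        return occupancy_sum / cycles_not_empty if cycles_not_empty else 0.0

    def avg_bt_latency(occupancy_sum: int, allocations: int) -> float:
        """Average cycles a request holds a BT entry (Little's law).

        allocations: companion BT allocations counter (assumed, per the
                     description above; not listed in this table)
        """
        return occupancy_sum / allocations if allocations else 0.0

    print(avg_bt_occupancy(6_400_000, 400_000))  # -> 16 entries on average
    print(avg_bt_latency(6_400_000, 50_000))     # -> 128 cycles per request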
unc_h_conflict_cycles.ackcnflts | uncore cache | event=0xb,umask=0x8 | Conflict Checks; Acknowledge Conflicts. Counts the number of AckCnflts.
unc_h_conflict_cycles.cmp_fwds | uncore cache | event=0xb,umask=0x10 | Conflict Checks; Cmp Fwds. Counts the number of Cmp_Fwd messages; this gives the number of late conflicts.
unc_h_conflict_cycles.conflict | uncore cache | event=0xb,umask=0x2 | Conflict Checks; Conflict Detected. Counts the number of cycles spent handling conflicts.
unc_h_conflict_cycles.last | uncore cache | event=0xb,umask=0x4 | Conflict Checks; Last in conflict chain. Counts every last conflictor in a conflict chain. Can be used to compute the average conflict chain length as (#AckCnflts / #LastConflictor) + 1, which gives a feel for conflict chain lengths while analyzing lock kernels.

unc_h_directory_lookup.any | uncore cache | event=0xc,umask=0x10 | Directory Lookups: Any state
unc_h_directory_lookup.snoop_a | uncore cache | event=0xc,umask=0x8 | Directory Lookups: Snoop A
unc_h_directory_lookup.snoop_s | uncore cache | event=0xc,umask=0x2 | Directory Lookups: Snoop S
unc_h_directory_lookup.state_a | uncore cache | event=0xc,umask=0x80 | Directory Lookups: A State
unc_h_directory_lookup.state_i | uncore cache | event=0xc,umask=0x20 | Directory Lookups: I State
unc_h_directory_lookup.state_s | uncore cache | event=0xc,umask=0x40 | Directory Lookups: S State
Each counts the number of transactions that looked up the directory, filtered by requests that had to snoop versus those that did not, and by directory state.
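The unc_h_conflict_cycles.last description above gives the conflict-chain formula directly: average chain length is (#AckCnflts / #LastConflictor) + 1. A quick worked sketch of that arithmetic:

    def avg_conflict_chain_length(ackcnflts: int, last_conflictors: int) -> float:
        """Average conflict chain length, per the unc_h_conflict_cycles.last formula.

        ackcnflts:        unc_h_conflict_cycles.ackcnflts (event=0xb,umask=0x8)
        last_conflictors: unc_h_conflict_cycles.last      (event=0xb,umask=0x4)
        """
        if last_conflictors == 0:
            return 0.0
        return ackcnflts / last_conflictors + 1

    # 30,000 AckCnflts across 10,000 chains -> average chain length of 4
    print(avg_conflict_chain_length(30_000, 10_000))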
unc_h_directory_update.a2i | uncore cache | event=0xd,umask=0x20 | Directory Updates: A2I
unc_h_directory_update.a2s | uncore cache | event=0xd,umask=0x40 | Directory Updates: A2S
unc_h_directory_update.i2a | uncore cache | event=0xd,umask=0x4 | Directory Updates: I2A
unc_h_directory_update.i2s | uncore cache | event=0xd,umask=0x2 | Directory Updates: I2S
unc_h_directory_update.s2a | uncore cache | event=0xd,umask=0x10 | Directory Updates: S2A
unc_h_directory_update.s2i | uncore cache | event=0xd,umask=0x8 | Directory Updates: S2I
Each counts the number of directory updates that were required; these result in writes to the memory controller. The subevents filter by directory sets and directory clears.

unc_h_igr_ad_qpi2_accumulator | uncore cache | event=0x59 | AD QPI Link 2 Credit Accumulator. Accumulates the number of credits available to the QPI Link 2 AD Ingress buffer.
unc_h_igr_bl_qpi2_accumulator | uncore cache | event=0x5a | BL QPI Link 2 Credit Accumulator. Accumulates the number of credits available to the QPI Link 2 BL Ingress buffer.
unc_h_igr_credits_ad_qpi2 | uncore cache | event=0x59 | AD QPI Link 2 Credit Accumulator (same encoding and description as unc_h_igr_ad_qpi2_accumulator).
unc_h_igr_credits_bl_qpi2 | uncore cache | event=0x5a | BL QPI Link 2 Credit Accumulator (same encoding and description as unc_h_igr_bl_qpi2_accumulator).

unc_h_iodc_conflicts.any | uncore cache | event=0x57,umask=0x1 | IODC Conflicts; Any Conflict
unc_h_iodc_conflicts.last | uncore cache | event=0x57,umask=0x4 | IODC Conflicts; Last Conflict
unc_h_iodc_conflicts.remote_invi2e_same_rtid | uncore cache | event=0x57,umask=0x1 | IODC Conflicts: Remote InvItoE - Same RTID
unc_h_iodc_conflicts.remote_other_same_addr | uncore cache | event=0x57,umask=0x4 | IODC Conflicts: Remote (Other) - Same Addr
unc_h_iodc_inserts | uncore cache | event=0x56 | IODC Inserts (IODC allocations).
unc_h_iodc_olen_wbmtoi | uncore cache | event=0x58 | Num IODC 0 Length Writes: zero-length IODC writebacks (WbMtoI, M to I), all of which are dropped.
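Because every directory update results in a write to the memory controller (per the unc_h_directory_update descriptions above), summing the six state-transition counts estimates the directory-induced write traffic at the home agent. A small sketch with made-up counter readings:

    # Sample readings of the six directory state-transition counters.
    updates = {"a2i": 5_000, "a2s": 2_000, "i2a": 9_000,
               "i2s": 1_000, "s2a": 3_000, "s2i": 4_000}

    # Each update is one write to the memory controller.
    total_updates = sum(updates.values())
    print(f"directory updates (memory-controller writes): {total_updates}")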
All unc_h_ring_ad_used.*, unc_h_ring_ak_used.*, and unc_h_ring_bl_used.* events count the number of cycles that the named ring (AD, AK, or BL) is in use at this (HA) ring stop, with the same inclusion rules as the CBo ring events above: cycles when packets are passing by or being sunk are counted, cycles when packets are being sent from the ring stop are not. The umask selects Clockwise or Counterclockwise direction combined with Even/Odd ring polarity on Virtual Ring 0 or 1, as the event names indicate.

unc_h_ring_ad_used.ccw | uncore cache | event=0x3e,umask=0xcc | HA AD Ring in Use; Counterclockwise
unc_h_ring_ad_used.ccw_vr0_even | uncore cache | event=0x3e,umask=0x4 | HA AD Ring in Use; Counterclockwise and Even on VRing 0
unc_h_ring_ad_used.ccw_vr0_odd | uncore cache | event=0x3e,umask=0x8 | HA AD Ring in Use; Counterclockwise and Odd on VRing 0
unc_h_ring_ad_used.ccw_vr1_even | uncore cache | event=0x3e,umask=0x40 | HA AD Ring in Use; Counterclockwise and Even on VRing 1
unc_h_ring_ad_used.ccw_vr1_odd | uncore cache | event=0x3e,umask=0x80 | HA AD Ring in Use; Counterclockwise and Odd on VRing 1
unc_h_ring_ad_used.cw | uncore cache | event=0x3e,umask=0x33 | HA AD Ring in Use; Clockwise
unc_h_ring_ad_used.cw_vr0_even | uncore cache | event=0x3e,umask=0x1 | HA AD Ring in Use; Clockwise and Even on VRing 0
unc_h_ring_ad_used.cw_vr0_odd | uncore cache | event=0x3e,umask=0x2 | HA AD Ring in Use; Clockwise and Odd on VRing 0
unc_h_ring_ad_used.cw_vr1_even | uncore cache | event=0x3e,umask=0x10 | HA AD Ring in Use; Clockwise and Even on VRing 1
unc_h_ring_ad_used.cw_vr1_odd | uncore cache | event=0x3e,umask=0x20 | HA AD Ring in Use; Clockwise and Odd on VRing 1

unc_h_ring_ak_used.* | uncore cache | event=0x3f | HA AK Ring in Use; same ten subevents and umasks as the AD family above.
unc_h_ring_bl_used.* | uncore cache | event=0x40 | HA BL Ring in Use; same ten subevents and umasks as the AD family above.
unc_h_snoop_resp.rspsfwd | uncore cache | event=0x21,umask=0x8 | Snoop Responses Received; RspSFwd. Counts the total number of snoop responses of this type received. Whenever snoops are issued, one or more snoop responses will be returned, depending on the topology of the system; in systems larger than 2 sockets, when multiple snoops are returned this counts all of the snoops received. For example, if 3 snoops were issued and returned RspI, RspS, and RspSFwd, each of those subevents would increment by 1. RspSFwd is returned when a remote caching agent forwards data but holds on to its current copy; this is common for data and code reads that hit in a remote socket in E or F state.

unc_h_tracker_cycles_ne | uncore cache | event=0x3 | Tracker Cycles Not Empty. Counts the number of cycles when the local HA tracker pool is not empty. This can be used with edge detect to identify the number of times the pool became empty. It should not be confused with RTID credit usage, which must be tracked inside each CBo individually; this represents the actual tracker buffer structure, so the buffer could be completely empty while credits are still in use by the CBos. This stat can be used in conjunction with the occupancy accumulation stat to calculate average queue occupancy. HA trackers are allocated as soon as a request enters the HA, if an HT (Home Tracker) entry is available, and are released after the snoop response and data return (or post, in the case of a write) and the response is returned on the ring.

unc_h_txr_ad_occupancy.sched0 | uncore cache | event=0x28,umask=0x1 | AD Egress Occupancy; Scheduler 0 (occupancy from scheduler bank 0)
unc_h_txr_ad_occupancy.sched1 | uncore cache | event=0x28,umask=0x2 | AD Egress Occupancy; Scheduler 1 (occupancy from scheduler bank 1)
unc_h_txr_ak.crd_cbo | uncore cache | event=0xe,umask=0x2 | Outbound Ring Transactions on AK: CRD Transactions to Cbo
unc_h_txr_ak_occupancy.sched0 | uncore cache | event=0x30,umask=0x1 | AK Egress Occupancy; Scheduler 0
unc_h_txr_ak_occupancy.sched1 | uncore cache | event=0x30,umask=0x2 | AK Egress Occupancy; Scheduler 1
unc_h_txr_bl_occupancy.all | uncore cache | event=0x34,umask=0x3 | BL Egress Occupancy: All
unc_h_txr_bl_occupancy.sched0 | uncore cache | event=0x34,umask=0x1 | BL Egress Occupancy; Scheduler 0
unc_h_txr_bl_occupancy.sched1 | uncore cache | event=0x34,umask=0x2 | BL Egress Occupancy; Scheduler 1

unc_i_address_match.merge_count | uncore interconnect | event=0x17,umask=0x2 | Address Match (Conflict) Count; Conflict Merges. Counts the number of times an inbound write (from a device to memory or another device) had an address match with another request in the write cache. When two requests to the same address from the same source arrive back to back, it is possible to merge them together.
unc_i_address_match.stall_count | uncore interconnect | event=0x17,umask=0x1 | Address Match (Conflict) Count; Conflict Stalls. Counts the same address-match condition for cases where the two conflicting requests cannot be merged; a stall event then occurs, which is bad for performance.
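unc_h_tracker_cycles_ne above is described as pairing with an occupancy accumulation stat to yield average queue occupancy (summed occupancy divided by not-empty cycles). A minimal sketch; the occupancy counter name is an assumption, since the companion event is referenced but not listed here:

    def avg_tracker_occupancy(occupancy_sum: int, cycles_not_empty: int) -> float:
        """Average HA tracker occupancy while the pool is non-empty.

        occupancy_sum:    companion occupancy-accumulation counter
                          (assumed; referenced but not listed in this table)
        cycles_not_empty: unc_h_tracker_cycles_ne (event=0x3)
        """
        return occupancy_sum / cycles_not_empty if cycles_not_empty else 0.0

    print(avg_tracker_occupancy(8_000_000, 500_000))  # -> 16 entries on average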
unc_i_cache_ack_pending_occupancy.any | uncore interconnect | event=0x14,umask=0x1 | Write Ack Pending Occupancy; Any Source
unc_i_cache_ack_pending_occupancy.source | uncore interconnect | event=0x14,umask=0x2 | Write Ack Pending Occupancy; Select Source
Accumulates the number of writes that have acquired ownership but have not yet returned their data to the uncore. These writes are generally queued up in the switch trying to get to the head of their queues so that they can post their data. The queue occupancy increments when the ACK is received, and decrements when either the data is returned OR a tickle is received and ownership is released; note that a single tickle can result in multiple decrements. The .any subevent tracks requests from any source port; the .source subevent tracks only requests from the port specified in the IRP_PmonFilter.OrderingQ register, which selects one specific queue (it is not possible to monitor multiple queues at a time).

unc_i_cache_own_occupancy.any | uncore interconnect | event=0x13,umask=0x1 | Outstanding Write Ownership Occupancy; Any Source
unc_i_cache_own_occupancy.source | uncore interconnect | event=0x13,umask=0x2 | Outstanding Write Ownership Occupancy; Select Source
Accumulates the number of writes (and write prefetches) that are outstanding in the uncore trying to acquire ownership in each cycle. This can be used with the write transaction count to calculate the average write latency in the uncore. The occupancy increments when a write request is issued and decrements when the data is returned. The same .any/.source port filtering applies as above.

unc_i_cache_read_occupancy.any | uncore interconnect | event=0x10,umask=0x1 | Outstanding Read Occupancy; Any Source
unc_i_cache_read_occupancy.source | uncore interconnect | event=0x10,umask=0x2 | Outstanding Read Occupancy; Select Source
Accumulates the number of reads that are outstanding in the uncore in each cycle. This can be used with the read transaction count to calculate the average read latency in the uncore. The occupancy increments when a read request is issued and decrements when the data is returned. The same port filtering applies.

unc_i_cache_write_occupancy.any | uncore interconnect | event=0x11,umask=0x1 | Outstanding Write Occupancy; Any Source
unc_i_cache_write_occupancy.source | uncore interconnect | event=0x11,umask=0x2 | Outstanding Write Occupancy; Select Source
Accumulates the number of writes (and write prefetches) that are outstanding in the uncore in each cycle. This can be used with the transaction count event to calculate the average latency in the uncore. The occupancy increments when the ownership fetch/prefetch is issued and decrements when the data is returned to the uncore. The same port filtering applies.

unc_i_rxr_ak_cycles_full | uncore interconnect | event=0xb | Counts the number of cycles when the AK Ingress is full. This queue is where the IRP receives responses from R2PCIe (the ring).
unc_i_rxr_ak_occupancy | uncore interconnect | event=0xc | Accumulates the occupancy of the AK Ingress in each cycle (same queue as above).
unc_i_rxr_bl_drs_cycles_full | uncore interconnect | event=0x4 | Counts the number of cycles when the BL DRS Ingress is full. The BL Ingress queues are where the IRP receives data from R2PCIe (the ring); they are used for data returns from read requests as well as outbound MMIO writes.
unc_i_rxr_bl_drs_occupancy | uncore interconnect | event=0x7 | Accumulates the occupancy of the BL DRS Ingress in each cycle.
unc_i_rxr_bl_ncb_cycles_full | uncore interconnect | event=0x5 | Counts the number of cycles when the BL NCB Ingress is full.
unc_i_rxr_bl_ncb_occupancy | uncore interconnect | event=0x8 | Accumulates the occupancy of the BL NCB Ingress in each cycle.
unc_i_rxr_bl_ncs_cycles_full | uncore interconnect | event=0x6 | Counts the number of cycles when the BL NCS Ingress is full.
unc_i_rxr_bl_ncs_occupancy | uncore interconnect | event=0x9 | Accumulates the occupancy of the BL NCS Ingress in each cycle.
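Per the unc_i_cache_read_occupancy description above, dividing accumulated read occupancy by the read transaction count (unc_i_transactions.reads, listed just below) yields the average read latency in the uncore, in uncore cycles. A minimal sketch with illustrative values:

    def avg_uncore_read_latency_cycles(read_occupancy_sum: int, reads: int) -> float:
        """Average uncore read latency in cycles.

        read_occupancy_sum: unc_i_cache_read_occupancy.any (event=0x10,umask=0x1)
        reads:              unc_i_transactions.reads       (event=0x15,umask=0x1)
        """
        return read_occupancy_sum / reads if reads else 0.0

    # 12,000,000 occupancy-cycles across 60,000 reads -> 200 cycles per read
    print(avg_uncore_read_latency_cycles(12_000_000, 60_000))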
unc_i_tickles.lost_ownership | uncore interconnect | event=0x16,umask=0x1 | Tickle Count; Ownership Lost
unc_i_tickles.top_of_queue | uncore interconnect | event=0x16,umask=0x2 | Tickle Count; Data Returned
Both count tickles received, whether explicit (from a CBo) or implicit (an internal conflict). The .lost_ownership subevent tracks requests that lost ownership as a result of a tickle: when a tickle comes in and the request is not at the head of the queue in the switch, that request and any requests behind it in the switch queue lose ownership and must re-acquire it later when they reach the head of the queue, so this counts requests that lost ownership rather than just tickles. The .top_of_queue subevent tracks cases where a tickle was received but the request was already at the head of the queue in the switch; in that case, data is returned rather than releasing ownership.

unc_i_transactions.* | uncore interconnect | event=0x15 | Inbound Transaction Count. Counts Inbound transactions from the IRP to the Uncore; these can be filtered by request type in addition to the source queue. Note the special filtering equation: an OR-reduction is done on the request type, and if the SOURCE bit is set, an AND qualification is also applied based on the source portID.
unc_i_transactions.pd_prefetches | umask=0x4 | Read Prefetches
unc_i_transactions.rd_prefetches | umask=0x4 | Read Prefetches (tracks the number of read prefetches)
unc_i_transactions.reads | umask=0x1 | Reads (tracks only read requests, not including read prefetches)
unc_i_transactions.writes | umask=0x2 | Writes (tracks only write requests; each write request should have a prefetch, so there is no need to track those explicitly. For writes that are tickled and have to retry, the counter increments once per retry.)

unc_i_write_ordering_stall_cycles | uncore interconnect | event=0x1a | Write Ordering Stalls. Counts the number of cycles when there are pending write ACKs in the switch but the switch-to-IRP pipeline is not utilized.

unc_q_clockticks | uncore interconnect | event=0x14 | Number of qfclks. Counts the number of clocks in the QPI LL. This clock runs at 1/8th the GT/s speed of the QPI link; for example, an 8 GT/s link has a 1 GHz qfclk. JKT does not support dynamic link speeds, so this frequency is fixed.
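The qfclk relationship above (qfclk = link GT/s divided by 8, fixed on JKT) makes unc_q_clockticks a convenient time base for the QPI events that follow. A small sketch of the conversion:

    def qfclk_hz(link_gts: float) -> float:
        """qfclk frequency: the QPI LL clock runs at 1/8th the link's GT/s rate."""
        return link_gts * 1e9 / 8

    def qpi_seconds(clockticks: int, link_gts: float) -> float:
        """Convert a unc_q_clockticks reading into wall time."""
        return clockticks / qfclk_hz(link_gts)

    print(qfclk_hz(8.0))                    # 8 GT/s link -> 1e9 (1 GHz qfclk)
    print(qpi_seconds(2_000_000_000, 8.0))  # 2e9 qfclks at 8 GT/s -> 2.0 s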
unc_q_match_mask | uncore interconnect | event=0x38
unc_q_message.* | uncore interconnect | event=0x38 | Message-match events sharing the event=0x38 encoding above; the table carries no further description for them. One subevent per QPI message class/opcode:
  drs: anydatac, anyresp, anyresp11flits, anyresp9flits, datac_e, datac_e_cmp, datac_e_frcackcnflt, datac_f, datac_f_cmp, datac_f_frcackcnflt, datac_m, wbedata, wbidata, wbsdata
  hom: anyreq, anyresp, respfwd, respfwdi, respfwdiwb, respfwds, respfwdswb, respiwb, respswb
  ncb: anyint, anymsg, anymsg11flits, anymsg9flits
  ncs: anymsg1or2flits, anymsg3flits, ncrd
  ndr: anycmp
  snp: anysnp

unc_q_rxl_flits_g0.data | uncore interconnect | event=0x1,umask=0x2 | Flits Received - Group 0; Data Tx Flits
unc_q_rxl_flits_g0.non_data | uncore interconnect | event=0x1,umask=0x4 | Flits Received - Group 0; Non-Data protocol Tx Flits
Both count the number of flits received from the QPI Link, with filters for Idle, protocol, and Data flits. Each flit carries 80 bits of information (in addition to some ECC data). In full-width (L0) mode a flit is made up of four fits, each containing 20 bits of data (along with some additional ECC data); in half-width (L0p) mode the fits are only 10 bits, so it takes twice as many fits to transmit a flit. Quoted QPI speeds (for example, 8.0 GT/s) refer to fits, so in L0 the link transfers one flit at 1/4th the QPI speed. Link bandwidth can be calculated as flits * 80b / time. Note that this is not the same as data bandwidth: a 64B cacheline crossing QPI is broken into 9 flits, 1 with header information and 8 each carrying 64 bits of actual data plus an additional 16 bits of other information. Data bandwidth is therefore data flits * 8B / time for L0 (4B instead of 8B for L0p). The .data subevent counts data flits, each carrying 64b of data, including both DRS and NCB (coherent and non-coherent) data flits but not the header flits that go in data packets; it can be used to calculate the data bandwidth of the link. The .non_data subevent counts non-NULL, non-data protocol flits, which tracks the protocol overhead on the QPI link, and does include the header flits for data packets. Evaluating protocol flits, data flits, and idle/null flits together gives a good picture of the QPI-link characteristics.
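The flit description above gives both bandwidth formulas: link bandwidth is flits * 80b / time, and data bandwidth is data flits * 8B / time in L0 (4B in L0p). A minimal sketch applying both, assuming counter deltas and an elapsed time are already in hand:

    def qpi_link_bandwidth_Bps(flits: int, seconds: float) -> float:
        """Raw link bandwidth in bytes/s: each flit carries 80 bits."""
        return flits * 80 / 8 / seconds

    def qpi_data_bandwidth_Bps(data_flits: int, seconds: float, l0p: bool = False) -> float:
        """Data bandwidth in bytes/s: 8B of payload per data flit in L0, 4B in L0p."""
        return data_flits * (4 if l0p else 8) / seconds

    # 1e9 total flits and 7.2e8 data flits received over 1 second:
    print(qpi_link_bandwidth_Bps(1_000_000_000, 1.0) / 1e9, "GB/s on the wire")
    print(qpi_data_bandwidth_Bps(720_000_000, 1.0) / 1e9, "GB/s of data")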
This does not include the header flits that go in data packetsunc_q_rxl_flits_g0.non_datauncore interconnectFlits Received - Group 0; Non-Data protocol Tx Flitsevent=1,umask=401Counts the number of flits received from the QPI Link.  It includes filters for Idle, protocol, and Data Flits.  Each flit is made up of 80 bits of information (in addition to some ECC data).  In full-width (L0) mode, flits are made up of four fits, each of which contains 20 bits of data (along with some additional ECC data).   In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit.  When one talks about QPI speed (for example, 8.0 GT/s), the transfers here refer to fits.  Therefore, in L0, the system will transfer 1 flit at the rate of 1/4th the QPI speed.  One can calculate the bandwidth of the link by taking: flits*80b/time.  Note that this is not the same as data bandwidth.  For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual data and an additional 16 bits of other information.  To calculate data bandwidth, one should therefore do: data flits * 8B / time (for L0) or 4B instead of 8B for L0p.; Number of non-NULL non-data flits received across QPI.  This basically tracks the protocol overhead on the QPI link.  One can get a good picture of the QPI-link characteristics by evaluating the protocol flits, data flits, and idle/null flits.  This includes the header flits for data packetsunc_q_rxl_inserts_drsuncore interconnectRx Flit Buffer Allocations - DRSevent=901Number of allocations into the QPI Rx Flit Buffer.  Generally, when data is transmitted across QPI, it will bypass the RxQ and pass directly to the ring interface.  If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency.  This event can be used in conjunction with the Flit Buffer Occupancy event in order to calculate the average flit buffer lifetime.  This monitors only DRS flitsunc_q_rxl_inserts_homuncore interconnectRx Flit Buffer Allocations - HOMevent=0xc01Number of allocations into the QPI Rx Flit Buffer.  Generally, when data is transmitted across QPI, it will bypass the RxQ and pass directly to the ring interface.  If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency.  This event can be used in conjunction with the Flit Buffer Occupancy event in order to calculate the average flit buffer lifetime.  This monitors only HOM flitsunc_q_rxl_inserts_ncbuncore interconnectRx Flit Buffer Allocations - NCBevent=0xa01Number of allocations into the QPI Rx Flit Buffer.  Generally, when data is transmitted across QPI, it will bypass the RxQ and pass directly to the ring interface.  If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency.  This event can be used in conjunction with the Flit Buffer Occupancy event in order to calculate the average flit buffer lifetime.  This monitors only NCB flitsunc_q_rxl_inserts_ncsuncore interconnectRx Flit Buffer Allocations - NCSevent=0xb01Number of allocations into the QPI Rx Flit Buffer.  Generally, when data is transmitted across QPI, it will bypass the RxQ and pass directly to the ring interface.  If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency.  
This event can be used in conjunction with the Flit Buffer Occupancy event in order to calculate the average flit buffer lifetime.  This monitors only NCS flitsunc_q_rxl_inserts_ndruncore interconnectRx Flit Buffer Allocations - NDRevent=0xe01Number of allocations into the QPI Rx Flit Buffer.  Generally, when data is transmitted across QPI, it will bypass the RxQ and pass directly to the ring interface.  If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency.  This event can be used in conjunction with the Flit Buffer Occupancy event in order to calculate the average flit buffer lifetime.  This monitors only NDR flitsunc_q_rxl_inserts_snpuncore interconnectRx Flit Buffer Allocations - SNPevent=0xd01Number of allocations into the QPI Rx Flit Buffer.  Generally, when data is transmitted across QPI, it will bypass the RxQ and pass directly to the ring interface.  If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency.  This event can be used in conjunction with the Flit Buffer Occupancy event in order to calculate the average flit buffer lifetime.  This monitors only SNP flitsunc_q_rxl_occupancy_drsuncore interconnectRxQ Occupancy - DRSevent=0x1501Accumulates the number of elements in the QPI RxQ in each cycle.  Generally, when data is transmitted across QPI, it will bypass the RxQ and pass directly to the ring interface.  If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency.  This event can be used in conjunction with the Flit Buffer Not Empty event to calculate average occupancy, or with the Flit Buffer Allocations event to track average lifetime.  This monitors DRS flits onlyunc_q_rxl_occupancy_homuncore interconnectRxQ Occupancy - HOMevent=0x1801Accumulates the number of elements in the QPI RxQ in each cycle.  Generally, when data is transmitted across QPI, it will bypass the RxQ and pass directly to the ring interface.  If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency.  This event can be used in conjunction with the Flit Buffer Not Empty event to calculate average occupancy, or with the Flit Buffer Allocations event to track average lifetime.  This monitors HOM flits onlyunc_q_rxl_occupancy_ncbuncore interconnectRxQ Occupancy - NCBevent=0x1601Accumulates the number of elements in the QPI RxQ in each cycle.  Generally, when data is transmitted across QPI, it will bypass the RxQ and pass directly to the ring interface.  If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency.  This event can be used in conjunction with the Flit Buffer Not Empty event to calculate average occupancy, or with the Flit Buffer Allocations event to track average lifetime.  This monitors NCB flits onlyunc_q_rxl_occupancy_ncsuncore interconnectRxQ Occupancy - NCSevent=0x1701Accumulates the number of elements in the QPI RxQ in each cycle.  Generally, when data is transmitted across QPI, it will bypass the RxQ and pass directly to the ring interface.  If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency.  
This event can be used in conjunction with the Flit Buffer Not Empty event to calculate average occupancy, or with the Flit Buffer Allocations event to track average lifetime.  This monitors NCS flits onlyunc_q_rxl_occupancy_ndruncore interconnectRxQ Occupancy - NDRevent=0x1a01Accumulates the number of elements in the QPI RxQ in each cycle.  Generally, when data is transmitted across QPI, it will bypass the RxQ and pass directly to the ring interface.  If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency.  This event can be used in conjunction with the Flit Buffer Not Empty event to calculate average occupancy, or with the Flit Buffer Allocations event to track average lifetime.  This monitors NDR flits onlyunc_q_rxl_occupancy_snpuncore interconnectRxQ Occupancy - SNPevent=0x1901Accumulates the number of elements in the QPI RxQ in each cycle.  Generally, when data is transmitted across QPI, it will bypass the RxQ and pass directly to the ring interface.  If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency.  This event can be used in conjunction with the Flit Buffer Not Empty event to calculate average occupancy, or with the Flit Buffer Allocations event to track average lifetime.  This monitors SNP flits onlyunc_q_txr_ak_ndr_credit_acquired.vn0uncore interconnectR3QPI Egress Credit Occupancy - AK NDR: for VN0event=0x29,umask=101Number of credits into the R3 (for transactions across the BGF) acquired each cycle. Local NDR message class to AK Egressunc_q_txr_ak_ndr_credit_acquired.vn1uncore interconnectR3QPI Egress Credit Occupancy - AK NDR: for VN1event=0x29,umask=201Number of credits into the R3 (for transactions across the BGF) acquired each cycle. Local NDR message class to AK Egressunc_q_txr_ak_ndr_credit_occupancy.vn0uncore interconnectR3QPI Egress Credit Occupancy - AK NDR: for VN0event=0x25,umask=101Occupancy event that tracks the number of credits into the R3 (for transactions across the BGF) available in each cycle.  Local NDR message class to AK Egressunc_q_txr_ak_ndr_credit_occupancy.vn1uncore interconnectR3QPI Egress Credit Occupancy - AK NDR: for VN1event=0x25,umask=201Occupancy event that tracks the number of credits into the R3 (for transactions across the BGF) available in each cycle.  
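The allocation and occupancy events above combine in the standard way (Little's law): an occupancy accumulator divided by allocations gives average buffer lifetime, and divided by not-empty cycles gives average depth. A minimal sketch with hypothetical readings; the "Flit Buffer Not Empty" event is referenced in the descriptions but not listed in this excerpt, so its name here is assumed:

# Hypothetical readings for the DRS class (events 0x9 and 0x15 above).
drs_occupancy_sum = 4_500_000   # UNC_Q_RXL_OCCUPANCY_DRS, summed each cycle
drs_inserts       = 300_000     # UNC_Q_RXL_INSERTS_DRS
drs_not_empty     = 900_000     # assumed "Flit Buffer Not Empty" event

avg_lifetime_cycles = drs_occupancy_sum / drs_inserts    # cycles per entry
avg_depth_when_busy = drs_occupancy_sum / drs_not_empty  # entries, when busy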
Local NDR message class to AK Egressunc_r3_c_hi_ad_credits_empty.cbo10uncore interconnectCBox AD Credits Emptyevent=0x2c,umask=401No credits available to send to Cbox on the AD Ring (covers higher CBoxes); Cbox 10unc_r3_c_hi_ad_credits_empty.cbo11uncore interconnectCBox AD Credits Emptyevent=0x2c,umask=801No credits available to send to Cbox on the AD Ring (covers higher CBoxes); Cbox 11unc_r3_c_hi_ad_credits_empty.cbo12uncore interconnectCBox AD Credits Emptyevent=0x2c,umask=0x1001No credits available to send to Cbox on the AD Ring (covers higher CBoxes); Cbox 12unc_r3_c_hi_ad_credits_empty.cbo13uncore interconnectCBox AD Credits Emptyevent=0x2c,umask=0x2001No credits available to send to Cbox on the AD Ring (covers higher CBoxes); Cbox 13unc_r3_c_hi_ad_credits_empty.cbo14uncore interconnectCBox AD Credits Emptyevent=0x2c,umask=0x4001No credits available to send to Cbox on the AD Ring (covers higher CBoxes); Cbox 14&16unc_r3_c_hi_ad_credits_empty.cbo8uncore interconnectCBox AD Credits Emptyevent=0x2c,umask=101No credits available to send to Cbox on the AD Ring (covers higher CBoxes); Cbox 8unc_r3_c_hi_ad_credits_empty.cbo9uncore interconnectCBox AD Credits Emptyevent=0x2c,umask=201No credits available to send to Cbox on the AD Ring (covers higher CBoxes); Cbox 9unc_r3_c_lo_ad_credits_empty.cbo0uncore interconnectCBox AD Credits Emptyevent=0x2b,umask=101No credits available to send to Cbox on the AD Ring (covers lower CBoxes); Cbox 0unc_r3_c_lo_ad_credits_empty.cbo1uncore interconnectCBox AD Credits Emptyevent=0x2b,umask=201No credits available to send to Cbox on the AD Ring (covers lower CBoxes); Cbox 1unc_r3_c_lo_ad_credits_empty.cbo2uncore interconnectCBox AD Credits Emptyevent=0x2b,umask=401No credits available to send to Cbox on the AD Ring (covers lower CBoxes); Cbox 2unc_r3_c_lo_ad_credits_empty.cbo3uncore interconnectCBox AD Credits Emptyevent=0x2b,umask=801No credits available to send to Cbox on the AD Ring (covers lower CBoxes); Cbox 3unc_r3_c_lo_ad_credits_empty.cbo4uncore interconnectCBox AD Credits Emptyevent=0x2b,umask=0x1001No credits available to send to Cbox on the AD Ring (covers lower CBoxes); Cbox 4unc_r3_c_lo_ad_credits_empty.cbo5uncore interconnectCBox AD Credits Emptyevent=0x2b,umask=0x2001No credits available to send to Cbox on the AD Ring (covers lower CBoxes); Cbox 5unc_r3_c_lo_ad_credits_empty.cbo6uncore interconnectCBox AD Credits Emptyevent=0x2b,umask=0x4001No credits available to send to Cbox on the AD Ring (covers lower CBoxes); Cbox 6unc_r3_c_lo_ad_credits_empty.cbo7uncore interconnectCBox AD Credits Emptyevent=0x2b,umask=0x8001No credits available to send to Cbox on the AD Ring (covers lower CBoxes); Cbox 7unc_r3_ha_r2_bl_credits_empty.ha0uncore interconnectHA/R2 AD Credits Emptyevent=0x2f,umask=101No credits available to send to either HA or R2 on the BL Ring; HA0unc_r3_ha_r2_bl_credits_empty.ha1uncore interconnectHA/R2 AD Credits Emptyevent=0x2f,umask=201No credits available to send to either HA or R2 on the BL Ring; HA1unc_r3_ha_r2_bl_credits_empty.r2_ncbuncore interconnectHA/R2 AD Credits Emptyevent=0x2f,umask=401No credits available to send to either HA or R2 on the BL Ring; R2 NCB Messagesunc_r3_ha_r2_bl_credits_empty.r2_ncsuncore interconnectHA/R2 AD Credits Emptyevent=0x2f,umask=801No credits available to send to either HA or R2 on the BL Ring; R2 NCS Messagesunc_r3_qpi0_ad_credits_empty.vn0_homuncore interconnectQPI0 AD Credits Emptyevent=0x29,umask=201No credits available to send to QPI0 on the AD Ring; VN0 HOM 
Messagesunc_r3_qpi0_ad_credits_empty.vn0_ndruncore interconnectQPI0 AD Credits Emptyevent=0x29,umask=801No credits available to send to QPI0 on the AD Ring; VN0 NDR Messagesunc_r3_qpi0_ad_credits_empty.vn0_snpuncore interconnectQPI0 AD Credits Emptyevent=0x29,umask=401No credits available to send to QPI0 on the AD Ring; VN0 SNP Messagesunc_r3_qpi0_ad_credits_empty.vn1_homuncore interconnectQPI0 AD Credits Emptyevent=0x29,umask=0x1001No credits available to send to QPI0 on the AD Ring; VN1 HOM Messagesunc_r3_qpi0_ad_credits_empty.vn1_ndruncore interconnectQPI0 AD Credits Emptyevent=0x29,umask=0x4001No credits available to send to QPI0 on the AD Ring; VN1 NDR Messagesunc_r3_qpi0_ad_credits_empty.vn1_snpuncore interconnectQPI0 AD Credits Emptyevent=0x29,umask=0x2001No credits available to send to QPI0 on the AD Ring; VN1 SNP Messagesunc_r3_qpi0_ad_credits_empty.vnauncore interconnectQPI0 AD Credits Emptyevent=0x29,umask=101No credits available to send to QPI0 on the AD Ring; VNAunc_r3_qpi0_bl_credits_empty.vn0_homuncore interconnectQPI0 BL Credits Emptyevent=0x2d,umask=201No credits available to send to QPI0 on the BL Ring; VN0 HOM Messagesunc_r3_qpi0_bl_credits_empty.vn0_ndruncore interconnectQPI0 BL Credits Emptyevent=0x2d,umask=801No credits available to send to QPI0 on the BL Ring; VN0 NDR Messagesunc_r3_qpi0_bl_credits_empty.vn0_snpuncore interconnectQPI0 BL Credits Emptyevent=0x2d,umask=401No credits available to send to QPI0 on the BL Ring; VN0 SNP Messagesunc_r3_qpi0_bl_credits_empty.vn1_homuncore interconnectQPI0 BL Credits Emptyevent=0x2d,umask=0x1001No credits available to send to QPI0 on the BL Ring; VN1 HOM Messagesunc_r3_qpi0_bl_credits_empty.vn1_ndruncore interconnectQPI0 BL Credits Emptyevent=0x2d,umask=0x4001No credits available to send to QPI0 on the BL Ring; VN1 NDR Messagesunc_r3_qpi0_bl_credits_empty.vn1_snpuncore interconnectQPI0 BL Credits Emptyevent=0x2d,umask=0x2001No credits available to send to QPI0 on the BL Ring; VN1 SNP Messagesunc_r3_qpi0_bl_credits_empty.vnauncore interconnectQPI0 BL Credits Emptyevent=0x2d,umask=101No credits available to send to QPI0 on the BL Ring; VNAunc_r3_qpi1_ad_credits_empty.vn0_homuncore interconnectQPI1 AD Credits Emptyevent=0x2a,umask=201No credits available to send to QPI1 on the AD Ring; VN0 HOM Messagesunc_r3_qpi1_ad_credits_empty.vn0_ndruncore interconnectQPI1 AD Credits Emptyevent=0x2a,umask=801No credits available to send to QPI1 on the AD Ring; VN0 NDR Messagesunc_r3_qpi1_ad_credits_empty.vn0_snpuncore interconnectQPI1 AD Credits Emptyevent=0x2a,umask=401No credits available to send to QPI1 on the AD Ring; VN0 SNP Messagesunc_r3_qpi1_ad_credits_empty.vn1_homuncore interconnectQPI1 AD Credits Emptyevent=0x2a,umask=0x1001No credits available to send to QPI1 on the AD Ring; VN1 HOM Messagesunc_r3_qpi1_ad_credits_empty.vn1_ndruncore interconnectQPI1 AD Credits Emptyevent=0x2a,umask=0x4001No credits available to send to QPI1 on the AD Ring; VN1 NDR Messagesunc_r3_qpi1_ad_credits_empty.vn1_snpuncore interconnectQPI1 AD Credits Emptyevent=0x2a,umask=0x2001No credits available to send to QPI1 on the AD Ring; VN1 SNP Messagesunc_r3_qpi1_ad_credits_empty.vnauncore interconnectQPI1 AD Credits Emptyevent=0x2a,umask=101No credits available to send to QPI1 on the AD Ring; VNAunc_r3_qpi1_bl_credits_empty.vn0_homuncore interconnectQPI1 BL Credits Emptyevent=0x2e,umask=201No credits available to send to QPI1 on the BL Ring; VN0 HOM Messagesunc_r3_qpi1_bl_credits_empty.vn0_ndruncore interconnectQPI1 BL Credits Emptyevent=0x2e,umask=801No 
credits available to send to QPI1 on the BL Ring; VN0 NDR Messagesunc_r3_qpi1_bl_credits_empty.vn0_snpuncore interconnectQPI1 BL Credits Emptyevent=0x2e,umask=401No credits available to send to QPI1 on the BL Ring; VN0 SNP Messagesunc_r3_qpi1_bl_credits_empty.vn1_homuncore interconnectQPI1 BL Credits Emptyevent=0x2e,umask=0x1001No credits available to send to QPI1 on the BL Ring; VN1 HOM Messagesunc_r3_qpi1_bl_credits_empty.vn1_ndruncore interconnectQPI1 BL Credits Emptyevent=0x2e,umask=0x4001No credits available to send to QPI1 on the BL Ring; VN1 NDR Messagesunc_r3_qpi1_bl_credits_empty.vn1_snpuncore interconnectQPI1 BL Credits Emptyevent=0x2e,umask=0x2001No credits available to send to QPI1 on the BL Ring; VN1 SNP Messagesunc_r3_qpi1_bl_credits_empty.vnauncore interconnectQPI1 BL Credits Emptyevent=0x2e,umask=101No credits available to send to QPI1 on the BL Ring; VNAunc_r3_ring_ad_used.ccwuncore interconnectR3 AD Ring in Use; Counterclockwiseevent=7,umask=0xcc01Counts the number of cycles that the AD ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stopunc_r3_ring_ad_used.ccw_vr0_evenuncore interconnectR3 AD Ring in Use; Counterclockwise and Even on VRing 0event=7,umask=401Counts the number of cycles that the AD ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.; Filters for the Counterclockwise and Even ring polarity on Virtual Ring 0unc_r3_ring_ad_used.ccw_vr0_odduncore interconnectR3 AD Ring in Use; Counterclockwise and Odd on VRing 0event=7,umask=801Counts the number of cycles that the AD ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.; Filters for the Counterclockwise and Odd ring polarity on Virtual Ring 0unc_r3_ring_ad_used.cwuncore interconnectR3 AD Ring in Use; Clockwiseevent=7,umask=0x3301Counts the number of cycles that the AD ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stopunc_r3_ring_ad_used.cw_vr0_evenuncore interconnectR3 AD Ring in Use; Clockwise and Even on VRing 0event=7,umask=101Counts the number of cycles that the AD ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.; Filters for the Clockwise and Even ring polarity on Virtual Ring 0unc_r3_ring_ad_used.cw_vr0_odduncore interconnectR3 AD Ring in Use; Clockwise and Odd on VRing 0event=7,umask=201Counts the number of cycles that the AD ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.; Filters for the Clockwise and Odd ring polarity on Virtual Ring 0unc_r3_ring_ak_used.ccwuncore interconnectR3 AK Ring in Use; Counterclockwiseevent=8,umask=0xcc01Counts the number of cycles that the AK ring is being used at this ring stop.  
This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stopunc_r3_ring_ak_used.ccw_vr0_evenuncore interconnectR3 AK Ring in Use; Counterclockwise and Even on VRing 0event=8,umask=401Counts the number of cycles that the AK ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.; Filters for the Counterclockwise and Even ring polarity on Virtual Ring 0unc_r3_ring_ak_used.ccw_vr0_odduncore interconnectR3 AK Ring in Use; Counterclockwise and Odd on VRing 0event=8,umask=801Counts the number of cycles that the AK ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.; Filters for the Counterclockwise and Odd ring polarity on Virtual Ring 0unc_r3_ring_ak_used.cwuncore interconnectR3 AK Ring in Use; Clockwiseevent=8,umask=0x3301Counts the number of cycles that the AK ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stopunc_r3_ring_ak_used.cw_vr0_evenuncore interconnectR3 AK Ring in Use; Clockwise and Even on VRing 0event=8,umask=101Counts the number of cycles that the AK ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.; Filters for the Clockwise and Even ring polarity on Virtual Ring 0unc_r3_ring_ak_used.cw_vr0_odduncore interconnectR3 AK Ring in Use; Clockwise and Odd on VRing 0event=8,umask=201Counts the number of cycles that the AK ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.; Filters for the Clockwise and Odd ring polarity on Virtual Ring 0unc_r3_ring_bl_used.ccwuncore interconnectR3 BL Ring in Use; Counterclockwiseevent=9,umask=0xcc01Counts the number of cycles that the BL ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stopunc_r3_ring_bl_used.ccw_vr0_evenuncore interconnectR3 BL Ring in Use; Counterclockwise and Even on VRing 0event=9,umask=401Counts the number of cycles that the BL ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.; Filters for the Counterclockwise and Even ring polarity on Virtual Ring 0unc_r3_ring_bl_used.ccw_vr0_odduncore interconnectR3 BL Ring in Use; Counterclockwise and Odd on VRing 0event=9,umask=801Counts the number of cycles that the BL ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.; Filters for the Counterclockwise and Odd ring polarity on Virtual Ring 0unc_r3_ring_bl_used.cwuncore interconnectR3 BL Ring in Use; Clockwiseevent=9,umask=0x3301Counts the number of cycles that the BL ring is being used at this ring stop.  
This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stopunc_r3_ring_bl_used.cw_vr0_evenuncore interconnectR3 BL Ring in Use; Clockwise and Even on VRing 0event=9,umask=101Counts the number of cycles that the BL ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.; Filters for the Clockwise and Even ring polarity on Virtual Ring 0unc_r3_ring_bl_used.cw_vr0_odduncore interconnectR3 BL Ring in Use; Clockwise and Odd on VRing 0event=9,umask=201Counts the number of cycles that the BL ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.; Filters for the Clockwise and Odd ring polarity on Virtual Ring 0unc_r3_ring_iv_used.anyuncore interconnectR2 IV Ring in Use; Anyevent=0xa,umask=0xff01Counts the number of cycles that the IV ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sent, but does not include when packets are being sunk into the ring stop.  The IV ring is unidirectional.  Whether UP or DN is used is dependent on the system programming.  Therefore, one should generally set both the UP and DN bits for a given polarity (or both) at a given time.; Filters any polarityunc_r3_ring_iv_used.ccwuncore interconnectR2 IV Ring in Use; Counterclockwiseevent=0xa,umask=0xcc01Counts the number of cycles that the IV ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sent, but does not include when packets are being sunk into the ring stop.  The IV ring is unidirectional.  Whether UP or DN is used is dependent on the system programming.  Therefore, one should generally set both the UP and DN bits for a given polarity (or both) at a given time.; Filters for Counterclockwise polarityunc_r3_ring_iv_used.cwuncore interconnectR2 IV Ring in Use; Clockwiseevent=0xa,umask=0x3301Counts the number of cycles that the IV ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sent, but does not include when packets are being sunk into the ring stop.  The IV ring is unidirectional.  Whether UP or DN is used is dependent on the system programming.  Therefore, one should generally set both the UP and DN bits for a given polarity (or both) at a given time.; Filters for Clockwise polarityunc_r3_rxr_ad_bypasseduncore interconnectAD Ingress Bypassedevent=0x1201Counts the number of times when the AD Ingress was bypassed and an incoming transaction was bypassed directly across the BGF and into the qfclk domainunc_r3_rxr_bypassed.aduncore interconnectIngress Bypassedevent=0x12,umask=101Counts the number of times when the Ingress was bypassed and an incoming transaction was bypassed directly across the BGF and into the qfclk domainunc_r3_rxr_occupancy.drsuncore interconnectIngress Occupancy Accumulator; DRSevent=0x13,umask=801Accumulates the occupancy of a given QPI Ingress queue in each cycle.  This tracks one of the three ring Ingress buffers.  
This can be used with the QPI Ingress Not Empty event to calculate average occupancy or the QPI Ingress Allocations event in order to calculate average queuing latency.; DRS Ingress Queueunc_r3_rxr_occupancy.homuncore interconnectIngress Occupancy Accumulator; HOMevent=0x13,umask=101Accumulates the occupancy of a given QPI Ingress queue in each cycle.  This tracks one of the three ring Ingress buffers.  This can be used with the QPI Ingress Not Empty event to calculate average occupancy or the QPI Ingress Allocations event in order to calculate average queuing latency.; HOM Ingress Queueunc_r3_rxr_occupancy.ncbuncore interconnectIngress Occupancy Accumulator; NCBevent=0x13,umask=0x1001Accumulates the occupancy of a given QPI Ingress queue in each cycle.  This tracks one of the three ring Ingress buffers.  This can be used with the QPI Ingress Not Empty event to calculate average occupancy or the QPI Ingress Allocations event in order to calculate average queuing latency.; NCB Ingress Queueunc_r3_rxr_occupancy.ncsuncore interconnectIngress Occupancy Accumulator; NCSevent=0x13,umask=0x2001Accumulates the occupancy of a given QPI Ingress queue in each cycle.  This tracks one of the three ring Ingress buffers.  This can be used with the QPI Ingress Not Empty event to calculate average occupancy or the QPI Ingress Allocations event in order to calculate average queuing latency.; NCS Ingress Queueunc_r3_rxr_occupancy.ndruncore interconnectIngress Occupancy Accumulator; NDRevent=0x13,umask=401Accumulates the occupancy of a given QPI Ingress queue in each cycle.  This tracks one of the three ring Ingress buffers.  This can be used with the QPI Ingress Not Empty event to calculate average occupancy or the QPI Ingress Allocations event in order to calculate average queuing latency.; NDR Ingress Queueunc_r3_rxr_occupancy.snpuncore interconnectIngress Occupancy Accumulator; SNPevent=0x13,umask=201Accumulates the occupancy of a given QPI Ingress queue in each cycle.  This tracks one of the three ring Ingress buffers.  This can be used with the QPI Ingress Not Empty event to calculate average occupancy or the QPI Ingress Allocations event in order to calculate average queuing latency.; SNP Ingress Queueunc_r3_txr_nack_ccw.aduncore interconnectEgress NACK; AK CCWevent=0x28,umask=101BL CounterClockwise Egress Queueunc_r3_txr_nack_ccw.akuncore interconnectEgress NACK; BL CWevent=0x28,umask=201AD Clockwise Egress Queueunc_r3_txr_nack_ccw.bluncore interconnectEgress NACK; BL CCWevent=0x28,umask=401AD CounterClockwise Egress Queueunc_r3_txr_nack_cw.aduncore interconnectEgress NACK; AD CWevent=0x26,umask=101AD Clockwise Egress Queueunc_r3_txr_nack_cw.akuncore interconnectEgress NACK; AD CCWevent=0x26,umask=201AD CounterClockwise Egress Queueunc_r3_txr_nack_cw.bluncore interconnectEgress NACK; AK CWevent=0x26,umask=401BL Clockwise Egress Queueunc_r3_vna_credits_acquireduncore interconnectVNA credit Acquisitionsevent=0x3301Number of QPI VNA Credit acquisitions.  This event can be used in conjunction with the VNA In-Use Accumulator to calculate the average lifetime of a credit holder.  VNA credits are used by all message classes in order to communicate across QPI.  If a packet is unable to acquire credits, it will then attempt to use credits from the VN0 pool.  Note that a single packet may require multiple flit buffers (i.e. when data is being transferred).  Therefore, this event will increment by the number of credits acquired in each cycle.  Filtering based on message class is not provided.  
One can count the number of packets transferred in a given message class using a qfclk eventunc_r3_vna_credit_cycles_outuncore interconnectCycles with no VNA credits availableevent=0x3101Number of QPI uclk cycles when the transmitter has no VNA credits available and therefore cannot send any requests on this channel.  Note that this does not mean that no flits can be transmitted, as those holding VN0 credits will still (potentially) be able to transmit.  Generally it is the goal of the uncore that VNA credits should not run out, as this can substantially throttle back useful QPI bandwidthunc_r3_vna_credit_cycles_useduncore interconnectCycles with 1 or more VNA credits in useevent=0x3201Number of QPI uclk cycles with one or more VNA credits in use.  This event can be used in conjunction with the VNA In-Use Accumulator to calculate the average number of used VNA creditsunc_u_clockticksuncore interconnectevent=001unc_u_event_msg.int_priouncore interconnectVLW Receivedevent=0x42,umask=0x1001Virtual Logical Wire (legacy) messages were received from Uncore.   Specify the thread to filter on using NCUPMONCTRLGLCTR.ThreadIDunc_u_event_msg.ipi_rcvduncore interconnectVLW Receivedevent=0x42,umask=401Virtual Logical Wire (legacy) messages were received from Uncore.   Specify the thread to filter on using NCUPMONCTRLGLCTR.ThreadIDunc_u_event_msg.msi_rcvduncore interconnectVLW Receivedevent=0x42,umask=201Virtual Logical Wire (legacy) messages were received from Uncore.   Specify the thread to filter on using NCUPMONCTRLGLCTR.ThreadIDunc_u_event_msg.vlw_rcvduncore interconnectVLW Receivedevent=0x42,umask=101Virtual Logical Wire (legacy) messages were received from Uncore.   Specify the thread to filter on using NCUPMONCTRLGLCTR.ThreadIDunc_u_racu_requestsuncore interconnectRACU Requestevent=0x4601unc_r2_iio_credits_reject.drsuncore ioR2PCIe IIO Failed to Acquire a Credit; DRSevent=0x34,umask=801Counts the number of times that a request pending in the BL Ingress attempted to acquire either an NCB or NCS credit to transmit into the IIO, but was rejected because no credits were available.  NCB, or non-coherent bypass messages are used to transmit data without coherency (and are common).  NCS is used for reads to PCIe (and should be used sparingly).; Credits to the IIO for the DRS message classunc_r2_ring_ad_used.ccwuncore ioR2 AD Ring in Use; Counterclockwiseevent=7,umask=0xcc01Counts the number of cycles that the AD ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stopunc_r2_ring_ad_used.ccw_vr0_evenuncore ioR2 AD Ring in Use; Counterclockwise and Even on VRing 0event=7,umask=401Counts the number of cycles that the AD ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.; Filters for the Counterclockwise and Even ring polarity on Virtual Ring 0unc_r2_ring_ad_used.ccw_vr0_odduncore ioR2 AD Ring in Use; Counterclockwise and Odd on VRing 0event=7,umask=801Counts the number of cycles that the AD ring is being used at this ring stop.  
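Tying the VNA credit events above together, here is a hedged sketch of the derived metrics; the VNA In-Use Accumulator mentioned in the descriptions is not listed in this excerpt, so its reading below is an assumed input, as is the uclk cycle count used as the denominator:

# Hypothetical counter readings over one sampling window.
vna_acquired    = 500_000     # UNC_R3_VNA_CREDITS_ACQUIRED (event 0x33)
vna_cycles_used = 2_000_000   # UNC_R3_VNA_CREDIT_CYCLES_USED (event 0x32)
vna_cycles_out  = 10_000      # UNC_R3_VNA_CREDIT_CYCLES_OUT (event 0x31)
vna_in_use_sum  = 6_000_000   # assumed VNA In-Use Accumulator reading
uclk_cycles     = 4_000_000   # assumed total uclk cycles in the window

avg_credit_lifetime = vna_in_use_sum / vna_acquired     # cycles per credit
avg_credits_in_use  = vna_in_use_sum / vna_cycles_used  # credits, when nonzero
starved_fraction    = vna_cycles_out / uclk_cycles      # throttling indicator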
This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.; Filters for the Counterclockwise and Odd ring polarity on Virtual Ring 0unc_r2_ring_ad_used.ccw_vr1_evenuncore ioR2 AD Ring in Use; Counterclockwise and Even on VRing 1event=7,umask=0x4001Counts the number of cycles that the AD ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.; Filters for the Counterclockwise and Even ring polarity on Virtual Ring 1unc_r2_ring_ad_used.ccw_vr1_odduncore ioR2 AD Ring in Use; Counterclockwise and Odd on VRing 1event=7,umask=0x8001Counts the number of cycles that the AD ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.; Filters for the Counterclockwise and Odd ring polarity on Virtual Ring 1unc_r2_ring_ad_used.cwuncore ioR2 AD Ring in Use; Clockwiseevent=7,umask=0x3301Counts the number of cycles that the AD ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stopunc_r2_ring_ad_used.cw_vr0_evenuncore ioR2 AD Ring in Use; Clockwise and Even on VRing 0event=7,umask=101Counts the number of cycles that the AD ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.; Filters for the Clockwise and Even ring polarity on Virtual Ring 0unc_r2_ring_ad_used.cw_vr0_odduncore ioR2 AD Ring in Use; Clockwise and Odd on VRing 0event=7,umask=201Counts the number of cycles that the AD ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.; Filters for the Clockwise and Odd ring polarity on Virtual Ring 0unc_r2_ring_ad_used.cw_vr1_evenuncore ioR2 AD Ring in Use; Clockwise and Even on VRing 1event=7,umask=0x1001Counts the number of cycles that the AD ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.; Filters for the Clockwise and Even ring polarity on Virtual Ring 1unc_r2_ring_ad_used.cw_vr1_odduncore ioR2 AD Ring in Use; Clockwise and Odd on VRing 1event=7,umask=0x2001Counts the number of cycles that the AD ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.; Filters for the Clockwise and Odd ring polarity on Virtual Ring 1unc_r2_ring_ak_used.ccwuncore ioR2 AK Ring in Use; Counterclockwiseevent=8,umask=0xcc01Counts the number of cycles that the AK ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stopunc_r2_ring_ak_used.ccw_vr0_evenuncore ioR2 AK Ring in Use; Counterclockwise and Even on VRing 0event=8,umask=401Counts the number of cycles that the AK ring is being used at this ring stop.  
This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.; Filters for the Counterclockwise and Even ring polarity on Virtual Ring 0unc_r2_ring_ak_used.ccw_vr0_odduncore ioR2 AK Ring in Use; Counterclockwise and Odd on VRing 0event=8,umask=801Counts the number of cycles that the AK ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.; Filters for the Counterclockwise and Odd ring polarity on Virtual Ring 0unc_r2_ring_ak_used.ccw_vr1_evenuncore ioR2 AK Ring in Use; Counterclockwise and Even on VRing 1event=8,umask=0x4001Counts the number of cycles that the AK ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.; Filters for the Counterclockwise and Even ring polarity on Virtual Ring 1unc_r2_ring_ak_used.ccw_vr1_odduncore ioR2 AK Ring in Use; Counterclockwise and Odd on VRing 1event=8,umask=0x8001Counts the number of cycles that the AK ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.; Filters for the Counterclockwise and Odd ring polarity on Virtual Ring 1unc_r2_ring_ak_used.cwuncore ioR2 AK Ring in Use; Clockwiseevent=8,umask=0x3301Counts the number of cycles that the AK ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stopunc_r2_ring_ak_used.cw_vr0_evenuncore ioR2 AK Ring in Use; Clockwise and Even on VRing 0event=8,umask=101Counts the number of cycles that the AK ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.; Filters for the Clockwise and Even ring polarity on Virtual Ring 0unc_r2_ring_ak_used.cw_vr0_odduncore ioR2 AK Ring in Use; Clockwise and Odd on VRing 0event=8,umask=201Counts the number of cycles that the AK ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.; Filters for the Clockwise and Odd ring polarity on Virtual Ring 0unc_r2_ring_ak_used.cw_vr1_evenuncore ioR2 AK Ring in Use; Clockwise and Even on VRing 1event=8,umask=0x1001Counts the number of cycles that the AK ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.; Filters for the Clockwise and Even ring polarity on Virtual Ring 1unc_r2_ring_ak_used.cw_vr1_odduncore ioR2 AK Ring in Use; Clockwise and Odd on VRing 1event=8,umask=0x2001Counts the number of cycles that the AK ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.; Filters for the Clockwise and Odd ring polarity on Virtual Ring 1unc_r2_ring_bl_used.ccwuncore ioR2 BL Ring in Use; Counterclockwiseevent=9,umask=0xcc01Counts the number of cycles that the BL ring is being used at this ring stop.  
This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stopunc_r2_ring_bl_used.ccw_vr0_evenuncore ioR2 BL Ring in Use; Counterclockwise and Even on VRing 0event=9,umask=401Counts the number of cycles that the BL ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.; Filters for the Counterclockwise and Even ring polarity on Virtual Ring 0unc_r2_ring_bl_used.ccw_vr0_odduncore ioR2 BL Ring in Use; Counterclockwise and Odd on VRing 0event=9,umask=801Counts the number of cycles that the BL ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.; Filters for the Counterclockwise and Odd ring polarity on Virtual Ring 0unc_r2_ring_bl_used.ccw_vr1_evenuncore ioR2 BL Ring in Use; Counterclockwise and Even on VRing 1event=9,umask=0x4001Counts the number of cycles that the BL ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.; Filters for the Counterclockwise and Even ring polarity on Virtual Ring 1unc_r2_ring_bl_used.ccw_vr1_odduncore ioR2 BL Ring in Use; Counterclockwise and Odd on VRing 1event=9,umask=0x8001Counts the number of cycles that the BL ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.; Filters for the Counterclockwise and Odd ring polarity on Virtual Ring 1unc_r2_ring_bl_used.cwuncore ioR2 BL Ring in Use; Clockwiseevent=9,umask=0x3301Counts the number of cycles that the BL ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stopunc_r2_ring_bl_used.cw_vr0_evenuncore ioR2 BL Ring in Use; Clockwise and Even on VRing 0event=9,umask=101Counts the number of cycles that the BL ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.; Filters for the Clockwise and Even ring polarity on Virtual Ring 0unc_r2_ring_bl_used.cw_vr0_odduncore ioR2 BL Ring in Use; Clockwise and Odd on VRing 0event=9,umask=201Counts the number of cycles that the BL ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.; Filters for the Clockwise and Odd ring polarity on Virtual Ring 0unc_r2_ring_bl_used.cw_vr1_evenuncore ioR2 BL Ring in Use; Clockwise and Even on VRing 1event=9,umask=0x1001Counts the number of cycles that the BL ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.; Filters for the Clockwise and Even ring polarity on Virtual Ring 1unc_r2_ring_bl_used.cw_vr1_odduncore ioR2 BL Ring in Use; Clockwise and Odd on VRing 1event=9,umask=0x2001Counts the number of cycles that the BL ring is being used at this ring stop.  
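A note on the umasks in the ring events above: the combined direction masks are simply the OR of the per-polarity bits, which a quick Python check confirms:

# Per-polarity umask bits taken from the R2 ring listings above.
CW_VR0_EVEN, CW_VR0_ODD, CW_VR1_EVEN, CW_VR1_ODD     = 0x01, 0x02, 0x10, 0x20
CCW_VR0_EVEN, CCW_VR0_ODD, CCW_VR1_EVEN, CCW_VR1_ODD = 0x04, 0x08, 0x40, 0x80

assert CW_VR0_EVEN | CW_VR0_ODD | CW_VR1_EVEN | CW_VR1_ODD == 0x33      # .CW
assert CCW_VR0_EVEN | CCW_VR0_ODD | CCW_VR1_EVEN | CCW_VR1_ODD == 0xcc  # .CCW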
This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.; Filters for the Clockwise and Odd ring polarity on Virtual Ring 1unc_r2_ring_iv_used.anyuncore ioR2 IV Ring in Use; Anyevent=0xa,umask=0xff01Counts the number of cycles that the IV ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sent, but does not include when packets are being sunk into the ring stop.  The IV ring is unidirectional.  Whether UP or DN is used is dependent on the system programming.  Therefore, one should generally set both the UP and DN bits for a given polarity (or both) at a given time.; Filters any polarityunc_r2_ring_iv_used.ccwuncore ioR2 IV Ring in Use; Counterclockwiseevent=0xa,umask=0xcc01Counts the number of cycles that the IV ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sent, but does not include when packets are being sunk into the ring stop.  The IV ring is unidirectional.  Whether UP or DN is used is dependent on the system programming.  Therefore, one should generally set both the UP and DN bits for a given polarity (or both) at a given time.; Filters for Counterclockwise polarityunc_r2_ring_iv_used.cwuncore ioR2 IV Ring in Use; Clockwiseevent=0xa,umask=0x3301Counts the number of cycles that the IV ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sent, but does not include when packets are being sunk into the ring stop.  The IV ring is unidirectional.  Whether UP or DN is used is dependent on the system programming.  Therefore, one should generally set both the UP and DN bits for a given polarity (or both) at a given time.; Filters for Clockwise polarityunc_r2_rxr_ak_bouncesuncore ioAK Ingress Bouncedevent=0x1201Counts the number of times when a request destined for the AK ingress bouncedunc_r2_rxr_ak_bounces.ccwuncore ioAK Ingress Bounced; Counterclockwiseevent=0x12,umask=201Counts the number of times when a request destined for the AK ingress bouncedunc_r2_rxr_ak_bounces.cwuncore ioAK Ingress Bounced; Clockwiseevent=0x12,umask=101Counts the number of times when a request destined for the AK ingress bouncedunc_r2_txr_nack_ccw.aduncore ioEgress CCW NACK; AD CCWevent=0x28,umask=101AD CounterClockwise Egress Queueunc_r2_txr_nack_ccw.akuncore ioEgress CCW NACK; AK CCWevent=0x28,umask=201AK CounterClockwise Egress Queueunc_r2_txr_nack_ccw.bluncore ioEgress CCW NACK; BL CCWevent=0x28,umask=401BL CounterClockwise Egress Queueunc_r2_txr_nack_cw.aduncore ioEgress CW NACK; AD CWevent=0x26,umask=101AD Clockwise Egress Queueunc_r2_txr_nack_cw.akuncore ioEgress CW NACK; AK CWevent=0x26,umask=201AK Clockwise Egress Queueunc_r2_txr_nack_cw.bluncore ioEgress CW NACK; BL CWevent=0x26,umask=401BL Clockwise Egress Queueunc_m_power_pcu_throttlinguncore memoryevent=0x4201unc_m_rd_cas_rank0.bank0uncore memoryRD_CAS Access to Rank 0; Bank 0event=0xb0,umask=101unc_m_rd_cas_rank0.bank1uncore memoryRD_CAS Access to Rank 0; Bank 1event=0xb0,umask=201unc_m_rd_cas_rank0.bank2uncore memoryRD_CAS Access to Rank 0; Bank 2event=0xb0,umask=401unc_m_rd_cas_rank0.bank3uncore memoryRD_CAS Access to Rank 0; Bank 3event=0xb0,umask=801unc_m_rd_cas_rank0.bank4uncore memoryRD_CAS Access to Rank 0; Bank 4event=0xb0,umask=0x1001unc_m_rd_cas_rank0.bank5uncore memoryRD_CAS Access to Rank 0; Bank 5event=0xb0,umask=0x2001unc_m_rd_cas_rank0.bank6uncore memoryRD_CAS Access to Rank 0; 
Bank 6event=0xb0,umask=0x4001unc_m_rd_cas_rank0.bank7uncore memoryRD_CAS Access to Rank 0; Bank 7event=0xb0,umask=0x8001unc_m_rd_cas_rank1.bank0uncore memoryRD_CAS Access to Rank 1; Bank 0event=0xb1,umask=101unc_m_rd_cas_rank1.bank1uncore memoryRD_CAS Access to Rank 1; Bank 1event=0xb1,umask=201unc_m_rd_cas_rank1.bank2uncore memoryRD_CAS Access to Rank 1; Bank 2event=0xb1,umask=401unc_m_rd_cas_rank1.bank3uncore memoryRD_CAS Access to Rank 1; Bank 3event=0xb1,umask=801unc_m_rd_cas_rank1.bank4uncore memoryRD_CAS Access to Rank 1; Bank 4event=0xb1,umask=0x1001unc_m_rd_cas_rank1.bank5uncore memoryRD_CAS Access to Rank 1; Bank 5event=0xb1,umask=0x2001unc_m_rd_cas_rank1.bank6uncore memoryRD_CAS Access to Rank 1; Bank 6event=0xb1,umask=0x4001unc_m_rd_cas_rank1.bank7uncore memoryRD_CAS Access to Rank 1; Bank 7event=0xb1,umask=0x8001unc_m_rd_cas_rank2.bank0uncore memoryRD_CAS Access to Rank 2; Bank 0event=0xb2,umask=101unc_m_rd_cas_rank2.bank1uncore memoryRD_CAS Access to Rank 2; Bank 1event=0xb2,umask=201unc_m_rd_cas_rank2.bank2uncore memoryRD_CAS Access to Rank 2; Bank 2event=0xb2,umask=401unc_m_rd_cas_rank2.bank3uncore memoryRD_CAS Access to Rank 2; Bank 3event=0xb2,umask=801unc_m_rd_cas_rank2.bank4uncore memoryRD_CAS Access to Rank 2; Bank 4event=0xb2,umask=0x1001unc_m_rd_cas_rank2.bank5uncore memoryRD_CAS Access to Rank 2; Bank 5event=0xb2,umask=0x2001unc_m_rd_cas_rank2.bank6uncore memoryRD_CAS Access to Rank 2; Bank 6event=0xb2,umask=0x4001unc_m_rd_cas_rank2.bank7uncore memoryRD_CAS Access to Rank 2; Bank 7event=0xb2,umask=0x8001unc_m_rd_cas_rank3.bank0uncore memoryRD_CAS Access to Rank 3; Bank 0event=0xb3,umask=101unc_m_rd_cas_rank3.bank1uncore memoryRD_CAS Access to Rank 3; Bank 1event=0xb3,umask=201unc_m_rd_cas_rank3.bank2uncore memoryRD_CAS Access to Rank 3; Bank 2event=0xb3,umask=401unc_m_rd_cas_rank3.bank3uncore memoryRD_CAS Access to Rank 3; Bank 3event=0xb3,umask=801unc_m_rd_cas_rank3.bank4uncore memoryRD_CAS Access to Rank 3; Bank 4event=0xb3,umask=0x1001unc_m_rd_cas_rank3.bank5uncore memoryRD_CAS Access to Rank 3; Bank 5event=0xb3,umask=0x2001unc_m_rd_cas_rank3.bank6uncore memoryRD_CAS Access to Rank 3; Bank 6event=0xb3,umask=0x4001unc_m_rd_cas_rank3.bank7uncore memoryRD_CAS Access to Rank 3; Bank 7event=0xb3,umask=0x8001unc_m_rd_cas_rank4.bank0uncore memoryRD_CAS Access to Rank 4; Bank 0event=0xb4,umask=101unc_m_rd_cas_rank4.bank1uncore memoryRD_CAS Access to Rank 4; Bank 1event=0xb4,umask=201unc_m_rd_cas_rank4.bank2uncore memoryRD_CAS Access to Rank 4; Bank 2event=0xb4,umask=401unc_m_rd_cas_rank4.bank3uncore memoryRD_CAS Access to Rank 4; Bank 3event=0xb4,umask=801unc_m_rd_cas_rank4.bank4uncore memoryRD_CAS Access to Rank 4; Bank 4event=0xb4,umask=0x1001unc_m_rd_cas_rank4.bank5uncore memoryRD_CAS Access to Rank 4; Bank 5event=0xb4,umask=0x2001unc_m_rd_cas_rank4.bank6uncore memoryRD_CAS Access to Rank 4; Bank 6event=0xb4,umask=0x4001unc_m_rd_cas_rank4.bank7uncore memoryRD_CAS Access to Rank 4; Bank 7event=0xb4,umask=0x8001unc_m_rd_cas_rank5.bank0uncore memoryRD_CAS Access to Rank 5; Bank 0event=0xb5,umask=101unc_m_rd_cas_rank5.bank1uncore memoryRD_CAS Access to Rank 5; Bank 1event=0xb5,umask=201unc_m_rd_cas_rank5.bank2uncore memoryRD_CAS Access to Rank 5; Bank 2event=0xb5,umask=401unc_m_rd_cas_rank5.bank3uncore memoryRD_CAS Access to Rank 5; Bank 3event=0xb5,umask=801unc_m_rd_cas_rank5.bank4uncore memoryRD_CAS Access to Rank 5; Bank 4event=0xb5,umask=0x1001unc_m_rd_cas_rank5.bank5uncore memoryRD_CAS Access to Rank 5; Bank 5event=0xb5,umask=0x2001unc_m_rd_cas_rank5.bank6uncore 
memoryRD_CAS Access to Rank 5; Bank 6event=0xb5,umask=0x4001unc_m_rd_cas_rank5.bank7uncore memoryRD_CAS Access to Rank 5; Bank 7event=0xb5,umask=0x8001unc_m_rd_cas_rank6.bank0uncore memoryRD_CAS Access to Rank 6; Bank 0event=0xb6,umask=101unc_m_rd_cas_rank6.bank1uncore memoryRD_CAS Access to Rank 6; Bank 1event=0xb6,umask=201unc_m_rd_cas_rank6.bank2uncore memoryRD_CAS Access to Rank 6; Bank 2event=0xb6,umask=401unc_m_rd_cas_rank6.bank3uncore memoryRD_CAS Access to Rank 6; Bank 3event=0xb6,umask=801unc_m_rd_cas_rank6.bank4uncore memoryRD_CAS Access to Rank 6; Bank 4event=0xb6,umask=0x1001unc_m_rd_cas_rank6.bank5uncore memoryRD_CAS Access to Rank 6; Bank 5event=0xb6,umask=0x2001unc_m_rd_cas_rank6.bank6uncore memoryRD_CAS Access to Rank 6; Bank 6event=0xb6,umask=0x4001unc_m_rd_cas_rank6.bank7uncore memoryRD_CAS Access to Rank 6; Bank 7event=0xb6,umask=0x8001unc_m_rd_cas_rank7.bank0uncore memoryRD_CAS Access to Rank 7; Bank 0event=0xb7,umask=101unc_m_rd_cas_rank7.bank1uncore memoryRD_CAS Access to Rank 7; Bank 1event=0xb7,umask=201unc_m_rd_cas_rank7.bank2uncore memoryRD_CAS Access to Rank 7; Bank 2event=0xb7,umask=401unc_m_rd_cas_rank7.bank3uncore memoryRD_CAS Access to Rank 7; Bank 3event=0xb7,umask=801unc_m_rd_cas_rank7.bank4uncore memoryRD_CAS Access to Rank 7; Bank 4event=0xb7,umask=0x1001unc_m_rd_cas_rank7.bank5uncore memoryRD_CAS Access to Rank 7; Bank 5event=0xb7,umask=0x2001unc_m_rd_cas_rank7.bank6uncore memoryRD_CAS Access to Rank 7; Bank 6event=0xb7,umask=0x4001unc_m_rd_cas_rank7.bank7uncore memoryRD_CAS Access to Rank 7; Bank 7event=0xb7,umask=0x8001unc_m_wpq_insertsuncore memoryWrite Pending Queue Allocationsevent=0x2001Counts the number of allocations into the Write Pending Queue.  This can then be used to calculate the average queuing latency (in conjunction with the WPQ occupancy count).  The WPQ is used to schedule write out to the memory controller and to track the writes.  Requests allocate into the WPQ soon after they enter the memory controller, and need credits for an entry in this buffer before being sent from the HA to the iMC.  They deallocate after being issued to DRAM.  
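As the WPQ description suggests, the allocation count pairs with a WPQ occupancy accumulator (referenced above but not listed in this excerpt) to estimate average write queuing latency; a minimal sketch with hypothetical values and an assumed DRAM clock:

# Hypothetical iMC readings over one window.
wpq_inserts       = 1_000_000   # UNC_M_WPQ_INSERTS (event 0x20 above)
wpq_occupancy_sum = 40_000_000  # assumed WPQ occupancy accumulator reading
dclk_hz           = 800e6       # assumed DRAM clock, for converting to time

avg_wpq_latency_cycles = wpq_occupancy_sum / wpq_inserts         # 40 cycles
avg_wpq_latency_ns     = avg_wpq_latency_cycles / dclk_hz * 1e9  # 50 ns

The RD_CAS tables above (and the WR_CAS tables that follow) also show a regular encoding: reads use event 0xb0 + rank, writes use event 0xb8 + rank, and in both cases umask = 1 << bank.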
Write requests themselves are able to complete (from the perspective of the rest of the system) as soon as they have posted to the iMCunc_m_wr_cas_rank0.bank0uncore memoryWR_CAS Access to Rank 0; Bank 0event=0xb8,umask=101unc_m_wr_cas_rank0.bank1uncore memoryWR_CAS Access to Rank 0; Bank 1event=0xb8,umask=201unc_m_wr_cas_rank0.bank2uncore memoryWR_CAS Access to Rank 0; Bank 2event=0xb8,umask=401unc_m_wr_cas_rank0.bank3uncore memoryWR_CAS Access to Rank 0; Bank 3event=0xb8,umask=801unc_m_wr_cas_rank0.bank4uncore memoryWR_CAS Access to Rank 0; Bank 4event=0xb8,umask=0x1001unc_m_wr_cas_rank0.bank5uncore memoryWR_CAS Access to Rank 0; Bank 5event=0xb8,umask=0x2001unc_m_wr_cas_rank0.bank6uncore memoryWR_CAS Access to Rank 0; Bank 6event=0xb8,umask=0x4001unc_m_wr_cas_rank0.bank7uncore memoryWR_CAS Access to Rank 0; Bank 7event=0xb8,umask=0x8001unc_m_wr_cas_rank1.bank0uncore memoryWR_CAS Access to Rank 1; Bank 0event=0xb9,umask=101unc_m_wr_cas_rank1.bank1uncore memoryWR_CAS Access to Rank 1; Bank 1event=0xb9,umask=201unc_m_wr_cas_rank1.bank2uncore memoryWR_CAS Access to Rank 1; Bank 2event=0xb9,umask=401unc_m_wr_cas_rank1.bank3uncore memoryWR_CAS Access to Rank 1; Bank 3event=0xb9,umask=801unc_m_wr_cas_rank1.bank4uncore memoryWR_CAS Access to Rank 1; Bank 4event=0xb9,umask=0x1001unc_m_wr_cas_rank1.bank5uncore memoryWR_CAS Access to Rank 1; Bank 5event=0xb9,umask=0x2001unc_m_wr_cas_rank1.bank6uncore memoryWR_CAS Access to Rank 1; Bank 6event=0xb9,umask=0x4001unc_m_wr_cas_rank1.bank7uncore memoryWR_CAS Access to Rank 1; Bank 7event=0xb9,umask=0x8001unc_m_wr_cas_rank2.bank0uncore memoryWR_CAS Access to Rank 2; Bank 0event=0xba,umask=101unc_m_wr_cas_rank2.bank1uncore memoryWR_CAS Access to Rank 2; Bank 1event=0xba,umask=201unc_m_wr_cas_rank2.bank2uncore memoryWR_CAS Access to Rank 2; Bank 2event=0xba,umask=401unc_m_wr_cas_rank2.bank3uncore memoryWR_CAS Access to Rank 2; Bank 3event=0xba,umask=801unc_m_wr_cas_rank2.bank4uncore memoryWR_CAS Access to Rank 2; Bank 4event=0xba,umask=0x1001unc_m_wr_cas_rank2.bank5uncore memoryWR_CAS Access to Rank 2; Bank 5event=0xba,umask=0x2001unc_m_wr_cas_rank2.bank6uncore memoryWR_CAS Access to Rank 2; Bank 6event=0xba,umask=0x4001unc_m_wr_cas_rank2.bank7uncore memoryWR_CAS Access to Rank 2; Bank 7event=0xba,umask=0x8001unc_m_wr_cas_rank3.bank0uncore memoryWR_CAS Access to Rank 3; Bank 0event=0xbb,umask=101unc_m_wr_cas_rank3.bank1uncore memoryWR_CAS Access to Rank 3; Bank 1event=0xbb,umask=201unc_m_wr_cas_rank3.bank2uncore memoryWR_CAS Access to Rank 3; Bank 2event=0xbb,umask=401unc_m_wr_cas_rank3.bank3uncore memoryWR_CAS Access to Rank 3; Bank 3event=0xbb,umask=801unc_m_wr_cas_rank3.bank4uncore memoryWR_CAS Access to Rank 3; Bank 4event=0xbb,umask=0x1001unc_m_wr_cas_rank3.bank5uncore memoryWR_CAS Access to Rank 3; Bank 5event=0xbb,umask=0x2001unc_m_wr_cas_rank3.bank6uncore memoryWR_CAS Access to Rank 3; Bank 6event=0xbb,umask=0x4001unc_m_wr_cas_rank3.bank7uncore memoryWR_CAS Access to Rank 3; Bank 7event=0xbb,umask=0x8001unc_m_wr_cas_rank4.bank0uncore memoryWR_CAS Access to Rank 4; Bank 0event=0xbc,umask=101unc_m_wr_cas_rank4.bank1uncore memoryWR_CAS Access to Rank 4; Bank 1event=0xbc,umask=201unc_m_wr_cas_rank4.bank2uncore memoryWR_CAS Access to Rank 4; Bank 2event=0xbc,umask=401unc_m_wr_cas_rank4.bank3uncore memoryWR_CAS Access to Rank 4; Bank 3event=0xbc,umask=801unc_m_wr_cas_rank4.bank4uncore memoryWR_CAS Access to Rank 4; Bank 4event=0xbc,umask=0x1001unc_m_wr_cas_rank4.bank5uncore memoryWR_CAS Access to Rank 4; Bank 
5event=0xbc,umask=0x2001unc_m_wr_cas_rank4.bank6uncore memoryWR_CAS Access to Rank 4; Bank 6event=0xbc,umask=0x4001unc_m_wr_cas_rank4.bank7uncore memoryWR_CAS Access to Rank 4; Bank 7event=0xbc,umask=0x8001unc_m_wr_cas_rank5.bank0uncore memoryWR_CAS Access to Rank 5; Bank 0event=0xbd,umask=101unc_m_wr_cas_rank5.bank1uncore memoryWR_CAS Access to Rank 5; Bank 1event=0xbd,umask=201unc_m_wr_cas_rank5.bank2uncore memoryWR_CAS Access to Rank 5; Bank 2event=0xbd,umask=401unc_m_wr_cas_rank5.bank3uncore memoryWR_CAS Access to Rank 5; Bank 3event=0xbd,umask=801unc_m_wr_cas_rank5.bank4uncore memoryWR_CAS Access to Rank 5; Bank 4event=0xbd,umask=0x1001unc_m_wr_cas_rank5.bank5uncore memoryWR_CAS Access to Rank 5; Bank 5event=0xbd,umask=0x2001unc_m_wr_cas_rank5.bank6uncore memoryWR_CAS Access to Rank 5; Bank 6event=0xbd,umask=0x4001unc_m_wr_cas_rank5.bank7uncore memoryWR_CAS Access to Rank 5; Bank 7event=0xbd,umask=0x8001unc_m_wr_cas_rank6.bank0uncore memoryWR_CAS Access to Rank 6; Bank 0event=0xbe,umask=101unc_m_wr_cas_rank6.bank1uncore memoryWR_CAS Access to Rank 6; Bank 1event=0xbe,umask=201unc_m_wr_cas_rank6.bank2uncore memoryWR_CAS Access to Rank 6; Bank 2event=0xbe,umask=401unc_m_wr_cas_rank6.bank3uncore memoryWR_CAS Access to Rank 6; Bank 3event=0xbe,umask=801unc_m_wr_cas_rank6.bank4uncore memoryWR_CAS Access to Rank 6; Bank 4event=0xbe,umask=0x1001unc_m_wr_cas_rank6.bank5uncore memoryWR_CAS Access to Rank 6; Bank 5event=0xbe,umask=0x2001unc_m_wr_cas_rank6.bank6uncore memoryWR_CAS Access to Rank 6; Bank 6event=0xbe,umask=0x4001unc_m_wr_cas_rank6.bank7uncore memoryWR_CAS Access to Rank 6; Bank 7event=0xbe,umask=0x8001unc_m_wr_cas_rank7.bank0uncore memoryWR_CAS Access to Rank 7; Bank 0event=0xbf,umask=101unc_m_wr_cas_rank7.bank1uncore memoryWR_CAS Access to Rank 7; Bank 1event=0xbf,umask=201unc_m_wr_cas_rank7.bank2uncore memoryWR_CAS Access to Rank 7; Bank 2event=0xbf,umask=401unc_m_wr_cas_rank7.bank3uncore memoryWR_CAS Access to Rank 7; Bank 3event=0xbf,umask=801unc_m_wr_cas_rank7.bank4uncore memoryWR_CAS Access to Rank 7; Bank 4event=0xbf,umask=0x1001unc_m_wr_cas_rank7.bank5uncore memoryWR_CAS Access to Rank 7; Bank 5event=0xbf,umask=0x2001unc_m_wr_cas_rank7.bank6uncore memoryWR_CAS Access to Rank 7; Bank 6event=0xbf,umask=0x4001unc_m_wr_cas_rank7.bank7uncore memoryWR_CAS Access to Rank 7; Bank 7event=0xbf,umask=0x8001unc_p_core0_transition_cyclesuncore powerCore 0 C State Transition Cyclesevent=0x7001Number of cycles spent performing core C state transitions.  There is one event per coreunc_p_core10_transition_cyclesuncore powerCore 10 C State Transition Cyclesevent=0x7a01Number of cycles spent performing core C state transitions.  There is one event per coreunc_p_core11_transition_cyclesuncore powerCore 11 C State Transition Cyclesevent=0x7b01Number of cycles spent performing core C state transitions.  There is one event per coreunc_p_core12_transition_cyclesuncore powerCore 12 C State Transition Cyclesevent=0x7c01Number of cycles spent performing core C state transitions.  There is one event per coreunc_p_core13_transition_cyclesuncore powerCore 13 C State Transition Cyclesevent=0x7d01Number of cycles spent performing core C state transitions.  There is one event per coreunc_p_core14_transition_cyclesuncore powerCore 14 C State Transition Cyclesevent=0x7e01Number of cycles spent performing core C state transitions.  There is one event per coreunc_p_core1_transition_cyclesuncore powerCore 1 C State Transition Cyclesevent=0x7101Number of cycles spent performing core C state transitions.  
unc_p_delayed_c_state_abort_coreN (uncore power) [event=0x17+N]: Deep C State Rejection - Core N. Number of times that a deep C state was requested, but the delayed C state algorithm rejected the deep sleep state. In other words, a wake event occurred before expiration of the timer that would have triggered the transition into the deeper C state. There is one event per core, N = 0..14 (event codes 0x17..0x25).
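The rejection family's event codes are contiguous, one per core; a tiny illustrative helper (the name is made up here):

    # Illustrative only: event codes for the Deep C State Rejection
    # family run contiguously from 0x17 (core 0) to 0x25 (core 14).
    def delayed_c_state_abort_event(core: int) -> str:
        assert 0 <= core <= 14
        return f"event={hex(0x17 + core)}"

    print(delayed_c_state_abort_event(9))  # event=0x20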
unc_p_demotions_coreN (uncore power): Core N C State Demotions. Counts the number of times a configurable core had a C-state demotion. One event per core: event=0x1e+N for cores N = 0..7 (codes 0x1e..0x25) and event=0x40+(N-8) for cores N = 8..14 (codes 0x40..0x46); see the sketch after this group.
unc_p_freq_max_current_cycles (uncore power) [event=7]: Current Strongest Upper Limit Cycles. Counts the number of cycles when current is the upper limit on frequency.
unc_p_freq_min_io_p_cycles (uncore power) [event=0x61]: IO P Limit Strongest Lower Limit Cycles. Counts the number of cycles when IO P Limit is preventing the frequency from dropping lower. This algorithm monitors the needs of the IO subsystem on both local and remote sockets and maintains a frequency high enough to sustain good IO bandwidth. This is necessary when all the IA cores on a socket are idle but a user still wants to maintain high IO bandwidth.
unc_p_freq_min_perf_p_cycles (uncore power) [event=0x62]: Perf P Limit Strongest Lower Limit Cycles. Counts the number of cycles when Perf P Limit is preventing the frequency from dropping lower. Perf P Limit is an algorithm that takes input from remote sockets when determining whether a socket should drop its frequency; this is largely to minimize increases in snoop and remote read latencies.
unc_p_freq_trans_cycles (uncore power) [event=0x60]: Cycles spent changing Frequency. Counts the number of cycles when the system is changing frequency. This cannot be filtered by thread ID. One can also use it with the occupancy counter that monitors the number of threads in C0 to estimate the performance impact that frequency transitions had on the system.
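Unlike the two families above, the demotion family's event codes are split across two ranges. An illustrative helper (name invented for this note):

    # Illustrative only: demotion event codes are split across two
    # ranges, 0x1e..0x25 for cores 0..7 and 0x40..0x46 for cores 8..14.
    def demotions_event(core: int) -> str:
        assert 0 <= core <= 14
        code = 0x1E + core if core <= 7 else 0x40 + (core - 8)
        return f"event={hex(code)}"

    print(demotions_event(7))  # event=0x25
    print(demotions_event(8))  # event=0x40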
unc_p_pkg_c_exit_latency (uncore power) [event=0x26]: Package C State Exit Latency. Counts the number of cycles that the package is transitioning from package C2 to C3.
unc_p_pkg_c_exit_latency_sel (uncore power) [event=0x26]: Package C State Exit Latency. Counts the number of cycles that the package is transitioning from package C2 to C3.
unc_p_pkg_c_state_residency_c0_cycles (uncore power) [event=0x2a]: Package C State Residency - C0. Counts the number of cycles that the package is in C0.
unc_p_pkg_c_state_residency_c2_cycles (uncore power) [event=0x2b]: Package C State Residency - C2. Counts the number of cycles that the package is in C2.
unc_p_pkg_c_state_residency_c3_cycles (uncore power) [event=0x2c]: Package C State Residency - C3. Counts the number of cycles that the package is in C3.
unc_p_pkg_c_state_residency_c6_cycles (uncore power) [event=0x2d]: Package C State Residency - C6. Counts the number of cycles that the package is in C6.
unc_p_total_transition_cycles (uncore power) [event=0x63]: Total Core C State Transition Cycles. Number of cycles spent performing core C state transitions across all cores.
unc_p_volt_trans_cycles_change (uncore power) [event=3]: Cycles Changing Voltage. Counts the number of cycles when the system is changing voltage. There is no filtering supported with this event. One can use it as a simple event, or use it in conjunction with the occupancy events to monitor the number of cores or threads that were impacted by the transition. This event is calculated by OR'ing together the increasing and decreasing events.
unc_p_volt_trans_cycles_decrease (uncore power) [event=2]: Cycles Decreasing Voltage. Counts the number of cycles when the system is decreasing voltage. There is no filtering supported with this event. One can use it as a simple event, or in conjunction with the occupancy events as above.
unc_p_volt_trans_cycles_increase (uncore power) [event=1]: Cycles Increasing Voltage. Counts the number of cycles when the system is increasing voltage; same notes as the decreasing-voltage event.
unc_p_vr_hot_cycles (uncore power) [event=0x32]: VR Hot.
dtlb_load_misses.demand_ld_walk_completed (virtual memory) [event=8,period=100003,umask=0x82]: Demand load miss in all translation lookaside buffer (TLB) levels that causes a page walk which completes, for any page size.
dtlb_load_misses.demand_ld_walk_duration (virtual memory) [event=8,period=2000003,umask=0x84]: Cycles the page miss handler (PMH) is busy with a demand-load page walk.
l1d.allocated_in_m (cache) [event=0x51,period=2000003,umask=2]: Allocated L1D data cache lines in M state.
l1d.all_m_replacement (cache) [event=0x51,period=2000003,umask=8]: Cache lines in M state evicted out of L1D due to Snoop HitM or dirty line replacement.
l1d.eviction (cache) [event=0x51,period=2000003,umask=4]: L1D data cache lines in M state evicted due to replacement.
l1d.replacement (cache) [event=0x51,period=2000003,umask=1]: L1D data line replacements. This event counts L1D data line replacements; replacements occur when a new line is brought into the cache, causing eviction of a line loaded earlier.
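The four package C-state residency counters above all count cycles over the same interval, so relative residency can be derived by simple division. A hedged sketch, assuming the four states fully partition the sampled interval (the helper name is invented here):

    # Illustrative only: residency fractions from the four package
    # C-state residency counters sampled over the same interval.
    # Assumes C0 + C2 + C3 + C6 covers the whole interval.
    def residency_fractions(c0, c2, c3, c6):
        total = c0 + c2 + c3 + c6
        if total == 0:
            return {}
        return {state: n / total for state, n in
                {"C0": c0, "C2": c2, "C3": c3, "C6": c6}.items()}

    print(residency_fractions(5_000_000, 1_000_000, 500_000, 3_500_000))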
l1d_blocks.bank_conflict_cycles (cache) [event=0xbf,cmask=1,period=100003,umask=5]: Cycles when dispatched loads are cancelled due to L1D bank conflicts with other load ports.
l1d_pend_miss.pending (cache) [event=0x48,period=2000003,umask=1]: L1D miss outstanding duration in cycles.
l2_l1d_wb_rqsts.hit_e (cache) [event=0x28,period=200003,umask=4]: Not rejected writebacks from L1D to L2 cache lines in E state.
l2_l1d_wb_rqsts.hit_m (cache) [event=0x28,period=200003,umask=8]: Not rejected writebacks from L1D to L2 cache lines in M state.
l2_l1d_wb_rqsts.hit_s (cache) [event=0x28,period=200003,umask=2]: Not rejected writebacks from L1D to L2 cache lines in S state.
l2_l1d_wb_rqsts.miss (cache) [event=0x28,period=200003,umask=1]: Count of modified lines evicted from L1 that missed L2 (non-rejected WBs from the DCU).
l2_lines_in.e (cache) [event=0xf1,period=100003,umask=4]: L2 cache lines in E state filling L2.
l2_lines_in.i (cache) [event=0xf1,period=100003,umask=1]: L2 cache lines in I state filling L2.
l2_lines_in.s (cache) [event=0xf1,period=100003,umask=2]: L2 cache lines in S state filling L2.
l2_lines_out.demand_clean (cache) [event=0xf2,period=100003,umask=1]: Clean L2 cache lines evicted by demand.
l2_lines_out.demand_dirty (cache) [event=0xf2,period=100003,umask=2]: Dirty L2 cache lines evicted by demand.
l2_lines_out.dirty_all (cache) [event=0xf2,period=100003,umask=0xa]: Dirty L2 cache lines filling the L2.
l2_lines_out.pf_clean (cache) [event=0xf2,period=100003,umask=4]: Clean L2 cache lines evicted by L2 prefetch.
l2_lines_out.pf_dirty (cache) [event=0xf2,period=100003,umask=8]: Dirty L2 cache lines evicted by L2 prefetch.
l2_rqsts.all_code_rd (cache) [event=0x24,period=200003,umask=0x30]: L2 code requests.
l2_rqsts.all_demand_data_rd (cache) [event=0x24,period=200003,umask=3]: Demand Data Read requests.
l2_rqsts.all_pf (cache) [event=0x24,period=200003,umask=0xc0]: Requests from L2 hardware prefetchers.
l2_rqsts.all_rfo (cache) [event=0x24,period=200003,umask=0xc]: RFO requests to L2 cache.
l2_rqsts.code_rd_hit (cache) [event=0x24,period=200003,umask=0x10]: L2 cache hits when fetching instructions (code reads).
l2_rqsts.code_rd_miss (cache) [event=0x24,period=200003,umask=0x20]: L2 cache misses when fetching instructions.
l2_rqsts.demand_data_rd_hit (cache) [event=0x24,period=200003,umask=1]: Demand Data Read requests that hit L2 cache.
l2_rqsts.pf_hit (cache) [event=0x24,period=200003,umask=0x40]: Requests from the L2 hardware prefetchers that hit L2 cache.
l2_rqsts.pf_miss (cache) [event=0x24,period=200003,umask=0x80]: Requests from the L2 hardware prefetchers that miss L2 cache.
l2_rqsts.rfo_hit (cache) [event=0x24,period=200003,umask=4]: RFO requests that hit L2 cache.
l2_rqsts.rfo_miss (cache) [event=0x24,period=200003,umask=8]: RFO requests that miss L2 cache.
l2_store_lock_rqsts.all (cache) [event=0x27,period=200003,umask=0xf]: RFOs that access cache lines in any state.
l2_store_lock_rqsts.hit_e (cache) [event=0x27,period=200003,umask=4]: RFOs that hit cache lines in E state.
l2_store_lock_rqsts.hit_m (cache) [event=0x27,period=200003,umask=8]: RFOs that hit cache lines in M state.
l2_store_lock_rqsts.miss (cache) [event=0x27,period=200003,umask=1]: RFOs that miss cache lines.
l2_trans.all_pf (cache) [event=0xf0,period=200003,umask=8]: L2 or LLC HW prefetches that access L2 cache.
l2_trans.all_requests (cache) [event=0xf0,period=200003,umask=0x80]: Transactions accessing the L2 pipe.
l2_trans.code_rd (cache) [event=0xf0,period=200003,umask=4]: L2 cache accesses when fetching instructions.
l2_trans.demand_data_rd (cache) [event=0xf0,period=200003,umask=1]: Demand Data Read requests that access L2 cache.
l2_trans.l1d_wb (cache) [event=0xf0,period=200003,umask=0x10]: L1D writebacks that access L2 cache.
l2_trans.l2_fill (cache) [event=0xf0,period=200003,umask=0x20]: L2 fill requests that access L2 cache.
l2_trans.l2_wb (cache) [event=0xf0,period=200003,umask=0x40]: L2 writebacks that access L2 cache.
l2_trans.rfo (cache) [event=0xf0,period=200003,umask=2]: RFO requests that access L2 cache.
lock_cycles.cache_lock_duration (cache) [event=0x63,period=2000003,umask=2]: Cycles when L1D is locked.
longest_lat_cache.miss (cache) [event=0x2e,period=100003,umask=0x41]: Core-originated cacheable demand requests that missed LLC.
longest_lat_cache.reference (cache) [event=0x2e,period=100003,umask=0x4f]: Core-originated cacheable demand requests that refer to LLC.
mem_load_uops_llc_hit_retired.xsnp_hit (cache) [event=0xd2,period=20011,umask=2]: Retired load uops whose data sources were LLC and cross-core snoop hits in an on-pkg core cache. This event counts retired load uops that hit in the last-level cache (L3) and were found in a non-modified state in a neighboring core's private cache (same package). Since the last-level cache is inclusive, hits to the L3 may require snooping the private L2 caches of any cores on the same socket that have the line. In this case a snoop was required, and another L2 had the line in a non-modified state.
mem_load_uops_llc_hit_retired.xsnp_hitm (cache) [event=0xd2,period=20011,umask=4]: Retired load uops whose data sources were HitM responses from shared LLC. As above, but another L2 had the line in a modified state, so the line had to be invalidated in that L2 cache and transferred to the requesting L2.
mem_load_uops_llc_hit_retired.xsnp_miss (cache) [event=0xd2,period=20011,umask=1]: Retired load uops whose data sources were LLC hits where the cross-core snoop missed in the on-pkg core cache.
mem_load_uops_llc_hit_retired.xsnp_none (cache) [event=0xd2,period=100003,umask=8]: Retired load uops whose data sources were hits in LLC without snoops required.
mem_load_uops_llc_miss_retired.local_dram (cache) [event=0xd3,period=100007,umask=1]: Data from local DRAM, either snoop not needed or snoop miss (RspI).
mem_load_uops_llc_miss_retired.remote_dram (cache) [event=0xd3,period=100007,umask=4]: Data from remote DRAM, either snoop not needed or snoop miss (RspI).
mem_load_uops_retired.llc_hit (cache) [event=0xd1,period=50021,umask=4]: Retired load uops whose data sources were data hits in LLC without snoops required. This event counts retired load uops that hit in the last-level (L3) cache without snoops required.
mem_load_uops_retired.llc_miss (cache) [event=0xd1,period=100007,umask=0x20]: Miss in last-level (L3) cache; excludes unknown data source.
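Taken together, the last two counters above give an LLC hit rate for retired load uops. A minimal sketch (the helper name is invented here), assuming both counts come from the same run:

    # Illustrative only: LLC hit rate from mem_load_uops_retired.llc_hit
    # and mem_load_uops_retired.llc_miss sampled over the same interval.
    def llc_hit_rate(llc_hit: int, llc_miss: int) -> float:
        total = llc_hit + llc_miss
        return llc_hit / total if total else 0.0

    print(f"{llc_hit_rate(96_000, 4_000):.1%}")  # 96.0%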
mem_uops_retired.all_loads (cache) [event=0xd0,period=2000003,umask=0x81]: All retired load uops (Precise event). This event counts the number of load uops retired.
mem_uops_retired.all_stores (cache) [event=0xd0,period=2000003,umask=0x82]: All retired store uops (Precise event). This event counts the number of store uops retired.
mem_uops_retired.lock_loads (cache) [event=0xd0,period=100007,umask=0x21]: Retired load uops with locked access (Precise event).
mem_uops_retired.split_loads (cache) [event=0xd0,period=100003,umask=0x41]: Retired load uops that split across a cacheline boundary (Precise event). This event counts line-split load uops retired to the architected path; a line split crosses a 64B cache line, which includes a page split (4K).
mem_uops_retired.split_stores (cache) [event=0xd0,period=100003,umask=0x42]: Retired store uops that split across a cacheline boundary (Precise event); same line-split definition as for loads.
mem_uops_retired.stlb_miss_loads (cache) [event=0xd0,period=100003,umask=0x11]: Retired load uops that miss the STLB (Precise event).
mem_uops_retired.stlb_miss_stores (cache) [event=0xd0,period=100003,umask=0x12]: Retired store uops that miss the STLB (Precise event).
offcore_requests.all_data_rd (cache) [event=0xb0,period=100003,umask=8]: Demand and prefetch data reads.
offcore_requests.demand_code_rd (cache) [event=0xb0,period=100003,umask=2]: Cacheable and non-cacheable code read requests.
offcore_requests.demand_data_rd (cache) [event=0xb0,period=100003,umask=1]: Demand Data Read requests sent to uncore.
offcore_requests.demand_rfo (cache) [event=0xb0,period=100003,umask=4]: Demand RFO requests, including regular RFOs, locks, ItoM.
offcore_requests_buffer.sq_full (cache) [event=0xb2,period=2000003,umask=1]: Cases when the offcore requests buffer cannot take more entries for the core.
offcore_requests_outstanding.all_data_rd (cache) [event=0x60,period=2000003,umask=8]: Offcore outstanding cacheable Core Data Read transactions in the SuperQueue (SQ), the queue to uncore.
offcore_requests_outstanding.cycles_with_data_rd (cache) [event=0x60,cmask=1,period=2000003,umask=8]: Cycles when offcore outstanding cacheable Core Data Read transactions are present in the SQ.
offcore_requests_outstanding.cycles_with_demand_data_rd (cache) [event=0x60,cmask=1,period=2000003,umask=1]: Cycles when offcore outstanding Demand Data Read transactions are present in the SQ.
offcore_requests_outstanding.cycles_with_demand_rfo (cache) [event=0x60,cmask=1,period=2000003,umask=4]: Offcore outstanding demand RFO read transactions in the SQ, every cycle.
offcore_requests_outstanding.demand_data_rd (cache) [event=0x60,period=2000003,umask=1]: Offcore outstanding Demand Data Read transactions in the uncore queue.
offcore_requests_outstanding.demand_data_rd_c6 (cache) [event=0x60,cmask=6,period=2000003,umask=1]: Cycles with at least 6 offcore outstanding Demand Data Read transactions in the uncore queue.
offcore_requests_outstanding.demand_rfo (cache) [event=0x60,period=2000003,umask=4]: Offcore outstanding RFO store transactions in the SQ.
fp_assist.any (floating point) [event=0xca,cmask=1,period=100003,umask=0x1e]: Cycles with any input/output SSE or FP assist.
fp_assist.simd_input (floating point) [event=0xca,period=100003,umask=0x10]: Number of SIMD FP assists due to input values.
fp_assist.simd_output (floating point) [event=0xca,period=100003,umask=8]: Number of SIMD FP assists due to output values.
fp_assist.x87_input (floating point) [event=0xca,period=100003,umask=4]: Number of X87 assists due to input values.
fp_assist.x87_output (floating point) [event=0xca,period=100003,umask=2]: Number of X87 assists due to output values.
fp_comp_ops_exe.sse_packed_double (floating point) [event=0x10,period=2000003,umask=0x10]: Number of SSE* or AVX-128 FP computational packed double-precision uops issued this cycle.
fp_comp_ops_exe.sse_packed_single (floating point) [event=0x10,period=2000003,umask=0x40]: Number of SSE* or AVX-128 FP computational packed single-precision uops issued this cycle.
fp_comp_ops_exe.sse_scalar_double (floating point) [event=0x10,period=2000003,umask=0x80]: Number of SSE* or AVX-128 FP computational scalar double-precision uops issued this cycle.
fp_comp_ops_exe.sse_scalar_single (floating point) [event=0x10,period=2000003,umask=0x20]: Number of SSE* or AVX-128 FP computational scalar single-precision uops issued this cycle.
fp_comp_ops_exe.x87 (floating point) [event=0x10,period=2000003,umask=1]: Number of FP computational uops executed this cycle: FADD, FSUB, FCOM, FMUL, integer MUL and IMUL, FDIV, FPREM, FSQRT, integer DIV, and IDIV. This event does not distinguish an FADD used in the middle of a transcendental flow from a standalone FADD.
other_assists.avx_store (floating point) [event=0xc1,period=100003,umask=8]: Number of GSSE memory assists for stores. A GSSE microcode assist is invoked whenever the hardware is unable to properly handle GSSE-256b operations.
simd_fp_256.packed_double (floating point) [event=0x11,period=2000003,umask=2]: Number of AVX-256 computational FP double-precision uops issued this cycle.
simd_fp_256.packed_single (floating point) [event=0x11,period=2000003,umask=1]: Number of GSSE-256 computational FP single-precision uops issued this cycle.
dsb2mite_switches.count (frontend) [event=0xab,period=2000003,umask=1]: Decode Stream Buffer (DSB)-to-MITE switches.
dsb2mite_switches.penalty_cycles (frontend) [event=0xab,period=2000003,umask=2]: DSB-to-MITE switch true penalty cycles. This event counts the cycles attributed to a switch from the Decoded Stream Buffer (DSB), which holds decoded instructions, to the legacy decode pipeline. It excludes cycles when the back-end cannot accept new micro-ops. The penalty for these switches is potentially several cycles of instruction starvation, where no micro-ops are delivered to the back-end.
dsb_fill.all_cancel (frontend) [event=0xac,period=2000003,umask=0xa]: Cases of cancelling a valid DSB fill for reasons other than exceeding the way limit.
dsb_fill.exceed_dsb_lines (frontend) [event=0xac,period=2000003,umask=8]: Cycles when a DSB fill encounters more than 3 DSB lines.
dsb_fill.other_cancel (frontend) [event=0xac,period=2000003,umask=2]: Cases of cancelling a valid DSB fill for reasons other than exceeding the way limit.
icache.misses (frontend) [event=0x80,period=200003,umask=2]: Instruction cache, streaming buffer and victim cache misses. This event counts the number of instruction cache, streaming buffer and victim cache misses; counting includes uncacheable accesses.
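Note how fp_assist.any's umask is simply the OR of its four component sub-events; a one-line check (illustrative constant names only):

    # Illustrative only: composite umasks are the OR of their component
    # sub-event umasks, here for the fp_assist family above.
    X87_OUTPUT, X87_INPUT, SIMD_OUTPUT, SIMD_INPUT = 0x2, 0x4, 0x8, 0x10
    ANY = X87_OUTPUT | X87_INPUT | SIMD_OUTPUT | SIMD_INPUT
    assert ANY == 0x1E  # matches fp_assist.any's umask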
idq.all_dsb_cycles_4_uops (frontend) [event=0x79,cmask=4,period=2000003,umask=0x18]: Cycles the Decode Stream Buffer (DSB) is delivering 4 uops.
idq.all_dsb_cycles_any_uops (frontend) [event=0x79,cmask=1,period=2000003,umask=0x18]: Cycles the DSB is delivering any uop.
idq.all_mite_cycles_4_uops (frontend) [event=0x79,cmask=4,period=2000003,umask=0x24]: Cycles MITE is delivering 4 uops.
idq.all_mite_cycles_any_uops (frontend) [event=0x79,cmask=1,period=2000003,umask=0x24]: Cycles MITE is delivering any uop.
idq.dsb_uops (frontend) [event=0x79,period=2000003,umask=8]: Uops delivered to the Instruction Decode Queue (IDQ) from the DSB path.
idq.empty (frontend) [event=0x79,period=2000003,umask=2]: IDQ empty cycles.
idq.mite_all_uops (frontend) [event=0x79,period=2000003,umask=0x3c]: Uops delivered to the IDQ from the MITE path.
idq.mite_uops (frontend) [event=0x79,period=2000003,umask=4]: Uops delivered to the IDQ from the MITE path.
idq.ms_cycles (frontend) [event=0x79,cmask=1,period=2000003,umask=0x30]: Cycles when uops are being delivered to the IDQ while the Microcode Sequencer (MS) is busy. This event counts cycles during which the microcode sequencer assisted the front-end in delivering uops. Microcode assists are used for complex instructions or scenarios that can't be handled by the standard decoder; using other instructions, if possible, will usually improve performance. See the Intel 64 and IA-32 Architectures Optimization Reference Manual for more information.
idq.ms_dsb_uops (frontend) [event=0x79,period=2000003,umask=0x10]: Uops initiated by the DSB that are delivered to the IDQ while the MS is busy.
idq.ms_mite_uops (frontend) [event=0x79,period=2000003,umask=0x20]: Uops initiated by MITE and delivered to the IDQ while the MS is busy.
idq.ms_uops (frontend) [event=0x79,period=2000003,umask=0x30]: Uops delivered to the IDQ while the MS is busy.
idq_uops_not_delivered.core (frontend) [event=0x9c,period=2000003,umask=1]: Uops not delivered to the Resource Allocation Table (RAT) per thread when the back end of the machine is not stalled. This event counts the number of uops not delivered to the back end per cycle, per thread, when the back end was not stalled. In the ideal case 4 uops can be delivered each cycle. The event counts the undelivered uops, so if 3 were delivered in one cycle, the counter is incremented by 1 for that cycle (4 - 3). If the back end is stalled, the count is not incremented even when uops were not delivered, because the back end would not have been able to accept them. This event is used in determining the front-end bound category of the top-down pipeline slots characterization.
idq_uops_not_delivered.cycles_ge_1_uop_deliv.core (frontend) [event=0x9c,cmask=4,inv=1,period=2000003,umask=1]: Cycles when 1 or more uops were delivered by the front end.
machine_clears.memory_ordering (memory) [event=0xc3,period=100003,umask=2]: Counts the number of machine clears due to memory order conflicts. This event counts the number of memory ordering machine clears detected. Memory ordering machine clears can result from memory disambiguation, external snoops, or cross SMT-HW-thread snoops (stores) hitting load buffers. Machine clears can have a significant performance impact if they are happening frequently.
mem_trans_retired.load_latency_gt_N (memory) [event=0xcd,umask=1,ldlat=N]: Loads with latency above N (Must be precise). One event per threshold, with the sampling period scaled to the expected event rate:
    gt_4: ldlat=0x4, period=100003
    gt_8: ldlat=0x8, period=50021
    gt_16: ldlat=0x10, period=20011
    gt_32: ldlat=0x20, period=100007
    gt_64: ldlat=0x40, period=2003
    gt_128: ldlat=0x80, period=1009
    gt_256: ldlat=0x100, period=503
    gt_512: ldlat=0x200, period=101
mem_trans_retired.precise_store (memory) [event=0xcd,period=2000003,umask=2]: Sample stores and collect the precise store operation via PEBS record; PMC3 only. (Precise Event - PEBS) (Must be precise).
misalign_mem_ref.loads (memory) [event=5,period=2000003,umask=1]: Speculative cache line split load uops dispatched to L1 cache.
misalign_mem_ref.stores (memory) [event=5,period=2000003,umask=2]: Speculative cache line split STA uops dispatched to L1 cache.
offcore_response.all_demand_mlc_pref_reads.llc_miss.any_response (memory) [event=0xb7,period=100003,umask=1,offcore_rsp=0x3fffc20077]: Counts all LLC misses for all demand and L2 prefetches; LLC prefetches are excluded.
offcore_response.all_demand_mlc_pref_reads.llc_miss.local_dram (memory) [event=0xb7,period=100003,umask=1,offcore_rsp=0x600400077]: Counts all local DRAM accesses for all demand and L2 prefetches; LLC prefetches are excluded.
offcore_response.all_demand_mlc_pref_reads.llc_miss.remote_hitm_hit_forward (memory) [event=0xb7,period=100003,umask=1,offcore_rsp=0x187fc20077]: Counts all remote cache-to-cache transfers (includes HITM and HIT-Forward) for all demand and L2 prefetches; LLC prefetches are excluded.
offcore_response.pf_llc_data_rd.llc_miss.any_response (memory) [event=0xb7,period=100003,umask=1,offcore_rsp=0x3fffc20080]: Counts prefetch data reads (that bring data to LLC only) that hit in the LLC where the snoops sent to sibling cores return a clean response.
cpl_cycles.ring0 (other) [event=0x5c,period=2000003,umask=1]: Unhalted core cycles when the thread is in ring 0.
cpl_cycles.ring0_trans (other) [event=0x5c,cmask=1,edge=1,period=100007,umask=1]: Number of intervals between processor halts while the thread is in ring 0.
cpl_cycles.ring123 (other) [event=0x5c,period=2000003,umask=2]: Unhalted core cycles when the thread is in ring 1, 2, or 3.
hw_pre_req.dl1_miss (other) [event=0x4e,period=2000003,umask=2]: Hardware prefetch requests that miss the L1D cache. This accounts for both the L1 streamer and the IP-based (IPP) HW prefetchers. A request is counted each time it accesses the cache and misses it, including cases where a block is applicable or where it hits the fill buffer.
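The idq_uops_not_delivered.core description above implies the standard top-down "frontend bound" fraction: undelivered uops divided by total issue slots (4 per cycle). A hedged sketch (the helper name is invented for this note):

    # Illustrative only: frontend-bound fraction implied by the
    # idq_uops_not_delivered.core description (4 issue slots per cycle).
    def frontend_bound(undelivered_uops: int, cycles: int, width: int = 4) -> float:
        slots = width * cycles
        return undelivered_uops / slots if slots else 0.0

    print(f"{frontend_bound(1_200_000, 1_000_000):.1%}")  # 30.0%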
insts_written_to_iq.insts (other) [event=0x17,period=2000003,umask=1]: Valid instructions written to the IQ per cycle.
lock_cycles.split_lock_uc_lock_duration (other) [event=0x63,period=2000003,umask=1]: Cycles when L1 and L2 are locked due to UC or split lock.
agu_bypass_cancel.count (pipeline) [event=0xb6,period=100003,umask=1]: Counts executed load operations with all of the following traits: 1. addressing of the form [base + offset]; 2. the offset is between 1 and 2047; 3. the address specified in the base register is in one page and the address [base+offset] is in another.
arith.fpu_div (pipeline) [event=0x14,cmask=1,edge=1,period=100003,umask=1]: Divide operations executed. This event counts the number of divide operations executed.
arith.fpu_div_active (pipeline) [event=0x14,period=2000003,umask=1]: Cycles when the divider is busy executing divide operations.
br_inst_exec.all_branches (pipeline) [event=0x88,period=200003,umask=0xff]: Speculative and retired branches.
br_inst_retired.all_branches (pipeline) [event=0xc4,period=400009]: All (macro) branch instructions retired.
br_inst_retired.all_branches_pebs (pipeline) [event=0xc4,period=400009,umask=4]: All (macro) branch instructions retired. (Precise Event - PEBS) (Must be precise).
br_inst_retired.far_branch (pipeline) [event=0xc4,period=100007,umask=0x40]: Far branch instructions retired.
br_inst_retired.not_taken (pipeline) [event=0xc4,period=400009,umask=0x10]: Not taken branch instructions retired.
br_misp_exec.all_branches (pipeline) [event=0x89,period=200003,umask=0xff]: Speculative and retired mispredicted macro conditional branches.
br_misp_exec.all_direct_near_call (pipeline) [event=0x89,period=200003,umask=0xd0]: Speculative and retired mispredicted direct near calls.
br_misp_exec.taken_direct_near_call (pipeline) [event=0x89,period=200003,umask=0x90]: Taken speculative and retired mispredicted direct near calls.
br_misp_retired.all_branches (pipeline) [event=0xc5,period=400009]: All mispredicted macro branch instructions retired.
br_misp_retired.all_branches_pebs (pipeline) [event=0xc5,period=400009,umask=4]: Mispredicted macro branch instructions retired. (Precise Event - PEBS) (Must be precise).
br_misp_retired.near_call (pipeline) [event=0xc5,period=100007,umask=2]: Direct and indirect mispredicted near call instructions retired (Precise event).
br_misp_retired.not_taken (pipeline) [event=0xc5,period=400009,umask=0x10]: Mispredicted not taken branch instructions retired (Precise event).
br_misp_retired.taken (pipeline) [event=0xc5,period=400009,umask=0x20]: Mispredicted taken branch instructions retired (Precise event).
cpu_clk_thread_unhalted.ref_xclk (pipeline) [event=0x3c,period=2000003,umask=1]: Reference cycles when the thread is unhalted (counts at 100 MHz rate).
cpu_clk_thread_unhalted.ref_xclk_any (pipeline) [event=0x3c,any=1,period=2000003,umask=1]: Reference cycles when at least one thread on the physical core is unhalted (counts at 100 MHz rate).
cpu_clk_unhalted.ref_tsc (pipeline) [event=0,period=2000003,umask=3]: Reference cycles when the core is not in halt state. This event counts the number of reference cycles when the core is not in a halt state; the core enters the halt state when it is running the HLT instruction or the MWAIT instruction. This event is not affected by core frequency changes (for example, P states, TM2 transitions) but has the same incrementing frequency as the time stamp counter, so it can approximate elapsed time while the core was not in a halt state. It has a constant ratio with the CPU_CLK_UNHALTED.REF_XCLK event and is counted on a dedicated fixed counter, leaving the four (eight when Hyperthreading is disabled) programmable counters available for other events.
cpu_clk_unhalted.ref_xclk (pipeline) [event=0x3c,period=2000003,umask=1]: Reference cycles when the thread is unhalted (counts at 100 MHz rate).
cpu_clk_unhalted.ref_xclk_any (pipeline) [event=0x3c,any=1,period=2000003,umask=1]: Reference cycles when at least one thread on the physical core is unhalted (counts at 100 MHz rate).
cpu_clk_unhalted.thread_p (pipeline) [event=0x3c,period=2000003]: Thread cycles when the thread is not in halt state.
cycle_activity.cycles_l1d_pending (pipeline) [event=0xa3,cmask=2,period=2000003,umask=2]: Each cycle there was a miss-pending demand load for this thread, increment by 1. Note this is in the DCU and connected to umask 1; a miss-pending demand load should be deduced by OR-ing the increment bits of DCACHE_MISS_PEND.PENDING.
cycle_activity.cycles_l2_pending (pipeline) [event=0xa3,cmask=1,period=2000003,umask=1]: Each cycle there was an MLC-miss pending demand load for this thread (i.e. a non-completed valid SQ entry allocated for a demand load and waiting for uncore), increment by 1. Note this is in the MLC and connected to umask 0.
cycle_activity.cycles_no_dispatch (pipeline) [event=0xa3,cmask=4,period=2000003,umask=4]: Each cycle there was no dispatch for this thread, increment by 1. Note this is connected to umask 2; no dispatch can be deduced from the UOPS_EXECUTED event.
cycle_activity.stalls_l1d_pending (pipeline) [event=0xa3,cmask=6,period=2000003,umask=6]: Each cycle there was a miss-pending demand load for this thread and no uops dispatched, increment by 1. Note this is in the DCU and connected to umasks 1 and 2.
cycle_activity.stalls_l2_pending (pipeline) [event=0xa3,cmask=5,period=2000003,umask=5]: Each cycle there was an MLC-miss pending demand load and no uops dispatched on this thread, increment by 1. Note this is in the MLC and connected to umasks 0 and 2.
ild_stall.iq_full (pipeline) [event=0x87,period=2000003,umask=4]: Stall cycles because the IQ is full.
inst_retired.any (pipeline) [event=0xc0,period=2000003]: Instructions retired from execution. This event counts the number of instructions retired from execution. For instructions that consist of multiple micro-ops, this event counts the retirement of the last micro-op of the instruction. Counting continues during hardware interrupts, traps, and inside interrupt handlers.
inst_retired.any_p (pipeline) [event=0xc0,period=2000003]: Number of instructions retired; general counter (architectural event).
inst_retired.prec_dist (pipeline) [event=0xc0,period=2000003,umask=1]: Instructions retired. (Precise Event - PEBS) (Must be precise).
int_misc.rat_stall_cycles (pipeline) [event=0xd,period=2000003,umask=0x40]: Cycles when a Resource Allocation Table (RAT) external stall is sent to the Instruction Decode Queue (IDQ) for the thread.
int_misc.recovery_cycles (pipeline) [event=0xd,cmask=1,period=2000003,umask=3]: Number of cycles waiting for the checkpoints in the RAT to be recovered after a nuke, for all cases except JEClear (e.g. whenever a ucode assist is needed, such as an SSE exception, memory disambiguation, etc.).
int_misc.recovery_stalls_count (pipeline) [event=0xd,cmask=1,edge=1,period=2000003,umask=3]: Number of occurrences of waiting for the checkpoints in the RAT to be recovered after a nuke, for all cases except JEClear (same cases as recovery_cycles).
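Because ref_tsc ticks at the TSC rate regardless of P-state changes, wall-clock time in the non-halted state can be approximated by dividing by the TSC frequency. A hedged sketch; the 2.7 GHz value is an assumption for illustration, not a property of any particular part:

    # Illustrative only: approximate non-halted elapsed time from
    # cpu_clk_unhalted.ref_tsc, assuming a nominal TSC frequency.
    TSC_HZ = 2_700_000_000  # assumed for this sketch

    def elapsed_seconds(ref_tsc_cycles: int, tsc_hz: int = TSC_HZ) -> float:
        return ref_tsc_cycles / tsc_hz

    print(f"{elapsed_seconds(5_400_000_000):.2f} s")  # 2.00 s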
ld_blocks.all_block (pipeline) [event=3,period=100003,umask=0x10]: Number of cases where any load ends up with a valid block code written to the load buffer (including blocks due to the Memory Order Buffer (MOB), Data Cache Unit (DCU), or TLB, where the load has no DCU miss).
ld_blocks.data_unknown (pipeline) [event=3,period=100003,umask=1]: Loads delayed due to SB blocks: preceding store operations with known addresses but unknown data.
ld_blocks.store_forward (pipeline) [event=3,period=100003,umask=2]: Cases when loads get a true Block-on-Store blocking code preventing store forwarding. This event counts loads that followed a store to the same address, where the data could not be forwarded inside the pipeline from the store to the load. The most common reason store forwarding is blocked is a load whose address range overlaps with a preceding smaller uncompleted store; see the table of unsupported store forwards in the Intel 64 and IA-32 Architectures Optimization Reference Manual. The penalty for blocked store forwarding is that the load must wait for the store to complete before it can be issued.
ld_blocks_partial.address_alias (pipeline) [event=7,period=100003,umask=1]: False dependencies in the MOB due to partial compare. Aliasing occurs when a load is issued after a store and their memory addresses are offset by 4K. This event counts the number of loads that aliased with a preceding store, resulting in an extended address check in the pipeline; the enhanced address check typically has a performance penalty of 5 cycles. See the sketch after this group.
ld_blocks_partial.all_sta_block (pipeline) [event=7,period=100003,umask=8]: Counts the number of times load operations are temporarily blocked because of older stores with addresses that are not yet known; a load operation may incur more than one block of this type.
load_hit_pre.hw_pf (pipeline) [event=0x4c,period=100003,umask=2]: Non-software-prefetch load dispatches that hit a fill buffer (FB) allocated for a hardware prefetch.
load_hit_pre.sw_pf (pipeline) [event=0x4c,period=100003,umask=1]: Non-software-prefetch load dispatches that hit a fill buffer allocated for a software prefetch.
other_assists.itlb_miss_retired (pipeline) [event=0xc1,period=100003,umask=2]: Retired instructions experiencing ITLB misses.
partial_rat_stalls.flags_merge_uop (pipeline) [event=0x59,period=2000003,umask=0x20]: Increments by the number of flags-merge uops in flight each cycle.
partial_rat_stalls.flags_merge_uop_cycles (pipeline) [event=0x59,cmask=1,period=2000003,umask=0x20]: Performance-sensitive flags-merging uops added by the Sandy Bridge microarchitecture. This event counts the number of cycles spent executing performance-sensitive flags-merging uops, for example shift CL (merge_arith_flags). For more details, see the Intel 64 and IA-32 Architectures Optimization Reference Manual.
partial_rat_stalls.mul_single_uop (pipeline) [event=0x59,period=2000003,umask=0x80]: Multiply packed/scalar single-precision uops allocated.
partial_rat_stalls.slow_lea_window (pipeline) [event=0x59,period=2000003,umask=0x40]: Cycles with at least one slow LEA uop being allocated. A uop is generally considered a slow LEA if it has three sources (for example, two sources and an immediate), regardless of whether it results from an LEA instruction. Examples of slow LEA uops are uops with base, index, and offset source operands where the base is EBP/RBP/R13, or uops using RIP-relative or 16-bit addressing modes. See the Intel 64 and IA-32 Architectures Optimization Reference Manual for more details about slow LEA instructions.
resource_stalls.any (pipeline) [event=0xa2,period=2000003,umask=1]: Resource-related stall cycles.
resource_stalls.lb (pipeline) [event=0xa2,period=2000003,umask=2]: Counts the cycles of stall due to lack of load buffers.
resource_stalls.lb_sb (pipeline) [event=0xa2,period=2000003,umask=0xa]: Resource stalls due to load or store buffers all being in use.
resource_stalls.mem_rs (pipeline) [event=0xa2,period=2000003,umask=0xe]: Resource stalls due to memory buffers or the Reservation Station (RS) being fully utilized.
resource_stalls.ooo_rsrc (pipeline) [event=0xa2,period=2000003,umask=0xf0]: Resource stalls due to the ROB being full, FCSW, MXCSR and other.
resource_stalls.sb (pipeline) [event=0xa2,period=2000003,umask=8]: Cycles stalled due to no store buffers available (not including draining from sync).
resource_stalls2.all_fl_empty (pipeline) [event=0x5b,period=2000003,umask=0xc]: Cycles when either free list is empty.
resource_stalls2.all_prf_control (pipeline) [event=0x5b,period=2000003,umask=0xf]: Resource stalls2 control structures full for physical registers.
resource_stalls2.bob_full (pipeline) [event=0x5b,period=2000003,umask=0x40]: Cycles when the allocator is stalled because the BOB is full and a new branch needs it.
resource_stalls2.ooo_rsrc (pipeline) [event=0x5b,period=2000003,umask=0x4f]: Resource stalls with out-of-order resources full.
rob_misc_events.lbr_inserts (pipeline) [event=0xcc,period=2000003,umask=0x20]: Count cases of saving a new LBR.
rs_events.empty_cycles (pipeline) [event=0x5e,period=2000003,umask=1]: Cycles when the Reservation Station (RS) is empty for the thread.
rs_events.empty_end (pipeline) [event=0x5e,cmask=1,edge=1,inv=1,period=2000003,umask=1]: Counts ends of periods where the RS was empty; can be useful to precisely locate frontend-latency-bound issues.
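The 4K-aliasing condition behind ld_blocks_partial.address_alias is easy to state in code: a load aliases a preceding store when their addresses differ but match in the low 12 bits. A minimal sketch (the helper name is invented here):

    # Illustrative only: the 4K-aliasing condition behind
    # ld_blocks_partial.address_alias (load and earlier store whose
    # addresses are offset by a multiple of 4K).
    def may_4k_alias(store_addr: int, load_addr: int) -> bool:
        return store_addr != load_addr and (store_addr & 0xFFF) == (load_addr & 0xFFF)

    print(may_4k_alias(0x7000, 0x8000))  # True: offset by exactly 4K
    print(may_4k_alias(0x7000, 0x7040))  # False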
uops_dispatched.core (pipeline) [event=0xb1,period=2000003,umask=2]: Uops dispatched from any thread.
uops_dispatched.thread (pipeline) [event=0xb1,period=2000003,umask=1]: Uops dispatched per thread.
uops_dispatched_port.port_N (pipeline) [event=0xa1,period=2000003]: Cycles per thread when uops are dispatched to port N (ports 2 and 3 count load or STA uops). Umasks: port_0 = 0x1, port_1 = 0x2, port_2 = 0xc, port_3 = 0x30, port_4 = 0x40, port_5 = 0x80. The .port_N_core variants add any=1 to the same encodings and count cycles per core instead of per thread.
uops_issued.any (pipeline) [event=0xe,period=2000003,umask=1]: Uops that the Resource Allocation Table (RAT) issues to the Reservation Station (RS). This event counts the number of uops issued by the front end of the pipeline to the back end.
uops_retired.all (pipeline) [event=0xc2,period=2000003,umask=1]: Actually retired uops (Precise event). This event counts the number of micro-ops retired (Precise event).
uops_retired.core_stall_cycles (pipeline) [event=0xc2,cmask=1,inv=1,period=2000003,umask=1]: Cycles without actually retired uops.
uops_retired.retire_slots (pipeline) [event=0xc2,period=2000003,umask=2]: Retirement slots used (Precise event). This event counts the number of retirement slots used each cycle. There are potentially 4 slots that can be used each cycle, meaning up to 4 micro-ops or 4 instructions could retire each cycle. This event is used in determining the 'Retiring' category of the top-down pipeline slots characterization (Precise event).
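Mirroring the frontend-bound sketch earlier, retire_slots gives the top-down "retiring" fraction when divided by total slots (4 per cycle). Illustrative helper name only:

    # Illustrative only: top-down "retiring" fraction implied by the
    # uops_retired.retire_slots description (4 retirement slots/cycle).
    def retiring_fraction(retire_slots_used: int, cycles: int, width: int = 4) -> float:
        slots = width * cycles
        return retire_slots_used / slots if slots else 0.0

    print(f"{retiring_fraction(2_800_000, 1_000_000):.1%}")  # 70.0%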
unc_c_ismq_drd_miss_occ (uncore cache) [event=0x21].
unc_c_llc_lookup.* (uncore cache) [event=0x34]: Cache Lookups, by request type: .data_read [umask=3] Data Read Request; .nid [umask=0x41] RTID; .remote_snoop [umask=9] External Snoop Request; .write [umask=5] Write Requests. Shared description: counts the number of times the LLC was accessed; this includes code, data, prefetches and hints coming from L2. This has numerous filters available; note the non-standard filtering equation. This event will count requests that look up the cache multiple times with multiple increments. One must ALWAYS set filter mask bit 0 and select a state or states to match; otherwise, the event will count nothing. CBoGlCtrl[22:18] bits correspond to [FMESI] state.
unc_c_llc_victims.nid (uncore cache) [event=0x37,umask=0x40]: Lines Victimized; victimized lines that match the NID. Counts the number of lines that were victimized on a fill; this can be filtered by the state the line was in.
unc_c_misc.* (uncore cache) [event=0x39]: Miscellaneous events in the Cbo: .rfo_hit_s [umask=8] RFO HitS; .rspi_was_fse [umask=1] Silent Snoop Eviction; .wc_aliasing [umask=2] Write Combining Aliasing.
unc_c_ring_ad_used.* (uncore cache) [event=0x1b]: AD Ring In Use. unc_c_ring_ak_used.* (uncore cache) [event=0x1c]: AK Ring In Use. unc_c_ring_bl_used.* (uncore cache) [event=0x1d]: BL Ring in Use. Each family has four direction sub-events: .up_even [umask=1], .up_odd [umask=2], .down_even [umask=4], .down_odd [umask=8]. Shared description: counts the number of cycles that the given ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. There are really two rings in JKT: a clockwise ring and a counter-clockwise ring. On the left side of the ring, the 'UP' direction is on the clockwise ring and 'DN' is on the counter-clockwise ring; on the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the second half are on the right side. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD, because they are on opposite sides of the ring.
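The three ring-in-use families share one umask layout; only the event code differs per ring. A small illustrative generator (names invented for this note):

    # Illustrative only: shared umask layout across the AD/AK/BL
    # ring-in-use families listed above.
    RING_EVENT = {"AD": 0x1B, "AK": 0x1C, "BL": 0x1D}
    DIRECTION_UMASK = {"up_even": 0x1, "up_odd": 0x2,
                       "down_even": 0x4, "down_odd": 0x8}

    def ring_used(ring: str, direction: str) -> str:
        return f"event={hex(RING_EVENT[ring])},umask={hex(DIRECTION_UMASK[direction])}"

    print(ring_used("BL", "down_odd"))  # event=0x1d,umask=0x8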
This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from  the ring stop.We really have two rings in JKT -- a clockwise ring and a counter-clockwise ring.  On the left side of the ring, the 'UP' direction is on the clockwise ring and 'DN' is on the counter-clockwise ring.  On the right side of the ring, this is reversed.  The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring.  In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ringunc_c_ring_bl_used.up_odduncore cacheBL Ring in Use; Up and Oddevent=0x1d,umask=201Counts the number of cycles that the BL ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from  the ring stop.We really have two rings in JKT -- a clockwise ring and a counter-clockwise ring.  On the left side of the ring, the 'UP' direction is on the clockwise ring and 'DN' is on the counter-clockwise ring.  On the right side of the ring, this is reversed.  The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring.  In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ringunc_c_ring_bounces.ak_coreuncore cacheNumber of LLC responses that bounced on the Ring.; Acknowledgements to coreevent=5,umask=201unc_c_ring_bounces.bl_coreuncore cacheNumber of LLC responses that bounced on the Ring.; Data Responses to coreevent=5,umask=401unc_c_ring_bounces.iv_coreuncore cacheNumber of LLC responses that bounced on the Ring.; Snoops of processor's cacheevent=5,umask=801unc_c_ring_iv_used.anyuncore cacheBL Ring in Use; Anyevent=0x1e,umask=0xf01Counts the number of cycles that the IV ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.  There is only 1 IV ring in JKT.  Therefore, if one wants to monitor the 'Even' ring, they should select both UP_EVEN and DN_EVEN.  To monitor the 'Odd' ring, they should select both UP_ODD and DN_ODDunc_c_ring_sink_starved.ad_cacheuncore cacheevent=6,umask=101unc_c_ring_sink_starved.ak_coreuncore cacheevent=6,umask=201unc_c_ring_sink_starved.bl_coreuncore cacheevent=6,umask=401unc_c_ring_sink_starved.iv_coreuncore cacheevent=6,umask=801unc_c_rxr_ext_starved.ipquncore cacheIngress Arbiter Blocking Cycles; IRQevent=0x12,umask=201Counts cycles in external starvation.  This occurs when one of the ingress queues is being starved by the other queuesunc_c_rxr_ext_starved.irquncore cacheIngress Arbiter Blocking Cycles; IPQevent=0x12,umask=101Counts cycles in external starvation.  This occurs when one of the ingress queues is being starved by the other queuesunc_c_rxr_ext_starved.ismquncore cacheIngress Arbiter Blocking Cycles; ISMQevent=0x12,umask=401Counts cycles in external starvation.  This occurs when one of the ingress queues is being starved by the other queuesunc_c_rxr_ext_starved.ismq_bidsuncore cacheIngress Arbiter Blocking Cycles; ISMQ_BIDevent=0x12,umask=801Counts cycles in external starvation.  
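As a sketch of how these cycle counts are normally consumed, the snippet below estimates AD-ring utilization by dividing ring-used cycles by CBox clocks. It assumes a Linux perf build that exposes the aliases listed here, an unc_c_clockticks cycle event (an assumption; it is not part of this listing), root privileges, and system-wide counting; treat it as illustrative, not a reference tool.

#!/usr/bin/env python3
"""Rough AD-ring utilization from the CBox ring events (sketch)."""
import subprocess

EVENTS = [
    "unc_c_clockticks",            # assumed CBox cycle event (not in this listing)
    "unc_c_ring_ad_used.up_even",  # event=0x1b,umask=1
    "unc_c_ring_ad_used.up_odd",   # event=0x1b,umask=2
]

def perf_stat(events, seconds=1):
    """Run perf stat system-wide; return {event: count} from its CSV output."""
    res = subprocess.run(
        ["perf", "stat", "-a", "-x", ",", "-e", ",".join(events),
         "--", "sleep", str(seconds)],
        capture_output=True, text=True, check=True)
    counts = {}
    for line in res.stderr.splitlines():    # perf stat reports on stderr
        cols = line.split(",")
        if len(cols) >= 3 and cols[0].strip().isdigit():
            counts[cols[2]] = int(cols[0])  # fields: value, unit, event name, ...
    return counts

counts = perf_stat(EVENTS)
clk = counts.get("unc_c_clockticks", 0)
used = sum(v for k, v in counts.items() if "ring_ad_used" in k)
if clk:
    print(f"AD ring (up, even+odd) busy ~{used / clk:.1%} of CBox cycles")

Note that with -a, perf sums counts across all CBox instances, so this yields an average across ring stops rather than a per-stop figure.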
unc_c_rxr_ext_starved (uncore cache, event=0x12) -- Ingress Arbiter Blocking Cycles
  .ipq (umask=2, IPQ); .irq (umask=1, IRQ); .ismq (umask=4, ISMQ); .ismq_bids (umask=8, ISMQ_BID)
  Counts cycles in external starvation. This occurs when one of the ingress queues is being starved by the other queues.

unc_c_rxr_inserts (uncore cache, event=0x13) -- Ingress Allocations
  .irq_rejected (umask=2, IRQ Rejected); .vfifo (umask=0x10, VFIFO)
  Counts the number of allocations per cycle into the specified Ingress queue.

unc_c_rxr_int_starved (uncore cache, event=0x14) -- Ingress Internal Starvation Cycles
  .ipq (umask=4, IPQ); .irq (umask=1, IRQ); .ismq (umask=8, ISMQ)
  Counts cycles in internal starvation. This occurs when one (or more) of the entries in the ingress queue are being starved out by other entries in that queue.

unc_c_rxr_ipq_retry (uncore cache, event=0x31) -- Probe Queue Retries
  .addr_conflict (umask=4, Address Conflict); .any (umask=1, Any Reject); .full (umask=2, No Egress Credits)
  Number of times a snoop (probe) request had to retry. Filters exist to cover some of the common retry cases.

unc_c_rxr_irq_retry (uncore cache, event=0x32) -- Ingress Request Queue Rejects
  .addr_conflict (umask=4, Address Conflict); .any (umask=1, Any Reject); .full (umask=2, No Egress Credits); .qpi_credits (umask=0x10, No QPI Credits); .rtid (umask=8, No RTIDs)

unc_c_rxr_ismq_retry (uncore cache, event=0x33) -- ISMQ Retries
  .any (umask=1, Any Reject); .full (umask=2, No Egress Credits); .iio_credits (umask=0x20, No IIO Credits); .rtid (umask=8, No RTIDs)
  Number of times a transaction flowing through the ISMQ had to retry. Transactions pass through the ISMQ as responses for requests that already exist in the Cbo, for example when data is returned or when snoop responses come back from the cores.

unc_c_rxr_occupancy (uncore cache, event=0x11) -- Ingress Occupancy
  .irq_rejected (umask=2, IRQ Rejected); .vfifo (umask=0x10, VFIFO)
  Counts the number of entries in the specified Ingress queue in each cycle.

unc_c_tor_inserts (uncore cache, event=0x35) -- TOR Inserts
  .eviction (umask=4, Evictions); .miss_all (umask=0xa, Miss All); .miss_opcode (umask=3, Miss Opcode Match); .nid_all (umask=0x48, NID Matched); .nid_eviction (umask=0x44, NID Matched Evictions); .nid_miss_all (umask=0x4a, NID Matched Miss All); .nid_miss_opcode (umask=0x43, NID and Opcode Matched Miss); .nid_opcode (umask=0x41, NID and Opcode Matched); .nid_wb (umask=0x50, NID Matched Writebacks); .opcode (umask=1, Opcode Match); .wb (umask=0x10, Writebacks)
  Counts the number of entries successfully inserted into the TOR that match the qualifications specified by the subevent. There are a number of subevent 'filters', but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select 'MISS_OPC_MATCH' and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182).
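The alias names above cannot set the opcode filter, so counting the DRD-local-miss example from the description means programming the raw event together with the box filter. The sketch below builds the raw event strings for every CBox PMU the kernel exposes; it assumes an SNB-EP/JKT-style uncore driver that publishes uncore_cbox_N devices with a filter_opc format attribute, which is how Cn_MSR_PMON_BOX_FILTER.opc is reached from perf.

#!/usr/bin/env python3
"""Count TOR_INSERTS.MISS_OPCODE with the opcode filter set to DRD (sketch)."""
import glob
import os
import subprocess

# event=0x35 (TOR inserts), umask=3 (miss + opcode match), opc filter = DRD (0x182)
cboxes = sorted(glob.glob("/sys/bus/event_source/devices/uncore_cbox_*"))
events = [f"{os.path.basename(p)}/event=0x35,umask=0x3,filter_opc=0x182/"
          for p in cboxes]

# perf prints one count per CBox; their sum approximates socket-wide DRD LLC misses.
subprocess.run(["perf", "stat", "-a", "-e", ",".join(events), "--", "sleep", "1"],
               check=True)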
unc_c_tor_occupancy (uncore cache, event=0x36) -- TOR Occupancy
  .all (umask=8, Any); .eviction (umask=4, Evictions); .miss_all (umask=0xa, Miss All); .miss_opcode (umask=3, Miss Opcode Match); .nid_all (umask=0x48, NID Matched); .nid_eviction (umask=0x44, NID Matched Evictions); .nid_miss_all (umask=0x4a, NID Matched Miss All); .nid_miss_opcode (umask=0x43, NID and Opcode Matched Miss); .nid_opcode (umask=0x41, NID and Opcode Matched); .opcode (umask=1, Opcode Match)
  For each cycle, this event accumulates the number of valid entries in the TOR that match the qualifications specified by the subevent. The same filtering rules as unc_c_tor_inserts apply: subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set, e.g. 'MISS_OPC_MATCH' with Cn_MSR_PMON_BOX_FILTER.opc set to DRD (0x182) to track DRD Local Misses.

unc_c_txr_ads_used (uncore cache, event=4)

unc_c_txr_inserts (uncore cache, event=2) -- Egress Allocations
  .ad_cache (umask=1, AD - Cachebo); .ad_core (umask=0x10, AD - Corebo); .ak_cache (umask=2, AK - Cachebo); .ak_core (umask=0x20, AK - Corebo); .bl_cache (umask=4, BL - Cachebo); .bl_core (umask=0x40, BL - Corebo); .iv_cache (umask=8, IV - Cachebo)
  Number of allocations into the Cbo Egress. The Egress is used to queue up requests destined for the ring.

unc_c_txr_starved (uncore cache, event=3) -- Injection Starvation
  .ak (umask=2, Onto AK Ring); .bl (umask=4, Onto BL Ring)
  Counts injection starvation. This starvation is triggered when the Egress cannot send a transaction onto the ring for a long period of time.
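Because unc_c_tor_occupancy accumulates valid entries per cycle while unc_c_tor_inserts counts allocations, their ratio gives the average TOR residency of the matched requests (Little's law). A minimal sketch with made-up counts; the 2.7 GHz uncore clock is likewise only an assumption:

# Average TOR residency via Little's law (illustrative numbers).
tor_occupancy = 1_200_000_000  # unc_c_tor_occupancy.opcode (event=0x36,umask=1)
tor_inserts   =    15_000_000  # unc_c_tor_inserts.opcode   (event=0x35,umask=1)
uncore_ghz    = 2.7            # assumed uncore clock frequency

cycles = tor_occupancy / tor_inserts
print(f"avg TOR residency: {cycles:.1f} cycles (~{cycles / uncore_ghz:.1f} ns)")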
unc_h_bypass_imc (uncore cache, event=0x14) -- HA to iMC Bypass
  .not_taken (umask=2, Not Taken); .taken (umask=1, Taken)
  Counts the number of times the HA bypass to the iMC was attempted. This is a latency optimization for situations when there is light loading on the memory subsystem. This can be filtered by whether the bypass was taken or not.

unc_h_conflict_cycles (uncore cache, event=0xb) -- Conflict Checks
  .conflict (umask=2, Conflict Detected); .no_conflict (umask=1, No Conflict)

unc_h_directory_lookup (uncore cache, event=0xc) -- Directory Lookups
  .no_snp (umask=2, Snoop Not Needed); .snp (umask=1, Snoop Needed)
  Counts the number of transactions that looked up the directory. Can be filtered by requests that had to snoop and those that did not have to.

unc_h_directory_update (uncore cache, event=0xd) -- Directory Updates
  .clear (umask=2, Directory Clear); .set (umask=1, Directory Set)
  Counts the number of directory updates that were required. These result in writes to the memory controller. This can be filtered by directory sets and directory clears.

unc_h_requests (uncore cache, event=1) -- Read and Write Requests
  .reads (umask=3, Reads); .writes (umask=0xc, Writes)
  Counts the total number of read and write requests made into the Home Agent. Reads include all read opcodes (including RFO); writes include all writes (streaming, evictions, HitM, etc.).

unc_h_ring_ad_used (uncore cache, event=0x3e) -- HA AD Ring in Use
  .ccw_even (umask=4, Counterclockwise and Even); .ccw_odd (umask=8, Counterclockwise and Odd); .cw_even (umask=1, Clockwise and Even); .cw_odd (umask=2, Clockwise and Odd)
  Counts the number of cycles that the AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.

unc_h_ring_ak_used (uncore cache, event=0x3f) -- HA AK Ring in Use
  .ccw_even (umask=4); .ccw_odd (umask=8); .cw_even (umask=1); .cw_odd (umask=2)
  Counts the number of cycles that the AK ring is being used at this ring stop, with the same inclusion rules as above.

unc_h_ring_bl_used (uncore cache, event=0x40) -- HA BL Ring in Use
  .ccw_even (umask=4); .ccw_odd (umask=8); .cw_even (umask=1); .cw_odd (umask=2)
  Counts the number of cycles that the BL ring is being used at this ring stop, with the same inclusion rules as above.

unc_h_rpq_cycles_no_reg_credits (uncore cache, event=0x15) -- iMC RPQ Credits Empty - Regular
  .chn0 (umask=1, Channel 0); .chn1 (umask=2, Channel 1); .chn2 (umask=4, Channel 2); .chn3 (umask=8, Channel 3)
  Counts the number of cycles when there are no 'regular' credits available for posting reads from the HA into the iMC. In order to send reads into the memory controller, the HA must first acquire a credit for the iMC's RPQ (read pending queue). This queue is broken into regular credits/buffers that are used by general reads, and 'special' requests such as ISOCH reads. This count only tracks the regular credits. Common high-bandwidth workloads should be able to make use of all of the regular buffers, but it will be difficult (and uncommon) to make use of both the regular and special buffers at the same time. One can filter based on the memory controller channel; one or more channels can be tracked at a given time.
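Since unc_h_requests counts requests into the Home Agent, a common back-of-the-envelope conversion treats each request as one 64-byte line to approximate memory traffic; that overcounts any partial or non-cacheline transfers, and the counts below are made up:

# Approximate HA traffic from request counts (illustrative numbers).
reads  = 40_000_000  # unc_h_requests.reads  (event=1,umask=3) in 1 s
writes = 10_000_000  # unc_h_requests.writes (event=1,umask=0xc) in 1 s
LINE   = 64          # assumed bytes moved per request

print(f"read  ~{reads  * LINE / 1e9:.2f} GB/s")
print(f"write ~{writes * LINE / 1e9:.2f} GB/s")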
unc_h_rpq_cycles_no_spec_credits (uncore cache, event=0x16) -- iMC RPQ Credits Empty - Special
  .chn0 (umask=1, Channel 0); .chn1 (umask=2, Channel 1); .chn2 (umask=4, Channel 2); .chn3 (umask=8, Channel 3)
  Counts the number of cycles when there are no 'special' credits available for posting reads from the HA into the iMC. Same credit scheme as the regular-credit event above, but this count only tracks the 'special' credits. This statistic is generally not interesting for general IA workloads, but may be of interest for understanding the characteristics of systems using ISOCH. One can filter based on the memory controller channel; one or more channels can be tracked at a given time.

unc_h_tad_requests_g0 (uncore cache, event=0x1b) -- HA Requests to a TAD Region - Group 0
  .region0 (umask=1); .region1 (umask=2); .region2 (umask=4); .region3 (umask=8); .region4 (umask=0x10); .region5 (umask=0x20); .region6 (umask=0x40); .region7 (umask=0x80)
  Counts the number of HA requests to a given TAD region. There are up to 11 TAD (target address decode) regions in each home agent. All requests destined for the memory controller must first be decoded to determine which TAD region they are in. This event is filtered based on the TAD region ID and covers regions 0 to 7. It is useful for understanding how applications are using the memory that is spread across the different memory regions, and particularly useful for 'Monroe' systems that use the TAD to enable individual channels to enter self-refresh to save power.

unc_h_tad_requests_g1 (uncore cache, event=0x1c) -- HA Requests to a TAD Region - Group 1
  .region8 (umask=1); .region9 (umask=2); .region10 (umask=4); .region11 (umask=8)
  Same as group 0, but this event covers regions 8 to 10.
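To see how traffic spreads across the decode regions, one can request all the region subevents in a single run; this sketch assumes perf exposes the aliases exactly as listed above:

#!/usr/bin/env python3
"""Per-TAD-region request counts for one measurement window (sketch)."""
import subprocess

events = [f"unc_h_tad_requests_g0.region{r}" for r in range(8)]     # regions 0-7
events += [f"unc_h_tad_requests_g1.region{r}" for r in (8, 9, 10)]  # regions 8-10

subprocess.run(["perf", "stat", "-a", "-e", ",".join(events), "--", "sleep", "5"],
               check=True)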
unc_h_tracker_inserts.all (uncore cache, event=6, umask=3) -- Tracker Allocations; All Requests
  Counts the number of allocations into the local HA tracker pool. This can be used in conjunction with the occupancy accumulation event in order to calculate average latency. One cannot filter between reads and writes. HA trackers are allocated as soon as a request enters the HA and are released after the snoop response and data return (or post, in the case of a write) and the response is returned on the ring.

unc_h_txr_ad (uncore cache, event=0xf) -- Outbound NDR Ring Transactions
  .ndr (umask=1, Non-data Responses); .snp (umask=2, Snoops)
  Counts the number of outbound transactions on the AD ring. This can be filtered by the NDR and SNP message classes. See the filter descriptions for more details.

unc_h_txr_ad_cycles_full (uncore cache, event=0x2a) -- AD Egress Full
  .all (umask=3, All); .sched0 (umask=1, Scheduler 0); .sched1 (umask=2, Scheduler 1)

unc_h_txr_ad_cycles_ne (uncore cache, event=0x29) -- AD Egress Not Empty
  .all (umask=3); .sched0 (umask=1); .sched1 (umask=2)

unc_h_txr_ad_inserts (uncore cache, event=0x27) -- AD Egress Allocations
  .all (umask=3); .sched0 (umask=1); .sched1 (umask=2)

unc_h_txr_ad_occupancy (uncore cache, event=0x28) -- AD Egress Occupancy
  .all (umask=3); .sched0 (umask=1); .sched1 (umask=2)

unc_h_txr_ak_cycles_full (uncore cache, event=0x32) -- AK Egress Full
  .all (umask=3); .sched0 (umask=1); .sched1 (umask=2)

unc_h_txr_ak_cycles_ne (uncore cache, event=0x31) -- AK Egress Not Empty
  .all (umask=3); .sched0 (umask=1); .sched1 (umask=2)

unc_h_txr_ak_inserts (uncore cache, event=0x2f) -- AK Egress Allocations
  .all (umask=3); .sched0 (umask=1); .sched1 (umask=2)

unc_h_txr_ak_ndr (uncore cache, event=0xe) -- Outbound NDR Ring Transactions
  Counts the number of outbound NDR transactions sent on the AK ring. NDR stands for 'non-data response' and is generally used for completions that do not include data. AK NDR is used for messages to the local socket.

unc_h_txr_ak_occupancy (uncore cache, event=0x30) -- AK Egress Occupancy
  .all (umask=3); .sched0 (umask=1); .sched1 (umask=2)

unc_h_txr_bl (uncore cache, event=0x10) -- Outbound DRS Ring Transactions to Cache
  .drs_cache (umask=1, Data to Cache); .drs_core (umask=2, Data to Core); .drs_qpi (umask=4, Data to QPI)
  Counts the number of DRS messages sent out on the BL ring. This can be filtered by the destination.

unc_h_txr_bl_cycles_full (uncore cache, event=0x36) -- BL Egress Full
  .all (umask=3); .sched0 (umask=1); .sched1 (umask=2)

unc_h_txr_bl_cycles_ne (uncore cache, event=0x35) -- BL Egress Not Empty
  .all (umask=3); .sched0 (umask=1); .sched1 (umask=2)

unc_h_txr_bl_inserts (uncore cache, event=0x33) -- BL Egress Allocations
  .all (umask=3); .sched0 (umask=1); .sched1 (umask=2)

unc_h_txr_bl_occupancy (uncore cache, event=0x34) -- BL Egress Occupancy
  .all (umask=3); .sched0 (umask=1); .sched1 (umask=2)

unc_h_wpq_cycles_no_reg_credits (uncore cache, event=0x18) -- HA iMC WPQ Credits Empty - Regular
  .chn0 (umask=1, Channel 0); .chn1 (umask=2, Channel 1); .chn2 (umask=4, Channel 2); .chn3 (umask=8, Channel 3)
  Counts the number of cycles when there are no 'regular' credits available for posting writes from the HA into the iMC. In order to send writes into the memory controller, the HA must first acquire a credit for the iMC's WPQ (write pending queue). This queue is broken into regular credits/buffers that are used by general writes, and 'special' requests such as ISOCH writes. This count only tracks the regular credits. Common high-bandwidth workloads should be able to make use of all of the regular buffers, but it will be difficult (and uncommon) to make use of both the regular and special buffers at the same time. One can filter based on the memory controller channel; one or more channels can be tracked at a given time.
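A credit-starvation fraction is the usual way to read the WPQ events: no-credit cycles divided by HA cycles. The HA cycle base (unc_h_clockticks) is not part of this listing, so it is an assumption here, and the counts are made up:

# Fraction of HA cycles with no regular WPQ credit, per channel (illustrative).
no_credit = {0: 2_100_000, 1: 1_900_000, 2: 0, 3: 0}  # unc_h_wpq_cycles_no_reg_credits.chnN
ha_cycles = 2_700_000_000                             # assumed unc_h_clockticks count

for chn, cycles in no_credit.items():
    print(f"chn{chn}: starved for WPQ credits {cycles / ha_cycles:.2%} of cycles")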
unc_h_wpq_cycles_no_spec_credits (uncore cache, event=0x19) -- HA iMC WPQ Credits Empty - Special
  .chn0 (umask=1, Channel 0); .chn1 (umask=2, Channel 1); .chn2 (umask=4, Channel 2); .chn3 (umask=8, Channel 3)
  Counts the number of cycles when there are no 'special' credits available for posting writes from the HA into the iMC. Same credit scheme as the regular-credit event above, but this count only tracks the 'special' credits. This statistic is generally not interesting for general IA workloads, but may be of interest for understanding the characteristics of systems using ISOCH. One can filter based on the memory controller channel; one or more channels can be tracked at a given time.

unc_i_address_match (uncore interconnect, event=0x17) -- Address Match (Conflict) Count
  .merge_count (umask=2, Conflict Merges); .stall_count (umask=1, Conflict Stalls)
  Counts the number of times when an inbound write (from a device to memory or another device) had an address match with another request in the write cache.

unc_i_cache_ack_pending_occupancy (uncore interconnect, event=0x14) -- Write Ack Pending Occupancy
  .any (umask=1, Any Source); .source (umask=2, Select Source)
  Accumulates the number of writes that have acquired ownership but have not yet returned their data to the uncore. These writes are generally queued up in the switch trying to get to the head of their queues so that they can post their data. The queue occupancy increments when the ACK is received, and decrements when either the data is returned OR a tickle is received and ownership is released. Note that a single tickle can result in multiple decrements.
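Relating the conflict counts to the inbound write stream gives a feel for how often device writes collide in the write cache; both event names come from this listing, but the numbers below do not:

# Share of inbound writes that merged with or stalled on an address conflict.
merges = 40_000      # unc_i_address_match.merge_count (event=0x17,umask=2)
stalls = 120_000     # unc_i_address_match.stall_count (event=0x17,umask=1)
writes = 18_000_000  # unc_i_transactions.writes       (event=0x15,umask=2)

print(f"conflict merges: {merges / writes:.3%} of inbound writes")
print(f"conflict stalls: {stalls / writes:.3%} of inbound writes")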
unc_i_cache_own_occupancy (uncore interconnect, event=0x13) -- Outstanding Write Ownership Occupancy
  .any (umask=1, Any Source); .source (umask=2, Select Source)
  Accumulates the number of writes (and write prefetches) that are outstanding in the uncore trying to acquire ownership in each cycle. This can be used with the write transaction count to calculate the average write latency in the uncore. The occupancy increments when a write request is issued, and decrements when the data is returned.

unc_i_cache_read_occupancy (uncore interconnect, event=0x10) -- Outstanding Read Occupancy
  .any (umask=1, Any Source); .source (umask=2, Select Source)
  Accumulates the number of reads that are outstanding in the uncore in each cycle. This can be used with the read transaction count to calculate the average read latency in the uncore. The occupancy increments when a read request is issued, and decrements when the data is returned.

unc_i_cache_total_occupancy (uncore interconnect, event=0x12) -- Total Write Cache Occupancy
  .any (umask=1, Any Source); .source (umask=2, Select Source)
  Accumulates the number of reads and writes that are outstanding in the uncore in each cycle. This is effectively the sum of the READ_OCCUPANCY and WRITE_OCCUPANCY events.

unc_i_cache_write_occupancy (uncore interconnect, event=0x11) -- Outstanding Write Occupancy
  .any (umask=1, Any Source); .source (umask=2, Select Source)
  Accumulates the number of writes (and write prefetches) that are outstanding in the uncore in each cycle. This can be used with the transaction count event to calculate the average latency in the uncore. The occupancy increments when the ownership fetch/prefetch is issued, and decrements when the data is returned to the uncore.
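The read-occupancy description pairs that event with the read transaction count (unc_i_transactions.reads, listed below) to get average uncore read latency for inbound reads; a minimal sketch with illustrative counts:

# Average inbound read latency in uncore cycles (illustrative numbers).
read_occupancy = 900_000_000  # unc_i_cache_read_occupancy.any (event=0x10,umask=1)
reads          =  12_000_000  # unc_i_transactions.reads       (event=0x15,umask=1)

print(f"avg inbound read latency: {read_occupancy / reads:.1f} uncore cycles")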
unc_i_transactions -- Inbound Transaction Count  [uncore interconnect]
    .pd_prefetches   event=0x15,umask=4    Read Prefetches
    .reads           event=0x15,umask=1    Reads
    .writes          event=0x15,umask=2    Writes
    Counts the number of 'Inbound' transactions from the IRP to the Uncore. This can be filtered based on request type in addition to the source queue. Note the special filtering equation: we do OR-reduction on the request type, and if the SOURCE bit is set, we also do AND qualification based on the source portID.

unc_q_clockticks -- Number of qfclks  [uncore interconnect]
    event=0x14
    Counts the number of clocks in the QPI LL. This clock runs at 1/8th the 'GT/s' speed of the QPI link; for example, an 8 GT/s link will have a qfclk of 1 GHz. JKT does not support dynamic link speeds, so this frequency is fixed.

unc_q_direct2core -- Direct 2 Core Spawning  [uncore interconnect]
    .success               event=0x13,umask=1    Spawn Success
    .failure_credits       event=0x13,umask=2    Spawn Failure - Egress Credits
    .failure_rbt           event=0x13,umask=4    Spawn Failure - RBT Not Set
    .failure_credits_rbt   event=0x13,umask=8    Spawn Failure - Egress and RBT
    Counts the number of DRS packets that we attempted to do direct2core on. There are 4 mutually exclusive filters: filter [0] can be used to get successful spawns, while [1:3] provide the different failure cases. Note that this does not count packets that are not candidates for Direct2Core; the only candidates for Direct2Core are DRS packets destined for Cbos.
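Because the four direct2core filters are mutually exclusive, their sum is the total number of spawn attempts and the success rate is a straight ratio. A sketch with placeholder counts:

# Direct2Core spawn success rate; the four umasks are mutually exclusive,
# so their sum is the total number of spawn attempts.
success    = 90_000  # unc_q_direct2core.success              (example)
no_credits = 4_000   # unc_q_direct2core.failure_credits      (example)
no_rbt     = 5_000   # unc_q_direct2core.failure_rbt          (example)
no_both    = 1_000   # unc_q_direct2core.failure_credits_rbt  (example)

attempts = success + no_credits + no_rbt + no_both
print(f"Direct2Core spawn success rate: {success / attempts:.1%}")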
unc_q_rxl_crc_errors -- CRC Errors Detected  [uncore interconnect]
    .link_init   event=3,umask=1    LinkInit
    .normal_op   event=3,umask=2    Normal Operations
    Number of CRC errors detected in the QPI Agent. Each QPI flit incorporates 8 bits of CRC for error detection. This counts the number of flits where the CRC was able to detect an error. After an error has been detected, the QPI agent will send a request to the transmitting socket to resend the flit (as well as any flits that came after it).

unc_q_rxl_credits_consumed_vn0 -- VN0 Credit Consumed  [uncore interconnect]
    .drs   event=0x1e,umask=1       DRS
    .ncb   event=0x1e,umask=2       NCB
    .ncs   event=0x1e,umask=4       NCS
    .hom   event=0x1e,umask=8       HOM
    .snp   event=0x1e,umask=0x10    SNP
    .ndr   event=0x1e,umask=0x20    NDR
    Counts the number of times that an RxQ VN0 credit was consumed (i.e. the message uses a VN0 credit for the Rx Buffer). This includes packets that went through the RxQ and those that were bypassed.

unc_q_rxl_flits_g0 -- Flits Received - Group 0  [uncore interconnect]
    .idle       event=1,umask=1    Idle and Null Flits
    .data       event=1,umask=2    Data Tx Flits
    .non_data   event=1,umask=4    Non-Data protocol Tx Flits
    Counts the number of flits received from the QPI Link. It includes filters for Idle, protocol, and Data Flits. Each 'flit' is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four 'fits', each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI 'speed' (for example, 8.0 GT/s), the 'transfers' here refer to 'fits'. Therefore, in L0, the system will transfer 1 'flit' at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as 'data' bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual 'data' and an additional 16 bits of other information. To calculate 'data' bandwidth, one should therefore do: data flits * 8B / time (for L0), or 4B instead of 8B for L0p.
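The flit arithmetic in this description turns into a bandwidth calculation directly: qfclk is 1/8th of the GT/s rate, each flit carries 80 bits, and a 64B cacheline costs 9 flits of which 8 are data flits carrying 8B each. A worked sketch with assumed counter values:

# QPI bandwidth from flit counts, using the numbers in the text above.
link_gts  = 8.0                   # link speed in GT/s
qfclk_hz  = link_gts / 8 * 1e9    # QPI LL clock is 1/8th of GT/s -> 1 GHz here
seconds   = 1.0                   # measurement interval

flits      = 900_000_000          # sum of unc_q_rxl_flits_g0.* (example value)
data_flits = 640_000_000          # unc_q_rxl_flits_g0.data     (example value)

link_bw = flits * 80 / 8 / seconds    # total link bandwidth in bytes/s (80b per flit)
data_bw = data_flits * 8 / seconds    # 'data' bandwidth in bytes/s (8B per data flit in L0)

# Sanity check from the text: a 64B cacheline is 9 flits, 8 of them
# data flits carrying 8B of payload each.
assert 8 * 8 == 64

print(f"link bandwidth: {link_bw / 1e9:.2f} GB/s, data bandwidth: {data_bw / 1e9:.2f} GB/s")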
unc_q_rxl_flits_g1 -- Flits Received - Group 1  [uncore interconnect]
    .snp           event=2,umask=1       SNP Flits
    .hom_req       event=2,umask=2       HOM Request Flits
    .hom_nonreq    event=2,umask=4       HOM Non-Request Flits
    .hom           event=2,umask=6       HOM Flits
    .drs_data      event=2,umask=8       DRS Data Flits
    .drs_nondata   event=2,umask=0x10    DRS Header Flits
    .drs           event=2,umask=0x18    DRS Flits (both Header and Data)
    Counts the number of flits received from the QPI Link. This is one of three 'groups' that allow us to track flits; this group includes filters for the SNP, HOM, and DRS message classes. The flit/fit and bandwidth discussion under unc_q_rxl_flits_g0 applies here as well: 'data' bandwidth is data flits * 8B / time.

unc_q_rxl_flits_g2 -- Flits Received - Group 2  [uncore interconnect]
    .ndr_ad        event=3,umask=1       Non-Data Response Rx Flits - AD
    .ndr_ak        event=3,umask=2       Non-Data Response Rx Flits - AK
    .ncb_data      event=3,umask=4       Non-Coherent data Rx Flits
    .ncb_nondata   event=3,umask=8       Non-Coherent non-data Rx Flits
    .ncb           event=3,umask=0xc     Non-Coherent Rx Flits
    .ncs           event=3,umask=0x10    Non-Coherent standard Rx Flits
    Counts the number of flits received from the QPI Link. This is one of three 'groups' that allow us to track flits; this group includes filters for the NDR, NCB, and NCS message classes. The flit/fit and bandwidth discussion under unc_q_rxl_flits_g0 applies here as well: 'data' bandwidth is data flits * 8B / time.
unc_q_rxl_stalls -- Stalls Sending to R3QPI  [uncore interconnect]
    .bgf_drs          event=0x35,umask=1       BGF Stall - HOM
    .bgf_ncb          event=0x35,umask=2       BGF Stall - SNP
    .bgf_ncs          event=0x35,umask=4       BGF Stall - NDR
    .bgf_hom          event=0x35,umask=8       BGF Stall - DRS
    .bgf_snp          event=0x35,umask=0x10    BGF Stall - NCB
    .bgf_ndr          event=0x35,umask=0x20    BGF Stall - NCS
    .egress_credits   event=0x35,umask=0x40    Egress Credits
    .gv               event=0x35,umask=0x80    GV
    Number of stalls trying to send to R3QPI.

unc_q_txl_crc_no_credits -- Cycles Stalled with no LLR Credits  [uncore interconnect]
    .full          event=2,umask=1    LLR is full
    .almost_full   event=2,umask=2    LLR is almost full
    Number of cycles when the Tx side ran out of Link Layer Retry credits, causing the Tx to stall.
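Stall-cycle events like these are most meaningful normalized against unc_q_clockticks. A sketch, with placeholder counts, of the fraction of qfclk cycles the Rx side stalled sending to R3QPI and the Tx side sat with a full LLR:

qfclk_cycles = 1_000_000_000  # unc_q_clockticks                            (example)
bgf_stalls   = 12_000_000     # sum of the unc_q_rxl_stalls.* of interest   (example)
llr_full     = 3_000_000      # unc_q_txl_crc_no_credits.full               (example)

print(f"Rx -> R3QPI stall fraction: {bgf_stalls / qfclk_cycles:.2%}")
print(f"Tx LLR-credit stall fraction: {llr_full / qfclk_cycles:.2%}")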
unc_q_txl_flits_g0 -- Flits Transferred - Group 0  [uncore interconnect]
    .idle       event=0,umask=1    Idle and Null Flits
    .data       event=0,umask=2    Data Tx Flits
    .non_data   event=0,umask=4    Non-Data protocol Tx Flits
    Counts the number of flits transmitted across the QPI Link. It includes filters for Idle, protocol, and Data Flits. The flit/fit and bandwidth discussion under unc_q_rxl_flits_g0 applies here as well: 'data' bandwidth is data flits * 8B / time (for L0), or 4B instead of 8B for L0p.

unc_q_txl_flits_g1 -- Flits Transferred - Group 1  [uncore interconnect]
    .snp           event=0,umask=1       SNP Flits
    .hom_req       event=0,umask=2       HOM Request Flits
    .hom_nonreq    event=0,umask=4       HOM Non-Request Flits
    .hom           event=0,umask=6       HOM Flits
    .drs_data      event=0,umask=8       DRS Data Flits
    .drs_nondata   event=0,umask=0x10    DRS Header Flits
    .drs           event=0,umask=0x18    DRS Flits (both Header and Data)
    Counts the number of flits transmitted across the QPI Link. This is one of three 'groups' that allow us to track flits; this group includes filters for the SNP, HOM, and DRS message classes. The flit/fit and bandwidth discussion under unc_q_rxl_flits_g0 applies here as well: 'data' bandwidth is data flits * 8B / time.
unc_q_txl_flits_g2 -- Flits Transferred - Group 2  [uncore interconnect]
    .ndr_ad        event=1,umask=1       Non-Data Response Tx Flits - AD
    .ndr_ak        event=1,umask=2       Non-Data Response Tx Flits - AK
    .ncb_data      event=1,umask=4       Non-Coherent data Tx Flits
    .ncb_nondata   event=1,umask=8       Non-Coherent non-data Tx Flits
    .ncb           event=1,umask=0xc     Non-Coherent Bypass Tx Flits
    .ncs           event=1,umask=0x10    Non-Coherent standard Tx Flits
    Counts the number of flits transmitted across the QPI Link. This is one of three 'groups' that allow us to track flits; this group includes filters for the NDR, NCB, and NCS message classes. The flit/fit and bandwidth discussion under unc_q_rxl_flits_g0 applies here as well: 'data' bandwidth is data flits * 8B / time.

unc_r3_iio_credits_acquired -- to IIO BL Credit Acquired  [uncore interconnect]
    .drs   event=0x20,umask=8       DRS
    .ncb   event=0x20,umask=0x10    NCB
    .ncs   event=0x20,umask=0x20    NCS
    Counts the number of times the NCS/NCB/DRS credit is acquired in the QPI for sending messages on BL to the IIO. There is one credit for each of these three message classes (three credits total). NCS is used for reads to PCIe space, NCB is used for transferring data without coherency, and DRS is used for transferring data with coherency (cacheable PCI transactions). This event can only track one message class at a time.
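With both the Rx and Tx flit groups in hand, the bidirectional data bandwidth is the obvious combination of the two 'data' calculations. A small sketch, again with assumed values:

seconds       = 1.0
rx_data_flits = 640_000_000  # unc_q_rxl_flits_g0.data (example value)
tx_data_flits = 610_000_000  # unc_q_txl_flits_g0.data (example value)

# 8B of payload per data flit in full-width (L0) mode, per the description.
bidir_data_bw = (rx_data_flits + tx_data_flits) * 8 / seconds
print(f"bidirectional QPI data bandwidth: {bidir_data_bw / 1e9:.2f} GB/s")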
unc_r3_iio_credits_reject -- to IIO BL Credit Rejected  [uncore interconnect]
    .drs   event=0x21,umask=8       DRS
    .ncb   event=0x21,umask=0x10    NCB
    .ncs   event=0x21,umask=0x20    NCS
    Counts the number of times that a request attempted to acquire an NCS/NCB/DRS credit in the QPI for sending messages on BL to the IIO but was rejected because no credit was available. There is one credit for each of these three message classes (three credits total). NCS is used for reads to PCIe space, NCB is used for transferring data without coherency, and DRS is used for transferring data with coherency (cacheable PCI transactions). This event can only track one message class at a time.

unc_r3_iio_credits_used -- to IIO BL Credit In Use  [uncore interconnect]
    .drs   event=0x22,umask=8       DRS
    .ncb   event=0x22,umask=0x10    NCB
    .ncs   event=0x22,umask=0x20    NCS
    Counts the number of cycles when the NCS/NCB/DRS credit is in use in the QPI for sending messages on BL to the IIO. There is one credit for each of these three message classes (three credits total). NCS is used for reads to PCIe space, NCB is used for transferring data without coherency, and DRS is used for transferring data with coherency (cacheable PCI transactions). This event can only track one message class at a time.
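Since there is a single credit per message class, the acquire and reject counts give a direct contention measure. A sketch with placeholder counts (each class would in practice be measured on its own counter, per the note above):

# BL -> IIO credit contention per message class: how often an acquire
# attempt was rejected for lack of the (single) credit.
acquired = {"drs": 800_000, "ncb": 400_000, "ncs": 90_000}  # unc_r3_iio_credits_acquired.* (example)
rejected = {"drs":  20_000, "ncb":  35_000, "ncs":  1_000}  # unc_r3_iio_credits_reject.*   (example)

for mc in acquired:
    total = acquired[mc] + rejected[mc]
    print(f"{mc.upper()}: {rejected[mc] / total:.2%} of acquire attempts rejected")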
unc_r3_ring_ad_used -- R3 AD Ring in Use  [uncore interconnect]
    .cw_even    event=7,umask=1    Clockwise and Even
    .cw_odd     event=7,umask=2    Clockwise and Odd
    .ccw_even   event=7,umask=4    Counterclockwise and Even
    .ccw_odd    event=7,umask=8    Counterclockwise and Odd
    Counts the number of cycles that the AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.

unc_r3_ring_ak_used -- R3 AK Ring in Use  [uncore interconnect]
    .cw_even    event=8,umask=1    Clockwise and Even
    .cw_odd     event=8,umask=2    Clockwise and Odd
    .ccw_even   event=8,umask=4    Counterclockwise and Even
    .ccw_odd    event=8,umask=8    Counterclockwise and Odd
    Counts the number of cycles that the AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sent, but does not include when packets are being sunk into the ring stop.

unc_r3_ring_bl_used -- R3 BL Ring in Use  [uncore interconnect]
    .cw_even    event=9,umask=1    Clockwise and Even
    .cw_odd     event=9,umask=2    Clockwise and Odd
    .ccw_even   event=9,umask=4    Counterclockwise and Even
    .ccw_odd    event=9,umask=8    Counterclockwise and Odd
    Counts the number of cycles that the BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.

unc_r3_ring_iv_used -- R3 IV Ring in Use  [uncore interconnect]
    .any   event=0xa,umask=0xf    Any
    Counts the number of cycles that the IV ring is being used at this ring stop. This includes when packets are passing by and when packets are being sent, but does not include when packets are being sunk into the ring stop. The IV ring is unidirectional; whether UP or DN is used depends on the system programming. Therefore, one should generally set both the UP and DN bits for a given polarity (or both) at a given time.

unc_r3_rxr_cycles_ne -- Ingress Cycles Not Empty  [uncore interconnect]
    .hom   event=0x10,umask=1       HOM
    .snp   event=0x10,umask=2       SNP
    .ndr   event=0x10,umask=4       NDR
    .drs   event=0x10,umask=8       DRS
    .ncb   event=0x10,umask=0x10    NCB
    .ncs   event=0x10,umask=0x20    NCS
    Counts the number of cycles when the QPI Ingress is not empty. This tracks one of the three rings that are used by the QPI agent. This can be used in conjunction with the QPI Ingress Occupancy Accumulator event in order to calculate average queue occupancy. Multiple ingress buffers can be tracked at a given time using multiple counters.
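The four direction/polarity umasks of a ring-used event can be combined into a rough utilization figure. A sketch under the assumption (not stated in this table) that each ring direction can move one flit per uncore cycle, so the denominator is two directions times the cycle count:

# Rough R3 AD-ring utilization at this ring stop.
clockticks = 1_000_000_000                      # uncore clock cycles  (example)
ad_used = {"cw_even": 90_000_000,               # unc_r3_ring_ad_used.* (example)
           "cw_odd": 80_000_000,
           "ccw_even": 70_000_000,
           "ccw_odd": 60_000_000}

# Two directions (CW and CCW), each assumed able to carry one flit/cycle.
utilization = sum(ad_used.values()) / (2 * clockticks)
print(f"AD ring utilization: {utilization:.2%}")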
Multiple ingress buffers can be tracked at a given time using multiple countersunc_r3_rxr_inserts.drsuncore interconnectIngress Allocations; DRSevent=0x11,umask=801Counts the number of allocations into the QPI Ingress.  This tracks one of the three rings that are used by the QPI agent.  This can be used in conjunction with the QPI Ingress Occupancy Accumulator event in order to calculate average queue latency.  Multiple ingress buffers can be tracked at a given time using multiple countersunc_r3_rxr_inserts.homuncore interconnectIngress Allocations; HOMevent=0x11,umask=101Counts the number of allocations into the QPI Ingress.  This tracks one of the three rings that are used by the QPI agent.  This can be used in conjunction with the QPI Ingress Occupancy Accumulator event in order to calculate average queue latency.  Multiple ingress buffers can be tracked at a given time using multiple countersunc_r3_rxr_inserts.ncbuncore interconnectIngress Allocations; NCBevent=0x11,umask=0x1001Counts the number of allocations into the QPI Ingress.  This tracks one of the three rings that are used by the QPI agent.  This can be used in conjunction with the QPI Ingress Occupancy Accumulator event in order to calculate average queue latency.  Multiple ingress buffers can be tracked at a given time using multiple countersunc_r3_rxr_inserts.ncsuncore interconnectIngress Allocations; NCSevent=0x11,umask=0x2001Counts the number of allocations into the QPI Ingress.  This tracks one of the three rings that are used by the QPI agent.  This can be used in conjunction with the QPI Ingress Occupancy Accumulator event in order to calculate average queue latency.  Multiple ingress buffers can be tracked at a given time using multiple countersunc_r3_rxr_inserts.ndruncore interconnectIngress Allocations; NDRevent=0x11,umask=401Counts the number of allocations into the QPI Ingress.  This tracks one of the three rings that are used by the QPI agent.  This can be used in conjunction with the QPI Ingress Occupancy Accumulator event in order to calculate average queue latency.  Multiple ingress buffers can be tracked at a given time using multiple countersunc_r3_rxr_inserts.snpuncore interconnectIngress Allocations; SNPevent=0x11,umask=201Counts the number of allocations into the QPI Ingress.  This tracks one of the three rings that are used by the QPI agent.  This can be used in conjunction with the QPI Ingress Occupancy Accumulator event in order to calculate average queue latency.  Multiple ingress buffers can be tracked at a given time using multiple countersunc_r3_rxr_occupancy.drsuncore interconnectIngress Occupancy Accumulator; DRSevent=0x13,umask=801Accumulates the occupancy of a given QPI Ingress queue in each cycle.  This tracks one of the three ring Ingress buffers.  This can be used with the QPI Ingress Not Empty event to calculate average occupancy or the QPI Ingress Allocations event in order to calculate average queuing latencyunc_r3_rxr_occupancy.homuncore interconnectIngress Occupancy Accumulator; HOMevent=0x13,umask=101Accumulates the occupancy of a given QPI Ingress queue in each cycle.  This tracks one of the three ring Ingress buffers.  This can be used with the QPI Ingress Not Empty event to calculate average occupancy or the QPI Ingress Allocations event in order to calculate average queuing latencyunc_r3_rxr_occupancy.ncbuncore interconnectIngress Occupancy Accumulator; NCBevent=0x13,umask=0x1001Accumulates the occupancy of a given QPI Ingress queue in each cycle.  
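As a worked illustration of the occupancy/allocation arithmetic these Ingress events describe, here is a minimal Python sketch; the counter values are hypothetical, and the event names in the comments refer to the entries above:

# Little's-Law-style derivations from the QPI Ingress events (hypothetical values).
rxr_occupancy = 1_200_000  # unc_r3_rxr_occupancy.drs: summed per-cycle occupancy
rxr_not_empty = 400_000    # unc_r3_rxr_cycles_ne.drs: cycles the queue was not empty
rxr_inserts = 150_000      # unc_r3_rxr_inserts.drs: allocations into the queue

avg_occupancy_when_busy = rxr_occupancy / rxr_not_empty  # average entries per non-empty cycle
avg_queuing_latency = rxr_occupancy / rxr_inserts        # average cycles per allocation
print(f"avg occupancy: {avg_occupancy_when_busy:.2f} entries, "
      f"avg latency: {avg_queuing_latency:.2f} cycles")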
This tracks one of the three ring Ingress buffers.  This can be used with the QPI Ingress Not Empty event to calculate average occupancy or the QPI Ingress Allocations event in order to calculate average queuing latencyunc_r3_rxr_occupancy.ncsuncore interconnectIngress Occupancy Accumulator; NCSevent=0x13,umask=0x2001Accumulates the occupancy of a given QPI Ingress queue in each cycle.  This tracks one of the three ring Ingress buffers.  This can be used with the QPI Ingress Not Empty event to calculate average occupancy or the QPI Ingress Allocations event in order to calculate average queuing latencyunc_r3_rxr_occupancy.ndruncore interconnectIngress Occupancy Accumulator; NDRevent=0x13,umask=401Accumulates the occupancy of a given QPI Ingress queue in each cycle.  This tracks one of the three ring Ingress buffers.  This can be used with the QPI Ingress Not Empty event to calculate average occupancy or the QPI Ingress Allocations event in order to calculate average queuing latencyunc_r3_rxr_occupancy.snpuncore interconnectIngress Occupancy Accumulator; SNPevent=0x13,umask=201Accumulates the occupancy of a given QPI Ingress queue in each cycle.  This tracks one of the three ring Ingress buffers.  This can be used with the QPI Ingress Not Empty event to calculate average occupancy or the QPI Ingress Allocations event in order to calculate average queuing latencyunc_r3_vn0_credits_reject.drsuncore interconnectVN0 Credit Acquisition Failed on DRS; DRS Message Classevent=0x37,umask=801Number of times a request failed to acquire a DRS VN0 credit.  In order for a request to be transferred across QPI, it must be guaranteed to have a flit buffer on the remote socket to sink into.  There are two credit pools, VNA and VN0.  VNA is a shared pool used to achieve high performance.  The VN0 pool has reserved entries for each message class and is used to prevent deadlock.  Requests first attempt to acquire a VNA credit, and then fall back to VN0 if they fail.  This therefore counts the number of times when a request failed to acquire either a VNA or VN0 credit and is delayed.  This should generally be a rare situationunc_r3_vn0_credits_reject.homuncore interconnectVN0 Credit Acquisition Failed on HOM; HOM Message Classevent=0x37,umask=101Number of times a request failed to acquire a HOM VN0 credit.  In order for a request to be transferred across QPI, it must be guaranteed to have a flit buffer on the remote socket to sink into.  There are two credit pools, VNA and VN0.  VNA is a shared pool used to achieve high performance.  The VN0 pool has reserved entries for each message class and is used to prevent deadlock.  Requests first attempt to acquire a VNA credit, and then fall back to VN0 if they fail.  This therefore counts the number of times when a request failed to acquire either a VNA or VN0 credit and is delayed.  This should generally be a rare situationunc_r3_vn0_credits_reject.ncbuncore interconnectVN0 Credit Acquisition Failed on NCB; NCB Message Classevent=0x37,umask=0x1001Number of times a request failed to acquire an NCB VN0 credit.  In order for a request to be transferred across QPI, it must be guaranteed to have a flit buffer on the remote socket to sink into.  There are two credit pools, VNA and VN0.  VNA is a shared pool used to achieve high performance.  The VN0 pool has reserved entries for each message class and is used to prevent deadlock.  Requests first attempt to acquire a VNA credit, and then fall back to VN0 if they fail.  
This therefore counts the number of times when a request failed to acquire either a VNA or VN0 credit and is delayed.  This should generally be a rare situationunc_r3_vn0_credits_reject.ncsuncore interconnectVN0 Credit Acquisition Failed on NCS; NCS Message Classevent=0x37,umask=0x2001Number of times a request failed to acquire an NCS VN0 credit.  In order for a request to be transferred across QPI, it must be guaranteed to have a flit buffer on the remote socket to sink into.  There are two credit pools, VNA and VN0.  VNA is a shared pool used to achieve high performance.  The VN0 pool has reserved entries for each message class and is used to prevent deadlock.  Requests first attempt to acquire a VNA credit, and then fall back to VN0 if they fail.  This therefore counts the number of times when a request failed to acquire either a VNA or VN0 credit and is delayed.  This should generally be a rare situationunc_r3_vn0_credits_reject.ndruncore interconnectVN0 Credit Acquisition Failed on NDR; NDR Message Classevent=0x37,umask=401Number of times a request failed to acquire an NDR VN0 credit.  In order for a request to be transferred across QPI, it must be guaranteed to have a flit buffer on the remote socket to sink into.  There are two credit pools, VNA and VN0.  VNA is a shared pool used to achieve high performance.  The VN0 pool has reserved entries for each message class and is used to prevent deadlock.  Requests first attempt to acquire a VNA credit, and then fall back to VN0 if they fail.  This therefore counts the number of times when a request failed to acquire either a VNA or VN0 credit and is delayed.  This should generally be a rare situationunc_r3_vn0_credits_reject.snpuncore interconnectVN0 Credit Acquisition Failed on SNP; SNP Message Classevent=0x37,umask=201Number of times a request failed to acquire an SNP VN0 credit.  In order for a request to be transferred across QPI, it must be guaranteed to have a flit buffer on the remote socket to sink into.  There are two credit pools, VNA and VN0.  VNA is a shared pool used to achieve high performance.  The VN0 pool has reserved entries for each message class and is used to prevent deadlock.  Requests first attempt to acquire a VNA credit, and then fall back to VN0 if they fail.  This therefore counts the number of times when a request failed to acquire either a VNA or VN0 credit and is delayed.  This should generally be a rare situationunc_r3_vn0_credits_used.drsuncore interconnectVN0 Credit Used; DRS Message Classevent=0x36,umask=801Number of times a VN0 credit was used on the DRS message channel.  In order for a request to be transferred across QPI, it must be guaranteed to have a flit buffer on the remote socket to sink into.  There are two credit pools, VNA and VN0.  VNA is a shared pool used to achieve high performance.  The VN0 pool has reserved entries for each message class and is used to prevent deadlock.  Requests first attempt to acquire a VNA credit, and then fall back to VN0 if they fail.  This counts the number of times a VN0 credit was used.  Note that a single VN0 credit holds access to potentially multiple flit buffers.  For example, a transfer that uses VNA could use 9 flit buffers and in that case uses 9 credits.  A transfer on VN0 will only count a single credit even though it may use multiple buffersunc_r3_vn0_credits_used.homuncore interconnectVN0 Credit Used; HOM Message Classevent=0x36,umask=101Number of times a VN0 credit was used on the HOM message channel.  
In order for a request to be transferred across QPI, it must be guaranteed to have a flit buffer on the remote socket to sink into.  There are two credit pools, VNA and VN0.  VNA is a shared pool used to achieve high performance.  The VN0 pool has reserved entries for each message class and is used to prevent deadlock.  Requests first attempt to acquire a VNA credit, and then fall back to VN0 if they fail.  This counts the number of times a VN0 credit was used.  Note that a single VN0 credit holds access to potentially multiple flit buffers.  For example, a transfer that uses VNA could use 9 flit buffers and in that case uses 9 credits.  A transfer on VN0 will only count a single credit even though it may use multiple buffersunc_r3_vn0_credits_used.ncbuncore interconnectVN0 Credit Used; NCB Message Classevent=0x36,umask=0x1001Number of times a VN0 credit was used on the NCB message channel.  In order for a request to be transferred across QPI, it must be guaranteed to have a flit buffer on the remote socket to sink into.  There are two credit pools, VNA and VN0.  VNA is a shared pool used to achieve high performance.  The VN0 pool has reserved entries for each message class and is used to prevent deadlock.  Requests first attempt to acquire a VNA credit, and then fall back to VN0 if they fail.  This counts the number of times a VN0 credit was used.  Note that a single VN0 credit holds access to potentially multiple flit buffers.  For example, a transfer that uses VNA could use 9 flit buffers and in that case uses 9 credits.  A transfer on VN0 will only count a single credit even though it may use multiple buffersunc_r3_vn0_credits_used.ncsuncore interconnectVN0 Credit Used; NCS Message Classevent=0x36,umask=0x2001Number of times a VN0 credit was used on the NCS message channel.  In order for a request to be transferred across QPI, it must be guaranteed to have a flit buffer on the remote socket to sink into.  There are two credit pools, VNA and VN0.  VNA is a shared pool used to achieve high performance.  The VN0 pool has reserved entries for each message class and is used to prevent deadlock.  Requests first attempt to acquire a VNA credit, and then fall back to VN0 if they fail.  This counts the number of times a VN0 credit was used.  Note that a single VN0 credit holds access to potentially multiple flit buffers.  For example, a transfer that uses VNA could use 9 flit buffers and in that case uses 9 credits.  A transfer on VN0 will only count a single credit even though it may use multiple buffersunc_r3_vn0_credits_used.ndruncore interconnectVN0 Credit Used; NDR Message Classevent=0x36,umask=401Number of times a VN0 credit was used on the NDR message channel.  In order for a request to be transferred across QPI, it must be guaranteed to have a flit buffer on the remote socket to sink into.  There are two credit pools, VNA and VN0.  VNA is a shared pool used to achieve high performance.  The VN0 pool has reserved entries for each message class and is used to prevent deadlock.  Requests first attempt to acquire a VNA credit, and then fall back to VN0 if they fail.  This counts the number of times a VN0 credit was used.  Note that a single VN0 credit holds access to potentially multiple flit buffers.  For example, a transfer that uses VNA could use 9 flit buffers and in that case uses 9 credits.  
A transfer on VN0 will only count a single credit even though it may use multiple buffersunc_r3_vn0_credits_used.snpuncore interconnectVN0 Credit Used; SNP Message Classevent=0x36,umask=201Number of times a VN0 credit was used on the SNP message channel.  In order for a request to be transferred across QPI, it must be guaranteed to have a flit buffer on the remote socket to sink into.  There are two credit pools, VNA and VN0.  VNA is a shared pool used to achieve high performance.  The VN0 pool has reserved entries for each message class and is used to prevent deadlock.  Requests first attempt to acquire a VNA credit, and then fall back to VN0 if they fail.  This counts the number of times a VN0 credit was used.  Note that a single VN0 credit holds access to potentially multiple flit buffers.  For example, a transfer that uses VNA could use 9 flit buffers and in that case uses 9 credits.  A transfer on VN0 will only count a single credit even though it may use multiple buffersunc_r3_vna_credits_reject.drsuncore interconnectVNA Credit Reject; DRS Message Classevent=0x34,umask=801Number of attempted VNA credit acquisitions that were rejected because the VNA credit pool was full (or almost full).  It is possible to filter this event by message class.  Some packets use more than one flit buffer, and therefore must acquire multiple credits.  Therefore, one could get a reject even if the VNA credits were not fully used up.  The VNA pool is generally used to provide the bulk of the QPI bandwidth (as opposed to the VN0 pool which is used to guarantee forward progress).  VNA credits can run out if the flit buffer on the receiving side starts to queue up substantially.  This can happen if the rest of the uncore is unable to drain the requests fast enoughunc_r3_vna_credits_reject.homuncore interconnectVNA Credit Reject; HOM Message Classevent=0x34,umask=101Number of attempted VNA credit acquisitions that were rejected because the VNA credit pool was full (or almost full).  It is possible to filter this event by message class.  Some packets use more than one flit buffer, and therefore must acquire multiple credits.  Therefore, one could get a reject even if the VNA credits were not fully used up.  The VNA pool is generally used to provide the bulk of the QPI bandwidth (as opposed to the VN0 pool which is used to guarantee forward progress).  VNA credits can run out if the flit buffer on the receiving side starts to queue up substantially.  This can happen if the rest of the uncore is unable to drain the requests fast enoughunc_r3_vna_credits_reject.ncbuncore interconnectVNA Credit Reject; NCB Message Classevent=0x34,umask=0x1001Number of attempted VNA credit acquisitions that were rejected because the VNA credit pool was full (or almost full).  It is possible to filter this event by message class.  Some packets use more than one flit buffer, and therefore must acquire multiple credits.  Therefore, one could get a reject even if the VNA credits were not fully used up.  The VNA pool is generally used to provide the bulk of the QPI bandwidth (as opposed to the VN0 pool which is used to guarantee forward progress).  VNA credits can run out if the flit buffer on the receiving side starts to queue up substantially.  
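To make the credit accounting described in these VN0/VNA entries concrete: a VNA transfer is charged one credit per flit buffer it occupies, while a VN0 transfer is charged a single credit regardless of how many buffers it uses. A minimal sketch with hypothetical numbers:

# Credit cost of one transfer under each pool, per the descriptions above.
flit_buffers_used = 9                     # hypothetical transfer size, in flit buffers
vna_credits_charged = flit_buffers_used   # VNA: one credit per flit buffer (9 here)
vn0_credits_charged = 1                   # VN0: one credit per transfer, regardless of buffers
print(vna_credits_charged, vn0_credits_charged)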
This can happen if the rest of the uncore is unable to drain the requests fast enoughunc_r3_vna_credits_reject.ncsuncore interconnectVNA Credit Reject; NCS Message Classevent=0x34,umask=0x2001Number of attempted VNA credit acquisitions that were rejected because the VNA credit pool was full (or almost full).  It is possible to filter this event by message class.  Some packets use more than one flit buffer, and therefore must acquire multiple credits.  Therefore, one could get a reject even if the VNA credits were not fully used up.  The VNA pool is generally used to provide the bulk of the QPI bandwidth (as opposed to the VN0 pool which is used to guarantee forward progress).  VNA credits can run out if the flit buffer on the receiving side starts to queue up substantially.  This can happen if the rest of the uncore is unable to drain the requests fast enoughunc_r3_vna_credits_reject.ndruncore interconnectVNA Credit Reject; NDR Message Classevent=0x34,umask=401Number of attempted VNA credit acquisitions that were rejected because the VNA credit pool was full (or almost full).  It is possible to filter this event by message class.  Some packets use more than one flit buffer, and therefore must acquire multiple credits.  Therefore, one could get a reject even if the VNA credits were not fully used up.  The VNA pool is generally used to provide the bulk of the QPI bandwidth (as opposed to the VN0 pool which is used to guarantee forward progress).  VNA credits can run out if the flit buffer on the receiving side starts to queue up substantially.  This can happen if the rest of the uncore is unable to drain the requests fast enoughunc_r3_vna_credits_reject.snpuncore interconnectVNA Credit Reject; SNP Message Classevent=0x34,umask=201Number of attempted VNA credit acquisitions that were rejected because the VNA credit pool was full (or almost full).  It is possible to filter this event by message class.  Some packets use more than one flit buffer, and therefore must acquire multiple credits.  Therefore, one could get a reject even if the VNA credits were not fully used up.  The VNA pool is generally used to provide the bulk of the QPI bandwidth (as opposed to the VN0 pool which is used to guarantee forward progress).  VNA credits can run out if the flit buffer on the receiving side starts to queue up substantially.  This can happen if the rest of the uncore is unable to drain the requests fast enoughunc_u_msg_chnl_size_count.4buncore interconnectMsgCh Requests by Size; 4B Requestsevent=0x47,umask=101Number of transactions on the message channel filtered by request size.  This includes both reads and writesunc_u_msg_chnl_size_count.8buncore interconnectMsgCh Requests by Size; 8B Requestsevent=0x47,umask=201Number of transactions on the message channel filtered by request size.  This includes both reads and writesunc_u_phold_cycles.ack_to_deassertuncore interconnectCycles PHOLD Assert to Ack; ACK to Deassertevent=0x45,umask=201PHOLD cycles.  
Filter from source CoreIDunc_u_racu_requests.countuncore interconnectRACU Requestevent=0x46,umask=101unc_u_u2c_events.livelockuncore interconnectMonitor Sent to T0; Livelockevent=0x43,umask=401Events coming from Uncore can be sent to one or all coresunc_u_u2c_events.lterroruncore interconnectMonitor Sent to T0; LTErrorevent=0x43,umask=801Events coming from Uncore can be sent to one or all coresunc_u_u2c_events.monitor_t0uncore interconnectMonitor Sent to T0; Monitor T0event=0x43,umask=101Events coming from Uncore can be sent to one or all coresunc_u_u2c_events.monitor_t1uncore interconnectMonitor Sent to T0; Monitor T1event=0x43,umask=201Events coming from Uncore can be sent to one or all coresunc_u_u2c_events.otheruncore interconnectMonitor Sent to T0; Otherevent=0x43,umask=0x8001Events coming from Uncore can be sent to one or all coresunc_r2_iio_credits_acquired.drsuncore ioR2PCIe IIO Credit Acquired; DRSevent=0x33,umask=801Counts the number of credits that are acquired in the R2PCIe agent for sending transactions into the IIO on either NCB or NCS.  Transactions from the BL ring going into the IIO Agent must first acquire a credit.  These credits are for either the NCB or NCS message classes.  NCB, or non-coherent bypass messages are used to transmit data without coherency (and are common).  NCS is used for reads to PCIe (and should be used sparingly)unc_r2_iio_credits_acquired.ncbuncore ioR2PCIe IIO Credit Acquired; NCBevent=0x33,umask=0x1001Counts the number of credits that are acquired in the R2PCIe agent for sending transactions into the IIO on either NCB or NCS.  Transactions from the BL ring going into the IIO Agent must first acquire a credit.  These credits are for either the NCB or NCS message classes.  NCB, or non-coherent bypass messages are used to transmit data without coherency (and are common).  NCS is used for reads to PCIe (and should be used sparingly)unc_r2_iio_credits_acquired.ncsuncore ioR2PCIe IIO Credit Acquired; NCSevent=0x33,umask=0x2001Counts the number of credits that are acquired in the R2PCIe agent for sending transactions into the IIO on either NCB or NCS.  Transactions from the BL ring going into the IIO Agent must first acquire a credit.  These credits are for either the NCB or NCS message classes.  NCB, or non-coherent bypass messages are used to transmit data without coherency (and are common).  NCS is used for reads to PCIe (and should be used sparingly)unc_r2_iio_credits_reject.drsuncore ioR2PCIe IIO Failed to Acquire a Credit; DRSevent=0x34,umask=801Counts the number of times that a request pending in the BL Ingress attempted to acquire either an NCB or NCS credit to transmit into the IIO, but was rejected because no credits were available.  NCB, or non-coherent bypass messages are used to transmit data without coherency (and are common).  NCS is used for reads to PCIe (and should be used sparingly)unc_r2_iio_credits_reject.ncbuncore ioR2PCIe IIO Failed to Acquire a Credit; NCBevent=0x34,umask=0x1001Counts the number of times that a request pending in the BL Ingress attempted to acquire either an NCB or NCS credit to transmit into the IIO, but was rejected because no credits were available.  NCB, or non-coherent bypass messages are used to transmit data without coherency (and are common).  
NCS is used for reads to PCIe (and should be used sparingly)unc_r2_iio_credits_reject.ncsuncore ioR2PCIe IIO Failed to Acquire a Credit; NCSevent=0x34,umask=0x2001Counts the number of times that a request pending in the BL Ingress attempted to acquire either a NCB or NCS credit to transmit into the IIO, but was rejected because no credits were available.  NCB, or non-coherent bypass messages are used to transmit data without coherency (and are common).  NCS is used for reads to PCIe (and should be used sparingly)unc_r2_iio_credits_used.drsuncore ioR2PCIe IIO Credits in Use; DRSevent=0x32,umask=801Counts the number of cycles when one or more credits in the R2PCIe agent for sending transactions into the IIO on either NCB or NCS are in use.  Transactions from the BL ring going into the IIO Agent must first acquire a credit.  These credits are for either the NCB or NCS message classes.  NCB, or non-coherent bypass messages are used to transmit data without coherency (and are common).  NCS is used for reads to PCIe (and should be used sparingly)unc_r2_iio_credits_used.ncbuncore ioR2PCIe IIO Credits in Use; NCBevent=0x32,umask=0x1001Counts the number of cycles when one or more credits in the R2PCIe agent for sending transactions into the IIO on either NCB or NCS are in use.  Transactions from the BL ring going into the IIO Agent must first acquire a credit.  These credits are for either the NCB or NCS message classes.  NCB, or non-coherent bypass messages are used to transmit data without coherency (and are common).  NCS is used for reads to PCIe (and should be used sparingly)unc_r2_iio_credits_used.ncsuncore ioR2PCIe IIO Credits in Use; NCSevent=0x32,umask=0x2001Counts the number of cycles when one or more credits in the R2PCIe agent for sending transactions into the IIO on either NCB or NCS are in use.  Transactions from the BL ring going into the IIO Agent must first acquire a credit.  These credits are for either the NCB or NCS message classes.  NCB, or non-coherent bypass messages are used to transmit data without coherency (and are common).  NCS is used for reads to PCIe (and should be used sparingly)unc_r2_ring_ad_used.ccw_evenuncore ioR2 AD Ring in Use; Counterclockwise and Evenevent=7,umask=401Counts the number of cycles that the AD ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stopunc_r2_ring_ad_used.ccw_odduncore ioR2 AD Ring in Use; Counterclockwise and Oddevent=7,umask=801Counts the number of cycles that the AD ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stopunc_r2_ring_ad_used.cw_evenuncore ioR2 AD Ring in Use; Clockwise and Evenevent=7,umask=101Counts the number of cycles that the AD ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stopunc_r2_ring_ad_used.cw_odduncore ioR2 AD Ring in Use; Clockwise and Oddevent=7,umask=201Counts the number of cycles that the AD ring is being used at this ring stop.  
This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stopunc_r2_ring_ak_used.ccw_evenuncore ioR2 AK Ring in Use; Counterclockwise and Evenevent=8,umask=401Counts the number of cycles that the AK ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stopunc_r2_ring_ak_used.ccw_odduncore ioR2 AK Ring in Use; Counterclockwise and Oddevent=8,umask=801Counts the number of cycles that the AK ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stopunc_r2_ring_ak_used.cw_evenuncore ioR2 AK Ring in Use; Clockwise and Evenevent=8,umask=101Counts the number of cycles that the AK ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stopunc_r2_ring_ak_used.cw_odduncore ioR2 AK Ring in Use; Clockwise and Oddevent=8,umask=201Counts the number of cycles that the AK ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stopunc_r2_ring_bl_used.ccw_evenuncore ioR2 BL Ring in Use; Counterclockwise and Evenevent=9,umask=401Counts the number of cycles that the BL ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stopunc_r2_ring_bl_used.ccw_odduncore ioR2 BL Ring in Use; Counterclockwise and Oddevent=9,umask=801Counts the number of cycles that the BL ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stopunc_r2_ring_bl_used.cw_evenuncore ioR2 BL Ring in Use; Clockwise and Evenevent=9,umask=101Counts the number of cycles that the BL ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stopunc_r2_ring_bl_used.cw_odduncore ioR2 BL Ring in Use; Clockwise and Oddevent=9,umask=201Counts the number of cycles that the BL ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stopunc_r2_ring_iv_used.anyuncore ioR2 IV Ring in Use; Anyevent=0xa,umask=0xf01Counts the number of cycles that the IV ring is being used at this ring stop.  This includes when packets are passing by and when packets are being sent, but does not include when packets are being sunk into the ring stop.  The IV ring is unidirectional.  Whether UP or DN is used is dependent on the system programming.  Therefore, one should generally set both the UP and DN bits for a given polarity (or both) at a given timeunc_r2_rxr_cycles_ne.drsuncore ioIngress Cycles Not Empty; DRSevent=0x10,umask=801Counts the number of cycles when the R2PCIe Ingress is not empty.  This tracks one of the three rings that are used by the R2PCIe agent.  This can be used in conjunction with the R2PCIe Ingress Occupancy Accumulator event in order to calculate average queue occupancy.  
Multiple ingress buffers can be tracked at a given time using multiple countersunc_r2_rxr_cycles_ne.ncbuncore ioIngress Cycles Not Empty; NCBevent=0x10,umask=0x1001Counts the number of cycles when the R2PCIe Ingress is not empty.  This tracks one of the three rings that are used by the R2PCIe agent.  This can be used in conjunction with the R2PCIe Ingress Occupancy Accumulator event in order to calculate average queue occupancy.  Multiple ingress buffers can be tracked at a given time using multiple countersunc_r2_rxr_cycles_ne.ncsuncore ioIngress Cycles Not Empty; NCSevent=0x10,umask=0x2001Counts the number of cycles when the R2PCIe Ingress is not empty.  This tracks one of the three rings that are used by the R2PCIe agent.  This can be used in conjunction with the R2PCIe Ingress Occupancy Accumulator event in order to calculate average queue occupancy.  Multiple ingress buffers can be tracked at a given time using multiple countersunc_r2_txr_cycles_full.aduncore ioEgress Cycles Full; ADevent=0x25,umask=101Counts the number of cycles when the R2PCIe Egress buffer is fullunc_r2_txr_cycles_full.akuncore ioEgress Cycles Full; AKevent=0x25,umask=201Counts the number of cycles when the R2PCIe Egress buffer is fullunc_r2_txr_cycles_full.bluncore ioEgress Cycles Full; BLevent=0x25,umask=401Counts the number of cycles when the R2PCIe Egress buffer is fullunc_r2_txr_cycles_ne.aduncore ioEgress Cycles Not Empty; ADevent=0x23,umask=101Counts the number of cycles when the R2PCIe Egress is not empty.  This tracks one of the three rings that are used by the R2PCIe agent.  This can be used in conjunction with the R2PCIe Egress Occupancy Accumulator event in order to calculate average queue occupancy.  Only a single Egress queue can be tracked at any given time.  It is not possible to filter based on direction or polarityunc_r2_txr_cycles_ne.akuncore ioEgress Cycles Not Empty; AKevent=0x23,umask=201Counts the number of cycles when the R2PCIe Egress is not empty.  This tracks one of the three rings that are used by the R2PCIe agent.  This can be used in conjunction with the R2PCIe Egress Occupancy Accumulator event in order to calculate average queue occupancy.  Only a single Egress queue can be tracked at any given time.  It is not possible to filter based on direction or polarityunc_r2_txr_cycles_ne.bluncore ioEgress Cycles Not Empty; BLevent=0x23,umask=401Counts the number of cycles when the R2PCIe Egress is not empty.  This tracks one of the three rings that are used by the R2PCIe agent.  This can be used in conjunction with the R2PCIe Egress Occupancy Accumulator event in order to calculate average queue occupancy.  Only a single Egress queue can be tracked at any given time.  It is not possible to filter based on direction or polarityunc_r2_txr_nacks.aduncore ioEgress NACK; ADevent=0x26,umask=101Counts the number of times that the Egress received a NACK from the ring and could not issue a transactionunc_r2_txr_nacks.akuncore ioEgress NACK; AKevent=0x26,umask=201Counts the number of times that the Egress received a NACK from the ring and could not issue a transactionunc_r2_txr_nacks.bluncore ioEgress NACK; BLevent=0x26,umask=401Counts the number of times that the Egress received a NACK from the ring and could not issue a transactionunc_m_act_countuncore memoryDRAM Activate Countevent=101Counts the number of DRAM Activate commands sent on this channel.  Activate commands are issued to open up a page on the DRAM devices so that it can be read or written to with a CAS.  
One can calculate the number of Page Misses by subtracting the number of Page Miss precharges from the number of Activatesunc_m_cas_count.alluncore memoryDRAM RD_CAS and WR_CAS Commands.; All DRAM RD_CAS and WR_CAS (w/ and w/out auto-pre)event=4,umask=0xf01unc_m_cas_count.rduncore memoryDRAM RD_CAS and WR_CAS Commands.; All DRAM Reads (RD_CAS + Underfills)event=4,umask=301unc_m_cas_count.rd_reguncore memoryDRAM RD_CAS and WR_CAS Commands.; All DRAM RD_CAS (w/ and w/out auto-pre)event=4,umask=101unc_m_cas_count.rd_underfilluncore memoryDRAM RD_CAS and WR_CAS Commands.; Underfill Read Issuedevent=4,umask=201unc_m_cas_count.wruncore memoryDRAM RD_CAS and WR_CAS Commands.; All DRAM WR_CAS (both Modes)event=4,umask=0xc01unc_m_cas_count.wr_rmmuncore memoryDRAM RD_CAS and WR_CAS Commands.; DRAM WR_CAS (w/ and w/out auto-pre) in Read Major Modeevent=4,umask=801unc_m_cas_count.wr_wmmuncore memoryDRAM RD_CAS and WR_CAS Commands.; DRAM WR_CAS (w/ and w/out auto-pre) in Write Major Modeevent=4,umask=401unc_m_clockticksuncore memoryuclksevent=001Uncore Fixed Counter - uclksunc_m_major_modes.isochuncore memoryCycles in a Major Mode; Isoch Major Modeevent=7,umask=801Counts the total number of cycles spent in a major mode (selected by a filter) on the given channel.   Major modes are channel-wide, and not a per-rank (or dimm or bank) modeunc_m_major_modes.partialuncore memoryCycles in a Major Mode; Partial Major Modeevent=7,umask=401Counts the total number of cycles spent in a major mode (selected by a filter) on the given channel.   Major modes are channel-wide, and not a per-rank (or dimm or bank) modeunc_m_major_modes.readuncore memoryCycles in a Major Mode; Read Major Modeevent=7,umask=101Counts the total number of cycles spent in a major mode (selected by a filter) on the given channel.   Major modes are channel-wide, and not a per-rank (or dimm or bank) modeunc_m_major_modes.writeuncore memoryCycles in a Major Mode; Write Major Modeevent=7,umask=201Counts the total number of cycles spent in a major mode (selected by a filter) on the given channel.   Major modes are channel-wide, and not a per-rank (or dimm or bank) modeunc_m_power_throttle_cycles.rank0uncore memoryThrottle Cycles for Rank 0; DIMM IDevent=0x41,umask=101Counts the number of cycles while the iMC is being throttled by either thermal constraints or by the PCU throttling.  It is not possible to distinguish between the two.  This can be filtered by rank.  If multiple ranks are selected and are being throttled at the same time, the counter will only increment by 1unc_m_preemption.rd_preempt_rduncore memoryRead Preemption Count; Read over Read Preemptionevent=8,umask=101Counts the number of times a read in the iMC preempts another read or write.  Generally reads to an open page are issued ahead of requests to closed pages.  This improves the page hit rate of the system.  However, high priority requests can cause pages of active requests to be closed in order to get them out.  This will reduce the latency of the high-priority request at the expense of lower bandwidth and increased overall average latencyunc_m_preemption.rd_preempt_wruncore memoryRead Preemption Count; Read over Write Preemptionevent=8,umask=201Counts the number of times a read in the iMC preempts another read or write.  Generally reads to an open page are issued ahead of requests to closed pages.  This improves the page hit rate of the system.  However, high priority requests can cause pages of active requests to be closed in order to get them out.  
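Following the relationship stated in the unc_m_act_count entry above (page misses can be derived by subtracting page-miss precharges from activates), here is a minimal Python sketch; the raw counts are hypothetical, and the 64-byte-per-CAS line size is an assumption, not something stated in these entries:

# Derived iMC metrics from the events above (hypothetical raw counts).
act_count = 500_000      # unc_m_act_count
pre_page_miss = 180_000  # unc_m_pre_count.page_miss
cas_all = 700_000        # unc_m_cas_count.all

page_misses = act_count - pre_page_miss  # per the unc_m_act_count description above
dram_bytes = cas_all * 64                # assuming one 64-byte line per CAS command
print(f"page misses: {page_misses}, DRAM traffic: {dram_bytes / 1e6:.1f} MB")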
This will reduce the latency of the high-priority request at the expense of lower bandwidth and increased overall average latencyunc_m_pre_count.page_closeuncore memoryDRAM Precharge commands.; Precharge due to timer expirationevent=2,umask=201Counts the number of DRAM Precharge commands sent on this channelunc_m_pre_count.page_missuncore memoryDRAM Precharge commands.; Precharges due to page missevent=2,umask=101Counts the number of DRAM Precharge commands sent on this channelunc_m_rpq_occupancyuncore memoryRead Pending Queue Occupancyevent=0x8001Accumulates the occupancies of the Read Pending Queue each cycle.  This can then be used to calculate both the average occupancy (in conjunction with the number of cycles not empty) and the average latency (in conjunction with the number of allocations).  The RPQ is used to schedule reads out to the memory controller and to track the requests.  Requests allocate into the RPQ soon after they enter the memory controller, and need credits for an entry in this buffer before being sent from the HA to the iMC. They deallocate after the CAS command has been issued to memoryunc_m_wpq_cycles_neuncore memoryWrite Pending Queue Not Emptyevent=0x2101Counts the number of cycles that the Write Pending Queue is not empty.  This can then be used to calculate the average queue occupancy (in conjunction with the WPQ Occupancy Accumulation count).  The WPQ is used to schedule writes out to the memory controller and to track the writes.  Requests allocate into the WPQ soon after they enter the memory controller, and need credits for an entry in this buffer before being sent from the HA to the iMC.  They deallocate after being issued to DRAM.  Write requests themselves are able to complete (from the perspective of the rest of the system) as soon as they have 'posted' to the iMC.  This is not to be confused with actually performing the write to DRAM.  Therefore, the average latency for this queue is actually not useful for deconstructing intermediate write latenciesunc_m_wpq_insertsuncore memoryWrite Pending Queue Allocationsevent=0x2001Counts the number of allocations into the Write Pending Queue.  This can then be used to calculate the average queuing latency (in conjunction with the WPQ occupancy count).  The WPQ is used to schedule writes out to the memory controller and to track the writes.  Requests allocate into the WPQ soon after they enter the memory controller, and need credits for an entry in this buffer before being sent from the HA to the iMC.  They deallocate after being issued to DRAM.  Write requests themselves are able to complete (from the perspective of the rest of the system) as soon as they have 'posted' to the iMCunc_m_wpq_occupancyuncore memoryWrite Pending Queue Occupancyevent=0x8101Accumulates the occupancies of the Write Pending Queue each cycle.  This can then be used to calculate both the average queue occupancy (in conjunction with the number of cycles not empty) and the average latency (in conjunction with the number of allocations).  The WPQ is used to schedule writes out to the memory controller and to track the writes.  Requests allocate into the WPQ soon after they enter the memory controller, and need credits for an entry in this buffer before being sent from the HA to the iMC.  They deallocate after being issued to DRAM.  Write requests themselves are able to complete (from the perspective of the rest of the system) as soon as they have 'posted' to the iMC.  This is not to be confused with actually performing the write to DRAM.  
Therefore, the average latency for this queue is actually not useful for deconstructing intermediate write latencies.  So, we provide filtering based on whether the request has posted or not.  By using the 'not posted' filter, we can track how long writes spent in the iMC before completions were sent to the HA.  The 'posted' filter, on the other hand, provides information about how much queueing is actually happening in the iMC for writes before they are actually issued to memory.  High average occupancies will generally coincide with high write major mode countsunc_p_core0_transition_cyclesuncore powerCore C State Transition Cyclesevent=301Number of cycles spent performing core C state transitions.  There is one event per coreunc_p_core1_transition_cyclesuncore powerCore C State Transition Cyclesevent=401Number of cycles spent performing core C state transitions.  There is one event per coreunc_p_core2_transition_cyclesuncore powerCore C State Transition Cyclesevent=501Number of cycles spent performing core C state transitions.  There is one event per coreunc_p_core3_transition_cyclesuncore powerCore C State Transition Cyclesevent=601Number of cycles spent performing core C state transitions.  There is one event per coreunc_p_core4_transition_cyclesuncore powerCore C State Transition Cyclesevent=701Number of cycles spent performing core C state transitions.  There is one event per coreunc_p_core5_transition_cyclesuncore powerCore C State Transition Cyclesevent=801Number of cycles spent performing core C state transitions.  There is one event per coreunc_p_core6_transition_cyclesuncore powerCore C State Transition Cyclesevent=901Number of cycles spent performing core C state transitions.  There is one event per coreunc_p_core7_transition_cyclesuncore powerCore C State Transition Cyclesevent=0xa01Number of cycles spent performing core C state transitions.  There is one event per coreunc_p_demotions_core0uncore powerCore C State Demotionsevent=0x1e01Counts the number of times when a configurable core had a C-state demotionunc_p_demotions_core1uncore powerCore C State Demotionsevent=0x1f01Counts the number of times when a configurable core had a C-state demotionunc_p_demotions_core2uncore powerCore C State Demotionsevent=0x2001Counts the number of times when a configurable core had a C-state demotionunc_p_demotions_core3uncore powerCore C State Demotionsevent=0x2101Counts the number of times when a configurable core had a C-state demotionunc_p_demotions_core4uncore powerCore C State Demotionsevent=0x2201Counts the number of times when a configurable core had a C-state demotionunc_p_demotions_core5uncore powerCore C State Demotionsevent=0x2301Counts the number of times when a configurable core had a C-state demotionunc_p_demotions_core6uncore powerCore C State Demotionsevent=0x2401Counts the number of times when a configurable core had a C-state demotionunc_p_demotions_core7uncore powerCore C State Demotionsevent=0x2501Counts the number of times when a configurable core had a C-state demotionunc_p_freq_min_io_p_cyclesuncore powerIO P Limit Strongest Lower Limit Cyclesevent=101Counts the number of cycles when IO P Limit is preventing us from dropping the frequency lower.  This algorithm monitors the needs of the IO subsystem on both local and remote sockets and will maintain a frequency high enough to maintain good IO BW.  
This is necessary when all the IA cores on a socket are idle but a user still would like to maintain high IO Bandwidthunc_p_freq_min_perf_p_cyclesuncore powerPerf P Limit Strongest Lower Limit Cyclesevent=201Counts the number of cycles when Perf P Limit is preventing us from dropping the frequency lower.  Perf P Limit is an algorithm that takes input from remote sockets when determining if a socket should drop its frequency down.  This is largely to minimize increases in snoop and remote read latenciesunc_p_freq_trans_cyclesuncore powerCycles spent changing Frequencyevent=001Counts the number of cycles when the system is changing frequency.  This cannot be filtered by thread ID.  One can also use it with the occupancy counter that monitors number of threads in C0 to estimate the performance impact that frequency transitions had on the systemunc_p_power_state_occupancy.cores_c0uncore powerNumber of cores in C0event=0x80,occ_sel=101This is an occupancy event that tracks the number of cores that are in C0.  It can be used by itself to get the average number of cores in C0, with thresholding to generate histograms, or with other PCU events and occupancy triggering to capture other detailsunc_p_power_state_occupancy.cores_c3uncore powerNumber of cores in C3event=0x80,occ_sel=201This is an occupancy event that tracks the number of cores that are in C3.  It can be used by itself to get the average number of cores in C3, with thresholding to generate histograms, or with other PCU events and occupancy triggering to capture other detailsunc_p_power_state_occupancy.cores_c6uncore powerNumber of cores in C6event=0x80,occ_sel=301This is an occupancy event that tracks the number of cores that are in C6.  It can be used by itself to get the average number of cores in C6, with thresholding to generate histograms, or with other PCU events and occupancy triggering to capture other detailsunc_p_total_transition_cyclesuncore powerTotal Core C State Transition Cyclesevent=0xb01Number of cycles spent performing core C state transitions across all coresdtlb_load_misses.miss_causes_a_walkvirtual memoryLoad misses in all DTLB levels that cause page walksevent=8,period=100003,umask=100dtlb_load_misses.stlb_hitvirtual memoryLoad operations that miss the first DTLB level but hit the second and do not cause page walksevent=8,period=100003,umask=0x1000This event counts load operations that miss the first DTLB level but hit the second and do not cause any page walks. 
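The cores-in-C0 occupancy event above accumulates the resident core count each cycle, so dividing by a cycle count over the same interval gives the average residency. A minimal Python sketch; the PCU cycle count used as the denominator is a hypothetical input, not one of the events listed here:

# Average cores resident in C0 (hypothetical counter values).
cores_c0_occupancy = 9_600_000  # unc_p_power_state_occupancy.cores_c0: summed per-cycle core count
pcu_cycles = 2_400_000          # hypothetical PCU clock count over the same interval
avg_cores_in_c0 = cores_c0_occupancy / pcu_cycles
print(f"average cores in C0: {avg_cores_in_c0:.2f}")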
The penalty in this case is approximately 7 cyclesdtlb_load_misses.walk_completedvirtual memoryLoad misses at all DTLB levels that cause completed page walksevent=8,period=100003,umask=200dtlb_load_misses.walk_durationvirtual memoryCycles when PMH is busy with page walksevent=8,period=2000003,umask=400This event counts cycles when the page miss handler (PMH) is servicing page walks caused by DTLB load missesdtlb_store_misses.miss_causes_a_walkvirtual memoryStore misses in all DTLB levels that cause page walksevent=0x49,period=100003,umask=100dtlb_store_misses.stlb_hitvirtual memoryStore operations that miss the first TLB level but hit the second and do not cause page walksevent=0x49,period=100003,umask=0x1000dtlb_store_misses.walk_completedvirtual memoryStore misses in all DTLB levels that cause completed page walksevent=0x49,period=100003,umask=200dtlb_store_misses.walk_durationvirtual memoryCycles when PMH is busy with page walksevent=0x49,period=2000003,umask=400itlb.itlb_flushvirtual memoryFlushing of the Instruction TLB (ITLB) pages, includes 4k/2M/4M pagesevent=0xae,period=100007,umask=100itlb_misses.miss_causes_a_walkvirtual memoryMisses at all ITLB levels that cause page walksevent=0x85,period=100003,umask=100itlb_misses.stlb_hitvirtual memoryOperations that miss the first ITLB level but hit the second and do not cause any page walksevent=0x85,period=100003,umask=0x1000itlb_misses.walk_completedvirtual memoryMisses in all ITLB levels that cause completed page walksevent=0x85,period=100003,umask=200itlb_misses.walk_durationvirtual memoryCycles when PMH is busy with page walksevent=0x85,period=2000003,umask=400This event counts cycles when the Page Miss Handler (PMH) is servicing page walks caused by ITLB missestlb_flush.dtlb_threadvirtual memoryDTLB flush attempts of the thread-specific entriesevent=0xbd,period=100007,umask=100tlb_flush.stlb_anyvirtual memorySTLB flush attemptsevent=0xbd,period=100007,umask=0x2000core_reject_l2q.allcacheCounts the number of MEC requests that were not accepted into the L2Q because of any L2 queue reject condition. There is no concept of at-ret here. It might include requests due to instructions in the speculative pathevent=0x31,period=20000300fetch_stall.icache_fill_pending_cyclescacheThis event counts the number of core cycles the fetch stalls because of an icache miss. 
This is a cumulative count of cycles the NIP stalled for all icache missesevent=0x86,period=200003,umask=400l2_prefetcher.alloc_xqcacheCounts the number of L2HWP allocated into XQ GPevent=0x3e,period=100007,umask=400l2_requests.misscacheCounts the number of L2 cache missesevent=0x2e,period=200003,umask=0x4100l2_requests.referencecacheCounts the total number of L2 cache referencesevent=0x2e,period=200003,umask=0x4f00l2_requests_reject.allcacheCounts the number of MEC requests from the L2Q that reference a cache line (cacheable requests) excluding SW prefetches filling only to L2 cache and L1 evictions (automatically excludes L2HWP, UC, WC) that were rejected - Multiple repeated rejects should be counted multiple timesevent=0x30,period=20000300mem_uops_retired.all_loadscacheCounts all the load micro-ops retiredevent=4,period=200003,umask=0x4000This event counts the number of load micro-ops retiredmem_uops_retired.all_storescacheCounts all the store micro-ops retiredevent=4,period=200003,umask=0x8000This event counts the number of store micro-ops retiredmem_uops_retired.hitmcacheCounts the loads retired that get the data from the other core in the same tile in M state (Precise Event)  Supports address when preciseevent=4,period=200003,umask=0x2000This event counts the number of load micro-ops retired that got data from another core's cache. (Precise Event)  Supports address when precisemem_uops_retired.l1_miss_loadscacheCounts the number of load micro-ops retired that miss in L1 D cacheevent=4,period=200003,umask=100This event counts the number of load micro-ops retired that miss in L1 Data cache. Note that prefetch misses will not be countedmem_uops_retired.l2_hit_loadscacheCounts the number of load micro-ops retired that hit in the L2 (Precise Event)  Supports address when preciseevent=4,period=200003,umask=200This event counts the number of load micro-uops retired that hit in the L2 (Precise Event)  Supports address when precisemem_uops_retired.l2_miss_loadscacheCounts the number of load micro-ops retired that miss in the L2 (Precise Event)  Supports address when preciseevent=4,period=100007,umask=400This event counts the number of load micro-ops retired that miss in the L2 (Precise Event)  Supports address when precisemem_uops_retired.utlb_miss_loadscacheCounts the number of load micro-ops retired that caused micro TLB missevent=4,period=200003,umask=0x1000offcore_responsecacheCounts the matrix events specified by MSR_OFFCORE_RESPxevent=0xb7,period=100007,umask=100offcore_response.any_code_rd.any_responsecacheCounts Demand code reads and prefetch code read requests  that account for any responseevent=0xb7,period=100007,umask=1,offcore_rsp=0x000001004400offcore_response.any_code_rd.l2_hit_far_tilecacheCounts Demand code reads and prefetch code read requests  that account for responses from snoop request hit with data forwarded from its Far(not in the same quadrant as the request)-other tile L2 in E/F/M state. Valid only in SNC4 Cluster modeevent=0xb7,period=100007,umask=1,offcore_rsp=0x180040004400offcore_response.any_code_rd.l2_hit_far_tile_e_fcacheCounts Demand code reads and prefetch code read requests  that account for responses from a snoop request hit with data forwarded from its Far(not in the same quadrant as the request)-other tile's L2 in E/F state. 
Valid only for SNC4 cluster modeevent=0xb7,period=100007,umask=1,offcore_rsp=0x080040004400offcore_response.any_code_rd.l2_hit_far_tile_mcacheCounts Demand code reads and prefetch code read requests  that account for responses from a snoop request hit with data forwarded from its Far(not in the same quadrant as the request)-other tile's L2 in M stateevent=0xb7,period=100007,umask=1,offcore_rsp=0x100040004400offcore_response.any_code_rd.l2_hit_near_tilecacheCounts Demand code reads and prefetch code read requests  that account for responses from snoop request hit with data forwarded from its Near-other tile L2 in E/F/M stateevent=0xb7,period=100007,umask=1,offcore_rsp=0x180018004400offcore_response.any_code_rd.l2_hit_near_tile_e_fcacheCounts Demand code reads and prefetch code read requests  that account for responses from a snoop request hit with data forwarded from its Near-other tile's L2 in E/F stateevent=0xb7,period=100007,umask=1,offcore_rsp=0x080008004400offcore_response.any_code_rd.l2_hit_near_tile_mcacheCounts Demand code reads and prefetch code read requests  that account for responses from a snoop request hit with data forwarded from its Near-other tile's L2 in M stateevent=0xb7,period=100007,umask=1,offcore_rsp=0x100008004400offcore_response.any_code_rd.l2_hit_this_tile_ecacheCounts Demand code reads and prefetch code read requests  that account for responses which hit its own tile's L2 with data in E stateevent=0xb7,period=100007,umask=1,offcore_rsp=0x000400004400offcore_response.any_code_rd.l2_hit_this_tile_fcacheCounts Demand code reads and prefetch code read requests  that account for responses which hit its own tile's L2 with data in F stateevent=0xb7,period=100007,umask=1,offcore_rsp=0x001000004400offcore_response.any_code_rd.l2_hit_this_tile_mcacheCounts Demand code reads and prefetch code read requests  that account for responses which hit its own tile's L2 with data in M stateevent=0xb7,period=100007,umask=1,offcore_rsp=0x000200004400offcore_response.any_code_rd.l2_hit_this_tile_scacheCounts Demand code reads and prefetch code read requests  that account for responses which hit its own tile's L2 with data in S stateevent=0xb7,period=100007,umask=1,offcore_rsp=0x000800004400offcore_response.any_code_rd.outstandingcacheCounts Demand code reads and prefetch code read requests  that are outstanding, per weighted cycle, from the time of the request to when any response is received. The outstanding response should be programmed only on PMC0event=0xb7,period=100007,umask=1,offcore_rsp=0x400000004400offcore_response.any_data_rd.any_responsecacheCounts Demand cacheable data and L1 prefetch data read requests  that account for any responseevent=0xb7,period=100007,umask=1,offcore_rsp=0x000001309100offcore_response.any_data_rd.l2_hit_far_tilecacheCounts Demand cacheable data and L1 prefetch data read requests  that account for responses from snoop request hit with data forwarded from its Far(not in the same quadrant as the request)-other tile L2 in E/F/M state. Valid only in SNC4 Cluster modeevent=0xb7,period=100007,umask=1,offcore_rsp=0x180040309100offcore_response.any_data_rd.l2_hit_far_tile_e_fcacheCounts Demand cacheable data and L1 prefetch data read requests  that account for responses from a snoop request hit with data forwarded from its Far(not in the same quadrant as the request)-other tile's L2 in E/F state. 
Valid only for SNC4 cluster modeevent=0xb7,period=100007,umask=1,offcore_rsp=0x080040309100offcore_response.any_data_rd.l2_hit_far_tile_mcacheCounts Demand cacheable data and L1 prefetch data read requests  that account for responses from a snoop request hit with data forwarded from its Far(not in the same quadrant as the request)-other tile's L2 in M stateevent=0xb7,period=100007,umask=1,offcore_rsp=0x100040309100offcore_response.any_data_rd.l2_hit_near_tilecacheCounts Demand cacheable data and L1 prefetch data read requests  that account for responses from snoop request hit with data forwarded from its Near-other tile L2 in E/F/M stateevent=0xb7,period=100007,umask=1,offcore_rsp=0x180018309100offcore_response.any_data_rd.l2_hit_near_tile_e_fcacheCounts Demand cacheable data and L1 prefetch data read requests  that account for responses from a snoop request hit with data forwarded from its Near-other tile's L2 in E/F stateevent=0xb7,period=100007,umask=1,offcore_rsp=0x080008309100offcore_response.any_data_rd.l2_hit_near_tile_mcacheCounts Demand cacheable data and L1 prefetch data read requests  that account for responses from a snoop request hit with data forwarded from its Near-other tile's L2 in M stateevent=0xb7,period=100007,umask=1,offcore_rsp=0x100008309100offcore_response.any_data_rd.l2_hit_this_tile_ecacheCounts Demand cacheable data and L1 prefetch data read requests  that account for responses which hit its own tile's L2 with data in E stateevent=0xb7,period=100007,umask=1,offcore_rsp=0x000400309100offcore_response.any_data_rd.l2_hit_this_tile_fcacheCounts Demand cacheable data and L1 prefetch data read requests  that account for responses which hit its own tile's L2 with data in F stateevent=0xb7,period=100007,umask=1,offcore_rsp=0x001000309100offcore_response.any_data_rd.l2_hit_this_tile_mcacheCounts Demand cacheable data and L1 prefetch data read requests  that account for responses which hit its own tile's L2 with data in M stateevent=0xb7,period=100007,umask=1,offcore_rsp=0x000200309100offcore_response.any_data_rd.l2_hit_this_tile_scacheCounts Demand cacheable data and L1 prefetch data read requests  that account for responses which hit its own tile's L2 with data in S stateevent=0xb7,period=100007,umask=1,offcore_rsp=0x000800309100offcore_response.any_data_rd.outstandingcacheCounts Demand cacheable data and L1 prefetch data read requests  that are outstanding, per weighted cycle, from the time of the request to when any response is received. The outstanding response should be programmed only on PMC0event=0xb7,period=100007,umask=1,offcore_rsp=0x400000309100offcore_response.any_pf_l2.any_responsecacheCounts any Prefetch requests that account for any responseevent=0xb7,period=100007,umask=1,offcore_rsp=0x000001007000offcore_response.any_pf_l2.l2_hit_far_tilecacheCounts any Prefetch requests that account for responses from snoop request hit with data forwarded from its Far(not in the same quadrant as the request)-other tile L2 in E/F/M state. Valid only in SNC4 Cluster modeevent=0xb7,period=100007,umask=1,offcore_rsp=0x180040007000offcore_response.any_pf_l2.l2_hit_far_tile_e_fcacheCounts any Prefetch requests that account for responses from a snoop request hit with data forwarded from its Far(not in the same quadrant as the request)-other tile's L2 in E/F state. 
Valid only for SNC4 cluster modeevent=0xb7,period=100007,umask=1,offcore_rsp=0x080040007000offcore_response.any_pf_l2.l2_hit_far_tile_mcacheCounts any Prefetch requests that accounts for responses from a snoop request hit with data forwarded from its Far(not in the same quadrant as the request)-other tile's L2 in M stateevent=0xb7,period=100007,umask=1,offcore_rsp=0x100040007000offcore_response.any_pf_l2.l2_hit_near_tilecacheCounts any Prefetch requests that accounts for responses from snoop request hit with data forwarded from its Near-other tile L2 in E/F/M stateevent=0xb7,period=100007,umask=1,offcore_rsp=0x180018007000offcore_response.any_pf_l2.l2_hit_near_tile_e_fcacheCounts any Prefetch requests that accounts for responses from a snoop request hit with data forwarded from its Near-other tile's L2 in E/F stateevent=0xb7,period=100007,umask=1,offcore_rsp=0x080008007000offcore_response.any_pf_l2.l2_hit_near_tile_mcacheCounts any Prefetch requests that accounts for responses from a snoop request hit with data forwarded from its Near-other tile's L2 in M stateevent=0xb7,period=100007,umask=1,offcore_rsp=0x100008007000offcore_response.any_pf_l2.l2_hit_this_tile_ecacheCounts any Prefetch requests that accounts for responses which hit its own tile's L2 with data in E stateevent=0xb7,period=100007,umask=1,offcore_rsp=0x000400007000offcore_response.any_pf_l2.l2_hit_this_tile_fcacheCounts any Prefetch requests that accounts for responses which hit its own tile's L2 with data in F stateevent=0xb7,period=100007,umask=1,offcore_rsp=0x001000007000offcore_response.any_pf_l2.l2_hit_this_tile_mcacheCounts any Prefetch requests that accounts for responses which hit its own tile's L2 with data in M stateevent=0xb7,period=100007,umask=1,offcore_rsp=0x000200007000offcore_response.any_pf_l2.outstandingcacheCounts any Prefetch requests that are outstanding, per weighted cycle, from the time of the request to when any response is received. The outstanding response should be programmed only on PMC0event=0xb7,period=100007,umask=1,offcore_rsp=0x400000007000offcore_response.any_read.any_responsecacheCounts any Read request  that accounts for any responseevent=0xb7,period=100007,umask=1,offcore_rsp=0x00000132f700offcore_response.any_read.l2_hit_far_tilecacheCounts any Read request  that accounts for responses from snoop request hit with data forwarded from it Far(not in the same quadrant as the request)-other tile L2 in E/F/M state. Valid only in SNC4 Cluster modeevent=0xb7,period=100007,umask=1,offcore_rsp=0x18004032f700offcore_response.any_read.l2_hit_far_tile_e_fcacheCounts any Read request  that accounts for responses from a snoop request hit with data forwarded from its Far(not in the same quadrant as the request)-other tile's L2 in E/F state. 
Valid only for SNC4 cluster modeevent=0xb7,period=100007,umask=1,offcore_rsp=0x08004032f700offcore_response.any_read.l2_hit_far_tile_mcacheCounts any Read request  that accounts for responses from a snoop request hit with data forwarded from its Far(not in the same quadrant as the request)-other tile's L2 in M stateevent=0xb7,period=100007,umask=1,offcore_rsp=0x10004032f700offcore_response.any_read.l2_hit_near_tilecacheCounts any Read request  that accounts for responses from snoop request hit with data forwarded from its Near-other tile L2 in E/F/M stateevent=0xb7,period=100007,umask=1,offcore_rsp=0x18001832f700offcore_response.any_read.l2_hit_near_tile_e_fcacheCounts any Read request  that accounts for responses from a snoop request hit with data forwarded from its Near-other tile's L2 in E/F stateevent=0xb7,period=100007,umask=1,offcore_rsp=0x08000832f700offcore_response.any_read.l2_hit_near_tile_mcacheCounts any Read request  that accounts for responses from a snoop request hit with data forwarded from its Near-other tile's L2 in M stateevent=0xb7,period=100007,umask=1,offcore_rsp=0x10000832f700offcore_response.any_read.l2_hit_this_tile_ecacheCounts any Read request  that accounts for responses which hit its own tile's L2 with data in E stateevent=0xb7,period=100007,umask=1,offcore_rsp=0x00040032f700offcore_response.any_read.l2_hit_this_tile_fcacheCounts any Read request  that accounts for responses which hit its own tile's L2 with data in F stateevent=0xb7,period=100007,umask=1,offcore_rsp=0x00100032f700offcore_response.any_read.l2_hit_this_tile_mcacheCounts any Read request  that accounts for responses which hit its own tile's L2 with data in M stateevent=0xb7,period=100007,umask=1,offcore_rsp=0x00020032f700offcore_response.any_read.l2_hit_this_tile_scacheCounts any Read request  that accounts for responses which hit its own tile's L2 with data in S stateevent=0xb7,period=100007,umask=1,offcore_rsp=0x00080032f700offcore_response.any_read.outstandingcacheCounts any Read request  that are outstanding, per weighted cycle, from the time of the request to when any response is received. The outstanding response should be programmed only on PMC0event=0xb7,period=100007,umask=1,offcore_rsp=0x40000032f700offcore_response.any_request.any_responsecacheCounts any request that accounts for any responseevent=0xb7,period=100007,umask=1,offcore_rsp=0x000001800000offcore_response.any_request.l2_hit_far_tilecacheCounts any request that accounts for responses from snoop request hit with data forwarded from it Far(not in the same quadrant as the request)-other tile L2 in E/F/M state. Valid only in SNC4 Cluster modeevent=0xb7,period=100007,umask=1,offcore_rsp=0x180040800000offcore_response.any_request.l2_hit_far_tile_e_fcacheCounts any request that accounts for responses from a snoop request hit with data forwarded from its Far(not in the same quadrant as the request)-other tile's L2 in E/F state. 
Valid only for SNC4 cluster modeevent=0xb7,period=100007,umask=1,offcore_rsp=0x080040800000offcore_response.any_request.l2_hit_far_tile_mcacheCounts any request that accounts for responses from a snoop request hit with data forwarded from its Far(not in the same quadrant as the request)-other tile's L2 in M stateevent=0xb7,period=100007,umask=1,offcore_rsp=0x100040800000offcore_response.any_request.l2_hit_near_tilecacheCounts any request that accounts for responses from snoop request hit with data forwarded from its Near-other tile L2 in E/F/M stateevent=0xb7,period=100007,umask=1,offcore_rsp=0x180018800000offcore_response.any_request.l2_hit_near_tile_e_fcacheCounts any request that accounts for responses from a snoop request hit with data forwarded from its Near-other tile's L2 in E/F stateevent=0xb7,period=100007,umask=1,offcore_rsp=0x080008800000offcore_response.any_request.l2_hit_near_tile_mcacheCounts any request that accounts for responses from a snoop request hit with data forwarded from its Near-other tile's L2 in M stateevent=0xb7,period=100007,umask=1,offcore_rsp=0x100008800000offcore_response.any_request.l2_hit_this_tile_ecacheCounts any request that accounts for responses which hit its own tile's L2 with data in E stateevent=0xb7,period=100007,umask=1,offcore_rsp=0x000400800000offcore_response.any_request.l2_hit_this_tile_fcacheCounts any request that accounts for responses which hit its own tile's L2 with data in F stateevent=0xb7,period=100007,umask=1,offcore_rsp=0x001000800000offcore_response.any_request.l2_hit_this_tile_mcacheCounts any request that accounts for responses which hit its own tile's L2 with data in M stateevent=0xb7,period=100007,umask=1,offcore_rsp=0x000200800000offcore_response.any_request.l2_hit_this_tile_scacheCounts any request that accounts for responses which hit its own tile's L2 with data in S stateevent=0xb7,period=100007,umask=1,offcore_rsp=0x000800800000offcore_response.any_request.l2_misscacheAccounts for responses which miss its own tile's L2event=0xb7,period=100007,umask=1,offcore_rsp=0x18001981F800offcore_response.any_request.outstandingcacheCounts any request that are outstanding, per weighted cycle, from the time of the request to when any response is received. The outstanding response should be programmed only on PMC0event=0xb7,period=100007,umask=1,offcore_rsp=0x400000800000offcore_response.any_rfo.any_responsecacheCounts Demand cacheable data write requests  that accounts for any responseevent=0xb7,period=100007,umask=1,offcore_rsp=0x000001002200offcore_response.any_rfo.l2_hit_far_tilecacheCounts Demand cacheable data write requests  that accounts for responses from snoop request hit with data forwarded from it Far(not in the same quadrant as the request)-other tile L2 in E/F/M state. Valid only in SNC4 Cluster modeevent=0xb7,period=100007,umask=1,offcore_rsp=0x180040002200offcore_response.any_rfo.l2_hit_far_tile_e_fcacheCounts Demand cacheable data write requests  that accounts for responses from a snoop request hit with data forwarded from its Far(not in the same quadrant as the request)-other tile's L2 in E/F state. 
Valid only for SNC4 cluster modeevent=0xb7,period=100007,umask=1,offcore_rsp=0x080040002200offcore_response.any_rfo.l2_hit_far_tile_mcacheCounts Demand cacheable data write requests  that accounts for responses from a snoop request hit with data forwarded from its Far(not in the same quadrant as the request)-other tile's L2 in M stateevent=0xb7,period=100007,umask=1,offcore_rsp=0x100040002200offcore_response.any_rfo.l2_hit_near_tilecacheCounts Demand cacheable data write requests  that accounts for responses from snoop request hit with data forwarded from its Near-other tile L2 in E/F/M stateevent=0xb7,period=100007,umask=1,offcore_rsp=0x180018002200offcore_response.any_rfo.l2_hit_near_tile_e_fcacheCounts Demand cacheable data write requests  that accounts for responses from a snoop request hit with data forwarded from its Near-other tile's L2 in E/F stateevent=0xb7,period=100007,umask=1,offcore_rsp=0x080008002200offcore_response.any_rfo.l2_hit_near_tile_mcacheCounts Demand cacheable data write requests  that accounts for responses from a snoop request hit with data forwarded from its Near-other tile's L2 in M stateevent=0xb7,period=100007,umask=1,offcore_rsp=0x100008002200offcore_response.any_rfo.l2_hit_this_tile_ecacheCounts Demand cacheable data write requests  that accounts for responses which hit its own tile's L2 with data in E stateevent=0xb7,period=100007,umask=1,offcore_rsp=0x000400002200offcore_response.any_rfo.l2_hit_this_tile_fcacheCounts Demand cacheable data write requests  that accounts for responses which hit its own tile's L2 with data in F stateevent=0xb7,period=100007,umask=1,offcore_rsp=0x001000002200offcore_response.any_rfo.l2_hit_this_tile_mcacheCounts Demand cacheable data write requests  that accounts for responses which hit its own tile's L2 with data in M stateevent=0xb7,period=100007,umask=1,offcore_rsp=0x000200002200offcore_response.any_rfo.l2_hit_this_tile_scacheCounts Demand cacheable data write requests  that accounts for responses which hit its own tile's L2 with data in S stateevent=0xb7,period=100007,umask=1,offcore_rsp=0x000800002200offcore_response.any_rfo.outstandingcacheCounts Demand cacheable data write requests  that are outstanding, per weighted cycle, from the time of the request to when any response is received. The outstanding response should be programmed only on PMC0event=0xb7,period=100007,umask=1,offcore_rsp=0x400000002200offcore_response.bus_locks.any_responsecacheCounts Bus locks and split lock requests that accounts for any responseevent=0xb7,period=100007,umask=1,offcore_rsp=0x000001040000offcore_response.bus_locks.l2_hit_far_tilecacheCounts Bus locks and split lock requests that accounts for responses from snoop request hit with data forwarded from it Far(not in the same quadrant as the request)-other tile L2 in E/F/M state. Valid only in SNC4 Cluster modeevent=0xb7,period=100007,umask=1,offcore_rsp=0x180040040000offcore_response.bus_locks.l2_hit_far_tile_e_fcacheCounts Bus locks and split lock requests that accounts for responses from a snoop request hit with data forwarded from its Far(not in the same quadrant as the request)-other tile's L2 in E/F state. 
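Since each pair composes mechanically, a small helper can emit the perf event spec for any request/response combination. This is a minimal sketch, assuming a perf build whose cpu PMU exposes an offcore_rsp format term (visible under /sys/bus/event_source/devices/cpu/format/); the helper name and the subset of masks shown are illustrative, not part of the table:

    # Sketch: compose a raw perf event spec for an offcore_response pair.
    # Mask values are copied from the tables above.
    REQUEST = {
        "any_data_rd": 0x309100,
        "demand_data_rd": 0x000100,
        "pf_software": 0x100000,
    }
    RESPONSE = {
        "any_response": 0x000001000000,
        "l2_hit_this_tile_e": 0x000400000000,
        "l2_hit_far_tile_m": 0x100040000000,
        "outstanding": 0x400000000000,
    }

    def offcore_event(request: str, response: str) -> str:
        """Return a spec such as cpu/event=0xb7,umask=0x01,offcore_rsp=0x.../."""
        rsp = REQUEST[request] | RESPONSE[response]
        return f"cpu/event=0xb7,umask=0x01,offcore_rsp={rsp:#014x}/"

    if __name__ == "__main__":
        # Reproduces offcore_response.any_data_rd.l2_hit_far_tile_m above
        # (offcore_rsp=0x100040309100).
        print(offcore_event("any_data_rd", "l2_hit_far_tile_m"))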
Pairs encoded for the demand, lock and streaming-store requests:
- offcore_response.bus_locks.*: all twelve response types
- offcore_response.demand_code_rd.*: all twelve response types
- offcore_response.demand_data_rd.*: all except l2_hit_far_tile and l2_hit_near_tile
- offcore_response.demand_rfo.*: all twelve response types
- offcore_response.full_streaming_stores.*: any_response only
- offcore_response.partial_reads.*: all twelve response types
- offcore_response.partial_streaming_stores.*: any_response only
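The outstanding response type accumulates one count per weighted cycle while a request is in flight, so dividing an outstanding count by the matching any_response count approximates the average response latency in core cycles. The sketch below is an assumption about how one would drive these counters, not part of the table: it uses the demand_data_rd encodings above, expects a perf binary whose `perf stat -x,` CSV layout puts the value first and the event name third (this varies across perf versions), and needs sufficient privileges. Note also the hardware caveat that the outstanding encoding should be programmed only on PMC0, which a given perf version may or may not honor automatically.

    # Sketch: average offcore response latency for demand data reads,
    # estimated as outstanding / any_response (encodings from the table).
    import csv, subprocess, sys

    OUTSTANDING = "cpu/event=0xb7,umask=0x01,offcore_rsp=0x400000000100,name=dread_out/"
    ANY_RESP    = "cpu/event=0xb7,umask=0x01,offcore_rsp=0x000001000100,name=dread_any/"

    def measure(cmd):
        # perf stat -x, prints one CSV record per event on stderr.
        res = subprocess.run(
            ["perf", "stat", "-x", ",", "-e", OUTSTANDING + "," + ANY_RESP, *cmd],
            stderr=subprocess.PIPE, text=True)
        counts = {}
        for row in csv.reader(res.stderr.splitlines()):
            if len(row) > 2 and row[0].strip().isdigit():
                counts[row[2]] = int(row[0])
        return counts

    if __name__ == "__main__":
        c = measure(sys.argv[1:] or ["sleep", "1"])
        if c.get("dread_any"):
            print("avg response latency:",
                  c.get("dread_out", 0) / c["dread_any"], "core cycles")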
Pairs encoded for the partial-write, prefetch and UC requests:
- offcore_response.partial_writes.*: all except outstanding
- offcore_response.pf_l1_data_rd.*: all twelve response types
- offcore_response.pf_l2_code_rd.*: all except l2_hit_this_tile_m and l2_hit_this_tile_s
- offcore_response.pf_l2_rfo.*: all except l2_hit_far_tile and outstanding
- offcore_response.pf_software.*: all twelve response types
- offcore_response.streaming_stores.*: any_response only
- offcore_response.uc_code_reads.*: all except l2_hit_far_tile
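Several coarse masks above are plain unions of finer ones, which makes a cheap consistency check when composing offcore_rsp values by hand. The identities asserted below were read off this table and are not claimed as a general rule:

    # Sketch: sanity-check that some coarse masks are unions of finer ones.
    STREAMING_STORES         = 0x480000        # all streaming stores (request)
    FULL_STREAMING_STORES    = 0x080000
    PARTIAL_STREAMING_STORES = 0x400000

    L2_HIT_FAR_TILE     = 0x180040000000       # E/F/M (response)
    L2_HIT_FAR_TILE_E_F = 0x080040000000
    L2_HIT_FAR_TILE_M   = 0x100040000000

    assert STREAMING_STORES == FULL_STREAMING_STORES | PARTIAL_STREAMING_STORES
    assert L2_HIT_FAR_TILE == L2_HIT_FAR_TILE_E_F | L2_HIT_FAR_TILE_M
    # Note: l2_hit_near_tile (0x180018000000) is NOT the plain union of its
    # _e_f and _m variants (0x180008000000); it carries one extra bit.
    print("mask identities hold")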
Topic "floating point":
- machine_clears.fp_assist (event=0xc3, umask=0x04, period=200003): counts the number of floating point operations retired that required microcode assists, i.e. the number of times the pipeline stalled because an FP operation needed an assist.
- uops_retired.packed_simd (event=0xc2, umask=0x40, period=200003): counts the number of packed SSE, AVX, AVX2, AVX-512 micro-ops retired (both floating point and integer), except loads (memory-to-register mov-type micro-ops) and packed byte and word multiplies. The length of the packed operation (128, 256 or 512 bits) is not taken into account when updating the counter; all count the same (+1). Mask (k) registers are ignored: a micro-op operating with a mask that enables only one element, or even zero elements, still triggers this counter (+1). This event is defined at the micro-op level, not the instruction level; most instructions are implemented with one micro-op, but not all.
- uops_retired.scalar_simd (event=0xc2, umask=0x20, period=200003): counts the number of scalar SSE, AVX, AVX2, AVX-512 micro-ops retired, except loads (memory-to-register mov-type micro-ops), division and sqrt. This event too is defined at the micro-op level, not the instruction level.
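uops_retired.packed_simd and uops_retired.scalar_simd share event code 0xc2 and differ only in umask (0x40 versus 0x20), so counting both gives a quick packed-versus-scalar picture of a workload. A minimal sketch using the raw encodings above, assuming a working perf; the workload argument is a placeholder:

    # Sketch: compare packed vs scalar SIMD uops for a workload
    # (event=0xc2; umask 0x40 packed, 0x20 scalar, per the entries above).
    import subprocess, sys

    EVENTS = ("cpu/event=0xc2,umask=0x40,name=packed_simd/,"
              "cpu/event=0xc2,umask=0x20,name=scalar_simd/")

    workload = sys.argv[1:] or ["sleep", "1"]  # placeholder workload
    subprocess.run(["perf", "stat", "-e", EVENTS, *workload], check=False)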
Topic "frontend":
- baclears.all (event=0xe6, umask=0x01, period=200003): counts the number of times the front end resteers for any branch as a result of another branch-handling mechanism in the front end.
- baclears.cond (event=0xe6, umask=0x10, period=200003): the same, for conditional branches.
- baclears.return (event=0xe6, umask=0x08, period=200003): the same, for RET branches.
- icache.accesses (event=0x80, umask=0x03, period=200003): counts all instruction fetches, including uncacheable fetches.
- icache.hit (event=0x80, umask=0x01, period=200003): counts all instruction fetches that hit the instruction cache.
- icache.misses (event=0x80, umask=0x02, period=200003): counts all instruction fetches that miss the instruction cache or produce memory requests; an instruction fetch miss is counted only once, not once for every cycle it is outstanding. Together with icache.hit this yields a hit rate (see the sketch after this list).
- ms_decoded.ms_entry (event=0xe7, umask=0x01, period=200003): counts the number of times the MSROM starts a flow of uops.
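Because icache.hit (umask=0x01) and icache.misses (umask=0x02) partition icache.accesses (umask=0x03), a front-end hit rate falls out of two counts. A sketch under the same assumptions as the latency example (perf available, `-x,` CSV layout with value first and event name third, sufficient privileges):

    # Sketch: instruction-cache hit rate from the icache events above
    # (event=0x80; umask 0x01 hit, 0x02 miss).
    import csv, subprocess, sys

    EVENTS = ("cpu/event=0x80,umask=0x01,name=ic_hit/,"
              "cpu/event=0x80,umask=0x02,name=ic_miss/")

    res = subprocess.run(["perf", "stat", "-x", ",", "-e", EVENTS,
                          *(sys.argv[1:] or ["sleep", "1"])],
                         stderr=subprocess.PIPE, text=True)
    counts = {row[2]: int(row[0]) for row in csv.reader(res.stderr.splitlines())
              if len(row) > 2 and row[0].strip().isdigit()}
    total = counts.get("ic_hit", 0) + counts.get("ic_miss", 0)
    if total:
        print(f"icache hit rate: {counts.get('ic_hit', 0) / total:.2%}")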
Topic "memory":
- machine_clears.memory_ordering (event=0xc3, umask=0x02, period=200003): counts the number of times the machine clears due to memory ordering hazards.

The remaining offcore_response.* events select memory-side responses. They follow the same composition rule as above (event=0xb7, umask=0x01, period=100007; offcore_rsp = request mask OR response mask) with these response masks:
- ddr (0x018180000000): responses from DDR (local and far)
- ddr_far (0x010100000000): data responses from DRAM Far
- ddr_near (0x008080000000): data responses from DRAM Local
- mcdram (0x018060000000): responses from MCDRAM (local and far)
- mcdram_far (0x010040000000): data responses from MCDRAM Far or Other tile L2 hit far
- mcdram_near (0x008020000000): data responses from MCDRAM Local
- non_dram (0x200002000000): responses from any NON_DRAM system address, including MMIO transactions

Memory-side pairs encoded for the any_*, lock and demand-code requests:
- offcore_response.any_code_rd.*: ddr, ddr_far, ddr_near, mcdram, mcdram_far, mcdram_near (all six DDR/MCDRAM response types)
- offcore_response.any_data_rd.*: all six DDR/MCDRAM response types
- offcore_response.any_pf_l2.*: all except ddr
- offcore_response.any_read.*: all six
- offcore_response.any_request.*: all six
- offcore_response.any_rfo.*: all six
- offcore_response.bus_locks.*: all six
- offcore_response.demand_code_rd.*: all six
Request-type suffixes (offcore_rsp bits 0-23), with the response suffixes that appear for each:

  .any_data_rd     0x309100  Demand cacheable data and L1 prefetch data read requests (ddr_near, mcdram, mcdram_far, mcdram_near)
  .any_pf_l2       0x007000  any Prefetch requests (ddr_far, ddr_near, mcdram, mcdram_far, mcdram_near)
  .any_read        0x32f700  any Read request (ddr, ddr_far, ddr_near, mcdram, mcdram_far, mcdram_near)
  .any_request     0x800000  any request (ddr, ddr_far, ddr_near, mcdram, mcdram_far, mcdram_near)
  .any_rfo         0x002200  Demand cacheable data write requests (ddr, ddr_far, ddr_near, mcdram, mcdram_far, mcdram_near)
  .bus_locks       0x040000  Bus locks and split lock requests (ddr, ddr_far, ddr_near, mcdram, mcdram_far, mcdram_near)
  .demand_code_rd  0x000400  demand code reads and prefetch code reads (ddr, ddr_far, ddr_near, mcdram, mcdram_far, mcdram_near)
  .demand_data_rd  0x000100  demand cacheable data and L1 prefetch data reads (ddr, ddr_far, ddr_near, mcdram, mcdram_far, mcdram_near)
  .demand_rfo      0x000200  Demand cacheable data writes (ddr, ddr_far, ddr_near, mcdram, mcdram_far, mcdram_near)
  .partial_reads   0x008000  Partial reads (UC or WC; valid only for the Outstanding response type) (ddr, ddr_far, ddr_near, mcdram, mcdram_far, mcdram_near, non_dram)
  .partial_writes  0x010000  Partial writes (UC, WT, or WP; should be programmed on PMC1) (ddr_far, ddr_near, mcdram, mcdram_far, mcdram_near)
  .pf_l1_data_rd   0x200000  L1 data HW prefetches (ddr, ddr_far, ddr_near, mcdram_far, mcdram_near)
  .pf_l2_code_rd   0x004000  L2 code HW prefetches (ddr, ddr_far, ddr_near, mcdram_far, mcdram_near)
  .pf_l2_rfo       0x002000  L2 data RFO prefetches (includes the PREFETCHW instruction) (ddr, ddr_far, ddr_near, mcdram, mcdram_far, mcdram_near, non_dram)
  .pf_software     0x100000  Software Prefetches (ddr, ddr_far, ddr_near, mcdram, mcdram_far, mcdram_near)
  .uc_code_reads   0x020000  UC code reads (valid only for the Outstanding response type) (ddr, ddr_far, ddr_near, mcdram, mcdram_far, mcdram_near)
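These raw encodings can be counted directly with the perf_event_open(2) syscall. A minimal sketch, assuming the usual x86 raw layout (event in config bits 0-7, umask in bits 8-15) and that the offcore_rsp value travels in attr.config1, as the perf tool does for offcore response events; the target here is offcore_response.any_read.ddr from the tables above:

#define _GNU_SOURCE
#include <linux/perf_event.h>
#include <sys/syscall.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <stdint.h>
#include <stdio.h>

static long perf_event_open(struct perf_event_attr *attr, pid_t pid,
                            int cpu, int group_fd, unsigned long flags)
{
    return syscall(SYS_perf_event_open, attr, pid, cpu, group_fd, flags);
}

int main(void)
{
    struct perf_event_attr attr = {0};
    attr.type = PERF_TYPE_RAW;
    attr.size = sizeof(attr);
    attr.config  = (0x01u << 8) | 0xb7;   /* umask=1, event=0xb7 */
    attr.config1 = 0x01818032f700ULL;     /* offcore_response.any_read.ddr */
    attr.disabled = 1;
    attr.exclude_kernel = 1;

    int fd = (int)perf_event_open(&attr, 0, -1, -1, 0);
    if (fd < 0) { perror("perf_event_open"); return 1; }

    ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
    /* ... run the workload being measured here ... */
    ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);

    uint64_t count = 0;
    if (read(fd, &count, sizeof(count)) == sizeof(count))
        printf("any_read.ddr responses: %llu\n", (unsigned long long)count);
    close(fd);
    return 0;
}

On a machine that actually implements these events, the equivalent command line would be along the lines of perf stat -e cpu/event=0xb7,umask=0x1,offcore_rsp=0x01818032f700/, assuming the PMU exposes an offcore_rsp format term.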
Core pipeline events (topic: pipeline)

Retired branch instructions, event=0xc4, period=200003; all are Precise Events:

  br_inst_retired.all_branches    (no umask)   all branch instructions retired
  br_inst_retired.call            umask=0xf9   near CALL branch instructions retired
  br_inst_retired.far_branch      umask=0xbf   far branch instructions retired
  br_inst_retired.ind_call        umask=0xfb   near indirect CALL branch instructions retired
  br_inst_retired.jcc             umask=0x7e   branches retired that were conditional jumps
  br_inst_retired.non_return_ind  umask=0xeb   branches retired that were near indirect CALLs or near indirect JMPs
  br_inst_retired.rel_call        umask=0xfd   near relative CALL branch instructions retired
  br_inst_retired.return          umask=0xf7   near RET branch instructions retired
  br_inst_retired.taken_jcc       umask=0xfe   conditional jumps retired that were predicted taken

Mispredicted branch instructions, event=0xc5, period=200003 (Precise Events): br_misp_retired.* carries the same suffixes and umasks as br_inst_retired.* above, each counting the mispredicted subset of the corresponding branch class.

Clock cycles and instructions retired:

  cpu_clk_unhalted.ref       event=0x0,umask=0x03,period=2000003  Counts the number of unhalted reference clock cycles.
  cpu_clk_unhalted.thread    event=0x3c,period=2000003  Fixed Counter: counts the number of unhalted core clock cycles, i.e. core cycles while the thread is not in a halt state (the thread enters the halt state when it runs the HLT instruction). This event is a component in many key event ratios. The core frequency may change due to transitions associated with Enhanced Intel SpeedStep Technology or TM2, so the event may have a changing ratio with regard to time; when the core frequency is constant, it can approximate elapsed time while the core was not halted. Counted on a dedicated fixed counter.
  cpu_clk_unhalted.thread_p  event=0x3c,period=2000003  Counts the number of unhalted core clock cycles.
  inst_retired.any           event=0xc0,period=2000003  Fixed Counter: counts the number of instructions retired. An instruction consisting of multiple micro-ops counts exactly once, as its last micro-op retires; counting continues during interrupt service routines caused by hardware interrupts, faults, or traps.
  inst_retired.any_p         event=0xc0,period=2000003  Counts the total number of instructions retired.
  inst_retired.any_ps        event=0xc0,period=2000003  Counts the number of instructions retired (Precise Event).

Divider:

  cycles_div_busy.all  event=0xcd,period=2000003,umask=1  Counts core cycles when the divider is busy; this does not imply a stall waiting for the divider. More specifically, it counts cycles when the divide unit cannot accept a new divide uop because it is busy processing a previously dispatched one, irrespective of whether another divide uop is waiting to enter the divide unit (from the RS). It covers integer divides, x87 divides, divss, divsd, sqrtss, and sqrtsd, and does not count vector divides.

Machine clears, event=0xc3, period=200003:

  machine_clears.all  umask=8  all machine clears
  machine_clears.smc  umask=1  machine clears due to the program modifying data within 1K of a recently fetched code page

Allocation stalls, event=0xca, period=200003 (core cycles in which no micro-ops are allocated):

  no_alloc_cycles.all            umask=0x7f  no micro-ops allocated for any reason
  no_alloc_cycles.mispredicts    umask=4     the alloc pipe is stalled waiting for a mispredicted branch to retire
  no_alloc_cycles.not_delivered  umask=0x90  the instruction queue is empty and no other condition blocks allocation; the alloc pipe is stalled waiting for instructions to be fetched
  no_alloc_cycles.rat_stall      umask=0x20  a RAT stall (caused by the reservation station being full) is asserted
  no_alloc_cycles.rob_full       umask=1     the ROB is full

Recycle queue, event=3, period=200003 (retired loads and stores pushed into the recycle queue):

  recycleq.any_ld                umask=0x40  any retired load pushed into the recycle queue for any reason
  recycleq.any_st                umask=0x80  any retired store pushed into the recycle queue for any reason
  recycleq.ld_block_st_forward   umask=1     retired loads blocked because their address partially overlaps a store, i.e. prohibited from receiving forwarded data from a previous store because of an address mismatch (Precise Event; supports address when precise)
  recycleq.ld_block_std_notready umask=2     retired loads blocked because their address overlaps a store whose data is not ready
  recycleq.st_splits             umask=4     retired stores that split a cache line boundary; each split is counted only once
  recycleq.ld_splits             umask=8     retired loads pushed into the recycle queue because they split a cache line boundary; each split is counted only once (Precise Event; supports address when precise)
  recycleq.lock                  umask=0x10  all retired locked loads; stores are excluded to avoid double counting
  recycleq.sta_full              umask=0x20  retired store micro-ops pushed into the recycle queue because the store address buffer was full

Reservation station stalls, event=0xcb, period=200003:

  rs_full_stall.all  umask=0x1f  total core cycles the allocation pipeline is stalled because any one of the reservation stations is full
  rs_full_stall.mec  umask=1     cycles the allocation pipeline is stalled waiting for a free MEC reservation station entry

Retired micro-ops, event=0xc2, period=2000003:

  uops_retired.all  umask=0x10  micro-ops (uops) retired. The processor decodes complex macro instructions into a sequence of simpler uops; most instructions are composed of one or two uops, while some decode into longer sequences such as repeat instructions, floating point transcendental instructions, and assists.
  uops_retired.ms   umask=1     micro-ops retired from the complex flows issued by the micro-sequencer (MS), i.e. supplied from the MSROM.
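The retired/mispredicted pairing above (event=0xc4 versus event=0xc5 with matching umasks) is what makes misprediction ratios straightforward: count both in one event group and divide. A minimal sketch using the all-branches encodings; the toy workload and the open_raw helper are illustrative, not part of the event list:

#define _GNU_SOURCE
#include <linux/perf_event.h>
#include <sys/syscall.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <stdint.h>
#include <stdio.h>

/* Opens a raw event on the calling thread; the leader starts disabled
   so the whole group can be switched on atomically. */
static int open_raw(uint64_t config, int group_fd)
{
    struct perf_event_attr attr = {0};
    attr.type = PERF_TYPE_RAW;
    attr.size = sizeof(attr);
    attr.config = config;
    attr.disabled = (group_fd == -1);
    attr.exclude_kernel = 1;
    return (int)syscall(SYS_perf_event_open, &attr, 0, -1, group_fd, 0);
}

int main(void)
{
    int branches = open_raw(0xc4, -1);        /* br_inst_retired.all_branches */
    int mispred  = open_raw(0xc5, branches);  /* br_misp_retired.all_branches */
    if (branches < 0 || mispred < 0) { perror("perf_event_open"); return 1; }

    ioctl(branches, PERF_EVENT_IOC_ENABLE, PERF_IOC_FLAG_GROUP);
    volatile uint64_t sink = 0;               /* toy branchy workload */
    for (uint64_t i = 0; i < 10000000; i++)
        if (i % 3) sink += i;
    ioctl(branches, PERF_EVENT_IOC_DISABLE, PERF_IOC_FLAG_GROUP);

    uint64_t b = 0, m = 0;
    if (read(branches, &b, sizeof(b)) != sizeof(b)) return 1;
    if (read(mispred,  &m, sizeof(m)) != sizeof(m)) return 1;
    printf("branches=%llu mispredicted=%llu ratio=%.4f\n",
           (unsigned long long)b, (unsigned long long)m,
           b ? (double)m / (double)b : 0.0);
    return 0;
}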
Acquired For Transgress 4event=0x80,umask=0x1001unc_h_ag0_ad_crd_acquired.tgr5uncore cacheCMS Agent0 AD Credits Acquired For Transgress 5event=0x80,umask=0x2001unc_h_ag0_ad_crd_acquired.tgr6uncore cacheCMS Agent0 AD Credits Acquired For Transgress 6event=0x80,umask=0x4001unc_h_ag0_ad_crd_acquired.tgr7uncore cacheCMS Agent0 AD Credits Acquired For Transgress 7event=0x80,umask=0x8001unc_h_ag0_ad_crd_acquired_ext.any_of_tgr0_thru_tgr7uncore cacheCMS Agent0 AD Credits Acquired For Transgress 0-7event=0x81,umask=201unc_h_ag0_ad_crd_acquired_ext.tgr8uncore cacheCMS Agent0 AD Credits Acquired For Transgress 8event=0x81,umask=101unc_h_ag0_ad_crd_occupancy.tgr0uncore cacheCMS Agent0 AD Credits Occupancy For Transgress 0event=0x82,umask=101unc_h_ag0_ad_crd_occupancy.tgr1uncore cacheCMS Agent0 AD Credits Occupancy For Transgress 1event=0x82,umask=201unc_h_ag0_ad_crd_occupancy.tgr2uncore cacheCMS Agent0 AD Credits Occupancy For Transgress 2event=0x82,umask=401unc_h_ag0_ad_crd_occupancy.tgr3uncore cacheCMS Agent0 AD Credits Occupancy For Transgress 3event=0x82,umask=801unc_h_ag0_ad_crd_occupancy.tgr4uncore cacheCMS Agent0 AD Credits Occupancy For Transgress 4event=0x82,umask=0x1001unc_h_ag0_ad_crd_occupancy.tgr5uncore cacheCMS Agent0 AD Credits Occupancy For Transgress 5event=0x82,umask=0x2001unc_h_ag0_ad_crd_occupancy.tgr6uncore cacheCMS Agent0 AD Credits Occupancy For Transgress 6event=0x82,umask=0x4001unc_h_ag0_ad_crd_occupancy.tgr7uncore cacheCMS Agent0 AD Credits Occupancy For Transgress 7event=0x82,umask=0x8001unc_h_ag0_ad_crd_occupancy_ext.any_of_tgr0_thru_tgr7uncore cacheCMS Agent0 AD Credits Occupancy For Transgress 0-7event=0x83,umask=201unc_h_ag0_ad_crd_occupancy_ext.tgr8uncore cacheCMS Agent0 AD Credits Occupancy For Transgress 8event=0x83,umask=101unc_h_ag0_bl_crd_acquired.tgr0uncore cacheCMS Agent0 BL Credits Acquired For Transgress 0event=0x88,umask=101unc_h_ag0_bl_crd_acquired.tgr1uncore cacheCMS Agent0 BL Credits Acquired For Transgress 1event=0x88,umask=201unc_h_ag0_bl_crd_acquired.tgr2uncore cacheCMS Agent0 BL Credits Acquired For Transgress 2event=0x88,umask=401unc_h_ag0_bl_crd_acquired.tgr3uncore cacheCMS Agent0 BL Credits Acquired For Transgress 3event=0x88,umask=801unc_h_ag0_bl_crd_acquired.tgr4uncore cacheCMS Agent0 BL Credits Acquired For Transgress 4event=0x88,umask=0x1001unc_h_ag0_bl_crd_acquired.tgr5uncore cacheCMS Agent0 BL Credits Acquired For Transgress 5event=0x88,umask=0x2001unc_h_ag0_bl_crd_acquired.tgr6uncore cacheCMS Agent0 BL Credits Acquired For Transgress 6event=0x88,umask=0x4001unc_h_ag0_bl_crd_acquired.tgr7uncore cacheCMS Agent0 BL Credits Acquired For Transgress 7event=0x88,umask=0x8001unc_h_ag0_bl_crd_acquired_ext.any_of_tgr0_thru_tgr7uncore cacheCMS Agent0 BL Credits Acquired For Transgress 0-7event=0x89,umask=201unc_h_ag0_bl_crd_acquired_ext.tgr8uncore cacheCMS Agent0 BL Credits Acquired For Transgress 8event=0x89,umask=101unc_h_ag0_bl_crd_occupancy.tgr0uncore cacheCMS Agent0 BL Credits Occupancy For Transgress 0event=0x8a,umask=101unc_h_ag0_bl_crd_occupancy.tgr1uncore cacheCMS Agent0 BL Credits Occupancy For Transgress 1event=0x8a,umask=201unc_h_ag0_bl_crd_occupancy.tgr2uncore cacheCMS Agent0 BL Credits Occupancy For Transgress 2event=0x8a,umask=401unc_h_ag0_bl_crd_occupancy.tgr3uncore cacheCMS Agent0 BL Credits Occupancy For Transgress 3event=0x8a,umask=801unc_h_ag0_bl_crd_occupancy.tgr4uncore cacheCMS Agent0 BL Credits Occupancy For Transgress 4event=0x8a,umask=0x1001unc_h_ag0_bl_crd_occupancy.tgr5uncore cacheCMS Agent0 BL Credits Occupancy For Transgress 
5event=0x8a,umask=0x2001unc_h_ag0_bl_crd_occupancy.tgr6uncore cacheCMS Agent0 BL Credits Occupancy For Transgress 6event=0x8a,umask=0x4001unc_h_ag0_bl_crd_occupancy.tgr7uncore cacheCMS Agent0 BL Credits Occupancy For Transgress 7event=0x8a,umask=0x8001unc_h_ag0_bl_crd_occupancy_ext.any_of_tgr0_thru_tgr7uncore cacheCMS Agent0 BL Credits Occupancy For Transgress 0-7event=0x8b,umask=201unc_h_ag0_bl_crd_occupancy_ext.tgr8uncore cacheCMS Agent0 BL Credits Occupancy For Transgress 8event=0x8b,umask=101unc_h_ag0_stall_no_crd_egress_horz_ad.tgr0uncore cacheStall on No AD Transgress Credits For Transgress 0event=0xd0,umask=101unc_h_ag0_stall_no_crd_egress_horz_ad.tgr1uncore cacheStall on No AD Transgress Credits For Transgress 1event=0xd0,umask=201unc_h_ag0_stall_no_crd_egress_horz_ad.tgr2uncore cacheStall on No AD Transgress Credits For Transgress 2event=0xd0,umask=401unc_h_ag0_stall_no_crd_egress_horz_ad.tgr3uncore cacheStall on No AD Transgress Credits For Transgress 3event=0xd0,umask=801unc_h_ag0_stall_no_crd_egress_horz_ad.tgr4uncore cacheStall on No AD Transgress Credits For Transgress 4event=0xd0,umask=0x1001unc_h_ag0_stall_no_crd_egress_horz_ad.tgr5uncore cacheStall on No AD Transgress Credits For Transgress 5event=0xd0,umask=0x2001unc_h_ag0_stall_no_crd_egress_horz_ad.tgr6uncore cacheStall on No AD Transgress Credits For Transgress 6event=0xd0,umask=0x4001unc_h_ag0_stall_no_crd_egress_horz_ad.tgr7uncore cacheStall on No AD Transgress Credits For Transgress 7event=0xd0,umask=0x8001unc_h_ag0_stall_no_crd_egress_horz_ad_ext.any_of_tgr0_thru_tgr7uncore cacheStall on No AD Transgress Credits For Transgress 0-7event=0xd1,umask=201unc_h_ag0_stall_no_crd_egress_horz_ad_ext.tgr8uncore cacheStall on No AD Transgress Credits For Transgress 8event=0xd1,umask=101unc_h_ag0_stall_no_crd_egress_horz_bl.tgr0uncore cacheStall on No AD Transgress Credits For Transgress 0event=0xd4,umask=101unc_h_ag0_stall_no_crd_egress_horz_bl.tgr1uncore cacheStall on No AD Transgress Credits For Transgress 1event=0xd4,umask=201unc_h_ag0_stall_no_crd_egress_horz_bl.tgr2uncore cacheStall on No AD Transgress Credits For Transgress 2event=0xd4,umask=401unc_h_ag0_stall_no_crd_egress_horz_bl.tgr3uncore cacheStall on No AD Transgress Credits For Transgress 3event=0xd4,umask=801unc_h_ag0_stall_no_crd_egress_horz_bl.tgr4uncore cacheStall on No AD Transgress Credits For Transgress 4event=0xd4,umask=0x1001unc_h_ag0_stall_no_crd_egress_horz_bl.tgr5uncore cacheStall on No AD Transgress Credits For Transgress 5event=0xd4,umask=0x2001unc_h_ag0_stall_no_crd_egress_horz_bl.tgr6uncore cacheStall on No AD Transgress Credits For Transgress 6event=0xd4,umask=0x4001unc_h_ag0_stall_no_crd_egress_horz_bl.tgr7uncore cacheStall on No AD Transgress Credits For Transgress 7event=0xd4,umask=0x8001unc_h_ag0_stall_no_crd_egress_horz_bl_ext.any_of_tgr0_thru_tgr7uncore cacheStall on No AD Transgress Credits For Transgress 0-7event=0xd5,umask=201unc_h_ag0_stall_no_crd_egress_horz_bl_ext.tgr8uncore cacheStall on No AD Transgress Credits For Transgress 8event=0xd5,umask=101unc_h_ag1_ad_crd_acquired.tgr0uncore cacheCMS Agent1 AD Credits Acquired For Transgress 0event=0x84,umask=101unc_h_ag1_ad_crd_acquired.tgr1uncore cacheCMS Agent1 AD Credits Acquired For Transgress 1event=0x84,umask=201unc_h_ag1_ad_crd_acquired.tgr2uncore cacheCMS Agent1 AD Credits Acquired For Transgress 2event=0x84,umask=401unc_h_ag1_ad_crd_acquired.tgr3uncore cacheCMS Agent1 AD Credits Acquired For Transgress 3event=0x84,umask=801unc_h_ag1_ad_crd_acquired.tgr4uncore cacheCMS Agent1 AD 
Credits Acquired For Transgress 4event=0x84,umask=0x1001unc_h_ag1_ad_crd_acquired.tgr5uncore cacheCMS Agent1 AD Credits Acquired For Transgress 5event=0x84,umask=0x2001unc_h_ag1_ad_crd_acquired.tgr6uncore cacheCMS Agent1 AD Credits Acquired For Transgress 6event=0x84,umask=0x4001unc_h_ag1_ad_crd_acquired.tgr7uncore cacheCMS Agent1 AD Credits Acquired For Transgress 7event=0x84,umask=0x8001unc_h_ag1_ad_crd_acquired_ext.any_of_tgr0_thru_tgr7uncore cacheCMS Agent1 AD Credits Acquired For Transgress 0-7event=0x85,umask=201unc_h_ag1_ad_crd_acquired_ext.tgr8uncore cacheCMS Agent1 AD Credits Acquired For Transgress 8event=0x85,umask=101unc_h_ag1_ad_crd_occupancy.tgr0uncore cacheCMS Agent1 AD Credits Occupancy For Transgress 0event=0x86,umask=101unc_h_ag1_ad_crd_occupancy.tgr1uncore cacheCMS Agent1 AD Credits Occupancy For Transgress 1event=0x86,umask=201unc_h_ag1_ad_crd_occupancy.tgr2uncore cacheCMS Agent1 AD Credits Occupancy For Transgress 2event=0x86,umask=401unc_h_ag1_ad_crd_occupancy.tgr3uncore cacheCMS Agent1 AD Credits Occupancy For Transgress 3event=0x86,umask=801unc_h_ag1_ad_crd_occupancy.tgr4uncore cacheCMS Agent1 AD Credits Occupancy For Transgress 4event=0x86,umask=0x1001unc_h_ag1_ad_crd_occupancy.tgr5uncore cacheCMS Agent1 AD Credits Occupancy For Transgress 5event=0x86,umask=0x2001unc_h_ag1_ad_crd_occupancy.tgr6uncore cacheCMS Agent1 AD Credits Occupancy For Transgress 6event=0x86,umask=0x4001unc_h_ag1_ad_crd_occupancy.tgr7uncore cacheCMS Agent1 AD Credits Occupancy For Transgress 7event=0x86,umask=0x8001unc_h_ag1_ad_crd_occupancy_ext.any_of_tgr0_thru_tgr7uncore cacheCMS Agent1 AD Credits Occupancy For Transgress 0-7event=0x87,umask=201unc_h_ag1_ad_crd_occupancy_ext.tgr8uncore cacheCMS Agent1 AD Credits Occupancy For Transgress 8event=0x87,umask=101unc_h_ag1_bl_crd_acquired.tgr0uncore cacheCMS Agent1 BL Credits Acquired For Transgress 0event=0x8c,umask=101unc_h_ag1_bl_crd_acquired.tgr1uncore cacheCMS Agent1 BL Credits Acquired For Transgress 1event=0x8c,umask=201unc_h_ag1_bl_crd_acquired.tgr2uncore cacheCMS Agent1 BL Credits Acquired For Transgress 2event=0x8c,umask=401unc_h_ag1_bl_crd_acquired.tgr3uncore cacheCMS Agent1 BL Credits Acquired For Transgress 3event=0x8c,umask=801unc_h_ag1_bl_crd_acquired.tgr4uncore cacheCMS Agent1 BL Credits Acquired For Transgress 4event=0x8c,umask=0x1001unc_h_ag1_bl_crd_acquired.tgr5uncore cacheCMS Agent1 BL Credits Acquired For Transgress 5event=0x8c,umask=0x2001unc_h_ag1_bl_crd_acquired.tgr6uncore cacheCMS Agent1 BL Credits Acquired For Transgress 6event=0x8c,umask=0x4001unc_h_ag1_bl_crd_acquired.tgr7uncore cacheCMS Agent1 BL Credits Acquired For Transgress 7event=0x8c,umask=0x8001unc_h_ag1_bl_crd_acquired_ext.any_of_tgr0_thru_tgr7uncore cacheCMS Agent1 BL Credits Acquired For Transgress 0-7event=0x8d,umask=201unc_h_ag1_bl_crd_acquired_ext.tgr8uncore cacheCMS Agent1 BL Credits Acquired For Transgress 8event=0x8d,umask=101unc_h_ag1_bl_crd_occupancy.tgr0uncore cacheCMS Agent1 BL Credits Occupancy For Transgress 0event=0x8e,umask=101unc_h_ag1_bl_crd_occupancy.tgr1uncore cacheCMS Agent1 BL Credits Occupancy For Transgress 1event=0x8e,umask=201unc_h_ag1_bl_crd_occupancy.tgr2uncore cacheCMS Agent1 BL Credits Occupancy For Transgress 2event=0x8e,umask=401unc_h_ag1_bl_crd_occupancy.tgr3uncore cacheCMS Agent1 BL Credits Occupancy For Transgress 3event=0x8e,umask=801unc_h_ag1_bl_crd_occupancy.tgr4uncore cacheCMS Agent1 BL Credits Occupancy For Transgress 4event=0x8e,umask=0x1001unc_h_ag1_bl_crd_occupancy.tgr5uncore cacheCMS Agent1 BL Credits Occupancy For 
Transgress 5event=0x8e,umask=0x2001unc_h_ag1_bl_crd_occupancy.tgr6uncore cacheCMS Agent1 BL Credits Occupancy For Transgress 6event=0x8e,umask=0x4001unc_h_ag1_bl_crd_occupancy.tgr7uncore cacheCMS Agent1 BL Credits Occupancy For Transgress 7event=0x8e,umask=0x8001unc_h_ag1_bl_crd_occupancy_ext.any_of_tgr0_thru_tgr7uncore cacheCMS Agent1 BL Credits Occupancy For Transgress 0-7event=0x8f,umask=201unc_h_ag1_bl_crd_occupancy_ext.tgr8uncore cacheCMS Agent1 BL Credits Occupancy For Transgress 8event=0x8f,umask=101unc_h_ag1_stall_no_crd_egress_horz_ad.tgr0uncore cacheStall on No AD Transgress Credits For Transgress 0event=0xd2,umask=101unc_h_ag1_stall_no_crd_egress_horz_ad.tgr1uncore cacheStall on No AD Transgress Credits For Transgress 1event=0xd2,umask=201unc_h_ag1_stall_no_crd_egress_horz_ad.tgr2uncore cacheStall on No AD Transgress Credits For Transgress 2event=0xd2,umask=401unc_h_ag1_stall_no_crd_egress_horz_ad.tgr3uncore cacheStall on No AD Transgress Credits For Transgress 3event=0xd2,umask=801unc_h_ag1_stall_no_crd_egress_horz_ad.tgr4uncore cacheStall on No AD Transgress Credits For Transgress 4event=0xd2,umask=0x1001unc_h_ag1_stall_no_crd_egress_horz_ad.tgr5uncore cacheStall on No AD Transgress Credits For Transgress 5event=0xd2,umask=0x2001unc_h_ag1_stall_no_crd_egress_horz_ad.tgr6uncore cacheStall on No AD Transgress Credits For Transgress 6event=0xd2,umask=0x4001unc_h_ag1_stall_no_crd_egress_horz_ad.tgr7uncore cacheStall on No AD Transgress Credits For Transgress 7event=0xd2,umask=0x8001unc_h_ag1_stall_no_crd_egress_horz_ad_ext.any_of_tgr0_thru_tgr7uncore cacheStall on No AD Transgress Credits For Transgress 0-7event=0xd3,umask=201unc_h_ag1_stall_no_crd_egress_horz_ad_ext.tgr8uncore cacheStall on No AD Transgress Credits For Transgress 8event=0xd3,umask=101unc_h_ag1_stall_no_crd_egress_horz_bl.tgr0uncore cacheStall on No AD Transgress Credits For Transgress 0event=0xd6,umask=101unc_h_ag1_stall_no_crd_egress_horz_bl.tgr1uncore cacheStall on No AD Transgress Credits For Transgress 1event=0xd6,umask=201unc_h_ag1_stall_no_crd_egress_horz_bl.tgr2uncore cacheStall on No AD Transgress Credits For Transgress 2event=0xd6,umask=401unc_h_ag1_stall_no_crd_egress_horz_bl.tgr3uncore cacheStall on No AD Transgress Credits For Transgress 3event=0xd6,umask=801unc_h_ag1_stall_no_crd_egress_horz_bl.tgr4uncore cacheStall on No AD Transgress Credits For Transgress 4event=0xd6,umask=0x1001unc_h_ag1_stall_no_crd_egress_horz_bl.tgr5uncore cacheStall on No AD Transgress Credits For Transgress 5event=0xd6,umask=0x2001unc_h_ag1_stall_no_crd_egress_horz_bl.tgr6uncore cacheStall on No AD Transgress Credits For Transgress 6event=0xd6,umask=0x4001unc_h_ag1_stall_no_crd_egress_horz_bl.tgr7uncore cacheStall on No AD Transgress Credits For Transgress 7event=0xd6,umask=0x8001unc_h_ag1_stall_no_crd_egress_horz_bl_ext.any_of_tgr0_thru_tgr7uncore cacheStall on No AD Transgress Credits For Transgress 0-7event=0xd7,umask=201unc_h_ag1_stall_no_crd_egress_horz_bl_ext.tgr8uncore cacheStall on No AD Transgress Credits For Transgress 8event=0xd7,umask=101unc_h_cache_lines_victimized.e_stateuncore cacheCache Lookups. Counts the number of times the LLC was accessed. Writeback transactions from L2 to the LLC  This includes all write transactions -- both Cacheable and UCevent=0x37,umask=201unc_h_cache_lines_victimized.f_stateuncore cacheCache Lookups. Counts the number of times the LLC was accessed. Filters for any transaction originating from the IPQ or IRQ.  
This does not include lookups originating from the ISMQevent=0x37,umask=801unc_h_cache_lines_victimized.localuncore cacheLines Victimized that Match NIDevent=0x37,umask=0x2001unc_h_cache_lines_victimized.m_stateuncore cacheCache Lookups. Counts the number of times the LLC was accessed. Read transactionsevent=0x37,umask=101unc_h_cache_lines_victimized.remoteuncore cacheLines Victimized that Does Not Match NIDevent=0x37,umask=0x8001unc_h_cache_lines_victimized.s_stateuncore cacheCache Lookups. Counts the number of times the LLC was accessed. Filters for only snoop requests coming from the remote socket(s) through the IPQevent=0x37,umask=401unc_h_clockuncore cacheUncore Clocksevent=0xc001unc_h_egress_horz_ads_used.aduncore cacheCMS Horizontal ADS Usedevent=0x9d,umask=101unc_h_egress_horz_ads_used.akuncore cacheCMS Horizontal ADS Usedevent=0x9d,umask=201unc_h_egress_horz_ads_used.bluncore cacheCMS Horizontal ADS Usedevent=0x9d,umask=401unc_h_egress_horz_bypass.aduncore cacheCMS Horizontal Egress Bypass. AD ringevent=0x9f,umask=101unc_h_egress_horz_bypass.akuncore cacheCMS Horizontal Egress Bypass. AK ringevent=0x9f,umask=201unc_h_egress_horz_bypass.bluncore cacheCMS Horizontal Egress Bypass. BL ringevent=0x9f,umask=401unc_h_egress_horz_bypass.ivuncore cacheCMS Horizontal Egress Bypass. IV ringevent=0x9f,umask=801unc_h_egress_horz_cycles_full.aduncore cacheCycles CMS Horizontal Egress Queue is Full ADevent=0x96,umask=101unc_h_egress_horz_cycles_full.akuncore cacheCycles CMS Horizontal Egress Queue is Full AKevent=0x96,umask=201unc_h_egress_horz_cycles_full.bluncore cacheCycles CMS Horizontal Egress Queue is Full BLevent=0x96,umask=401unc_h_egress_horz_cycles_full.ivuncore cacheCycles CMS Horizontal Egress Queue is Full IVevent=0x96,umask=801unc_h_egress_horz_cycles_ne.aduncore cacheCycles CMS Horizontal Egress Queue is Not Empty ADevent=0x97,umask=101unc_h_egress_horz_cycles_ne.akuncore cacheCycles CMS Horizontal Egress Queue is Not Empty AKevent=0x97,umask=201unc_h_egress_horz_cycles_ne.bluncore cacheCycles CMS Horizontal Egress Queue is Not Empty BLevent=0x97,umask=401unc_h_egress_horz_cycles_ne.ivuncore cacheCycles CMS Horizontal Egress Queue is Not Empty IVevent=0x97,umask=801unc_h_egress_horz_inserts.aduncore cacheCMS Horizontal Egress Inserts ADevent=0x95,umask=101unc_h_egress_horz_inserts.akuncore cacheCMS Horizontal Egress Inserts AKevent=0x95,umask=201unc_h_egress_horz_inserts.bluncore cacheCMS Horizontal Egress Inserts BLevent=0x95,umask=401unc_h_egress_horz_inserts.ivuncore cacheCMS Horizontal Egress Inserts IVevent=0x95,umask=801unc_h_egress_horz_nack.aduncore cacheCMS Horizontal Egress NACKsevent=0x99,umask=101unc_h_egress_horz_nack.akuncore cacheCMS Horizontal Egress NACKsevent=0x99,umask=201unc_h_egress_horz_nack.bluncore cacheCMS Horizontal Egress NACKsevent=0x99,umask=401unc_h_egress_horz_nack.ivuncore cacheCMS Horizontal Egress NACKsevent=0x99,umask=801unc_h_egress_horz_occupancy.aduncore cacheCMS Horizontal Egress Occupancy ADevent=0x94,umask=101unc_h_egress_horz_occupancy.akuncore cacheCMS Horizontal Egress Occupancy AKevent=0x94,umask=201unc_h_egress_horz_occupancy.bluncore cacheCMS Horizontal Egress Occupancy BLevent=0x94,umask=401unc_h_egress_horz_occupancy.ivuncore cacheCMS Horizontal Egress Occupancy IVevent=0x94,umask=801unc_h_egress_horz_starved.aduncore cacheCMS Horizontal Egress Injection Starvationevent=0x9b,umask=101unc_h_egress_horz_starved.akuncore cacheCMS Horizontal Egress Injection Starvationevent=0x9b,umask=201unc_h_egress_horz_starved.bluncore cacheCMS 
Horizontal Egress Injection Starvationevent=0x9b,umask=401unc_h_egress_horz_starved.ivuncore cacheCMS Horizontal Egress Injection Starvationevent=0x9b,umask=801unc_h_egress_ordering.iv_snp_go_dnuncore cacheCounts number of cycles IV was blocked in the TGR Egress due to SNP/GO Ordering requirementsevent=0xae,umask=401unc_h_egress_ordering.iv_snp_go_upuncore cacheCounts number of cycles IV was blocked in the TGR Egress due to SNP/GO Ordering requirementsevent=0xae,umask=101unc_h_egress_vert_ads_used.ad_ag0uncore cacheCMS Vertical ADS Usedevent=0x9c,umask=101unc_h_egress_vert_ads_used.ad_ag1uncore cacheCMS Vertical ADS Usedevent=0x9c,umask=0x1001unc_h_egress_vert_ads_used.ak_ag0uncore cacheCMS Vertical ADS Usedevent=0x9c,umask=201unc_h_egress_vert_ads_used.ak_ag1uncore cacheCMS Vertical ADS Usedevent=0x9c,umask=0x2001unc_h_egress_vert_ads_used.bl_ag0uncore cacheCMS Vertical ADS Usedevent=0x9c,umask=401unc_h_egress_vert_ads_used.bl_ag1uncore cacheCMS Vertical ADS Usedevent=0x9c,umask=0x4001unc_h_egress_vert_bypass.ad_ag0uncore cacheCMS Vertical Egress Bypass. AD ring agent 0event=0x9e,umask=101unc_h_egress_vert_bypass.ad_ag1uncore cacheCMS Vertical Egress Bypass. AD ring agent 1event=0x9e,umask=0x1001unc_h_egress_vert_bypass.ak_ag0uncore cacheCMS Vertical Egress Bypass. AK ring agent 0event=0x9e,umask=201unc_h_egress_vert_bypass.ak_ag1uncore cacheCMS Vertical Egress Bypass. AK ring agent 1event=0x9e,umask=0x2001unc_h_egress_vert_bypass.bl_ag0uncore cacheCMS Vertical Egress Bypass. BL ring agent 0event=0x9e,umask=401unc_h_egress_vert_bypass.bl_ag1uncore cacheCMS Vertical Egress Bypass. BL ring agent 1event=0x9e,umask=0x4001unc_h_egress_vert_bypass.ivuncore cacheCMS Vertical Egress Bypass. IV ring agent 0event=0x9e,umask=801unc_h_egress_vert_cycles_full.ad_ag0uncore cacheCycles CMS Vertical Egress Queue Is Full AD - Agent 0event=0x92,umask=101unc_h_egress_vert_cycles_full.ad_ag1uncore cacheCycles CMS Vertical Egress Queue Is Full AD - Agent 1event=0x92,umask=0x1001unc_h_egress_vert_cycles_full.ak_ag0uncore cacheCycles CMS Vertical Egress Queue Is Full AK - Agent 0event=0x92,umask=201unc_h_egress_vert_cycles_full.ak_ag1uncore cacheCycles CMS Vertical Egress Queue Is Full AK - Agent 1event=0x92,umask=0x2001unc_h_egress_vert_cycles_full.bl_ag0uncore cacheCycles CMS Vertical Egress Queue Is Full BL - Agent 0event=0x92,umask=401unc_h_egress_vert_cycles_full.bl_ag1uncore cacheCycles CMS Vertical Egress Queue Is Full BL - Agent 1event=0x92,umask=0x4001unc_h_egress_vert_cycles_full.iv_ag0uncore cacheCycles CMS Vertical Egress Queue Is Full IV - Agent 0event=0x92,umask=801unc_h_egress_vert_cycles_ne.ad_ag0uncore cacheCycles CMS Vertical Egress Queue Is Not Empty AD - Agent 0event=0x93,umask=101unc_h_egress_vert_cycles_ne.ad_ag1uncore cacheCycles CMS Vertical Egress Queue Is Not Empty AD - Agent 1event=0x93,umask=0x1001unc_h_egress_vert_cycles_ne.ak_ag0uncore cacheCycles CMS Vertical Egress Queue Is Not Empty AK - Agent 0event=0x93,umask=201unc_h_egress_vert_cycles_ne.ak_ag1uncore cacheCycles CMS Vertical Egress Queue Is Not Empty AK - Agent 1event=0x93,umask=0x2001unc_h_egress_vert_cycles_ne.bl_ag0uncore cacheCycles CMS Vertical Egress Queue Is Not Empty BL - Agent 0event=0x93,umask=401unc_h_egress_vert_cycles_ne.bl_ag1uncore cacheCycles CMS Vertical Egress Queue Is Not Empty BL - Agent 1event=0x93,umask=0x4001unc_h_egress_vert_cycles_ne.iv_ag0uncore cacheCycles CMS Vertical Egress Queue Is Not Empty IV - Agent 0event=0x93,umask=801unc_h_egress_vert_inserts.ad_ag0uncore cacheCMS Vert Egress Allocations 
AD - Agent 0event=0x91,umask=101unc_h_egress_vert_inserts.ad_ag1uncore cacheCMS Vert Egress Allocations AD - Agent 1event=0x91,umask=0x1001unc_h_egress_vert_inserts.ak_ag0uncore cacheCMS Vert Egress Allocations AK - Agent 0event=0x91,umask=201unc_h_egress_vert_inserts.ak_ag1uncore cacheCMS Vert Egress Allocations AK - Agent 1event=0x91,umask=0x2001unc_h_egress_vert_inserts.bl_ag0uncore cacheCMS Vert Egress Allocations BL - Agent 0event=0x91,umask=401unc_h_egress_vert_inserts.bl_ag1uncore cacheCMS Vert Egress Allocations BL - Agent 1event=0x91,umask=0x4001unc_h_egress_vert_inserts.iv_ag0uncore cacheCMS Vert Egress Allocations IV - Agent 0event=0x91,umask=801unc_h_egress_vert_nack.ad_ag0uncore cacheCMS Vertical Egress NACKsevent=0x98,umask=101unc_h_egress_vert_nack.ad_ag1uncore cacheCMS Vertical Egress NACKsevent=0x98,umask=0x1001unc_h_egress_vert_nack.ak_ag0uncore cacheCMS Vertical Egress NACKs Onto AK Ringevent=0x98,umask=201unc_h_egress_vert_nack.ak_ag1uncore cacheCMS Vertical Egress NACKsevent=0x98,umask=0x2001unc_h_egress_vert_nack.bl_ag0uncore cacheCMS Vertical Egress NACKs Onto BL Ringevent=0x98,umask=401unc_h_egress_vert_nack.bl_ag1uncore cacheCMS Vertical Egress NACKsevent=0x98,umask=0x4001unc_h_egress_vert_nack.iv_ag0uncore cacheCMS Vertical Egress NACKsevent=0x98,umask=801unc_h_egress_vert_occupancy.ad_ag0uncore cacheCMS Vert Egress Occupancy AD - Agent 0event=0x90,umask=101unc_h_egress_vert_occupancy.ad_ag1uncore cacheCMS Vert Egress Occupancy AD - Agent 1event=0x90,umask=0x1001unc_h_egress_vert_occupancy.ak_ag0uncore cacheCMS Vert Egress Occupancy AK - Agent 0event=0x90,umask=201unc_h_egress_vert_occupancy.ak_ag1uncore cacheCMS Vert Egress Occupancy AK - Agent 1event=0x90,umask=0x2001unc_h_egress_vert_occupancy.bl_ag0uncore cacheCMS Vert Egress Occupancy BL - Agent 0event=0x90,umask=401unc_h_egress_vert_occupancy.bl_ag1uncore cacheCMS Vert Egress Occupancy BL - Agent 1event=0x90,umask=0x4001unc_h_egress_vert_occupancy.iv_ag0uncore cacheCMS Vert Egress Occupancy IV - Agent 0event=0x90,umask=801unc_h_egress_vert_starved.ad_ag0uncore cacheCMS Vertical Egress Injection Starvationevent=0x9a,umask=101unc_h_egress_vert_starved.ad_ag1uncore cacheCMS Vertical Egress Injection Starvationevent=0x9a,umask=0x1001unc_h_egress_vert_starved.ak_ag0uncore cacheCMS Vertical Egress Injection Starvation Onto AK Ringevent=0x9a,umask=201unc_h_egress_vert_starved.ak_ag1uncore cacheCMS Vertical Egress Injection Starvationevent=0x9a,umask=0x2001unc_h_egress_vert_starved.bl_ag0uncore cacheCMS Vertical Egress Injection Starvation Onto BL Ringevent=0x9a,umask=401unc_h_egress_vert_starved.bl_ag1uncore cacheCMS Vertical Egress Injection Starvationevent=0x9a,umask=0x4001unc_h_egress_vert_starved.iv_ag0uncore cacheCMS Vertical Egress Injection Starvationevent=0x9a,umask=801unc_h_fast_asserted.horzuncore cacheCounts cycles source throttling is asserted - horizontalevent=0xa5,umask=101unc_h_fast_asserted.vertuncore cacheCounts cycles source throttling is asserted - verticalevent=0xa501unc_h_horz_ring_ad_in_use.left_evenuncore cacheCounts the number of cycles that the Horizontal AD ring is being used at this ring stop - Left and Evenevent=0xa7,umask=101unc_h_horz_ring_ad_in_use.left_odduncore cacheCounts the number of cycles that the Horizontal AD ring is being used at this ring stop - Left and Oddevent=0xa7,umask=201unc_h_horz_ring_ad_in_use.right_evenuncore cacheCounts the number of cycles that the Horizontal AD ring is being used at this ring stop - Right and 
Evenevent=0xa7,umask=401unc_h_horz_ring_ad_in_use.right_odduncore cacheCounts the number of cycles that the Horizontal AD ring is being used at this ring stop - Right and Oddevent=0xa7,umask=801unc_h_horz_ring_ak_in_use.left_evenuncore cacheCounts the number of cycles that the Horizontal AK ring is being used at this ring stop - Left and Evenevent=0xa9,umask=101unc_h_horz_ring_ak_in_use.left_odduncore cacheCounts the number of cycles that the Horizontal AK ring is being used at this ring stop - Left and Oddevent=0xa9,umask=201unc_h_horz_ring_ak_in_use.right_evenuncore cacheCounts the number of cycles that the Horizontal AK ring is being used at this ring stop - Right and Evenevent=0xa9,umask=401unc_h_horz_ring_ak_in_use.right_odduncore cacheCounts the number of cycles that the Horizontal AK ring is being used at this ring stop - Right and Oddevent=0xa9,umask=801unc_h_horz_ring_bl_in_use.left_evenuncore cacheCounts the number of cycles that the Horizontal BL ring is being used at this ring stop - Left and Evenevent=0xab,umask=101unc_h_horz_ring_bl_in_use.left_odduncore cacheCounts the number of cycles that the Horizontal BL ring is being used at this ring stop - Left and Oddevent=0xab,umask=201unc_h_horz_ring_bl_in_use.right_evenuncore cacheCounts the number of cycles that the Horizontal BL ring is being used at this ring stop - Right and Evenevent=0xab,umask=401unc_h_horz_ring_bl_in_use.right_odduncore cacheCounts the number of cycles that the Horizontal BL ring is being used at this ring stop - Right and Oddevent=0xab,umask=801unc_h_horz_ring_iv_in_use.leftuncore cacheCounts the number of cycles that the Horizontal IV ring is being used at this ring stop - Leftevent=0xad,umask=101unc_h_horz_ring_iv_in_use.rightuncore cacheCounts the number of cycles that the Horizontal IV ring is being used at this ring stop - Rightevent=0xad,umask=401unc_h_ingress_inserts.ipquncore cacheIngress Allocations. Counts number of allocations per cycle into the specified Ingress queue. - IPQevent=0x13,umask=401unc_h_ingress_inserts.irquncore cacheIngress Allocations. Counts number of allocations per cycle into the specified Ingress queue. - IRQevent=0x13,umask=101unc_h_ingress_inserts.irq_rejuncore cacheIngress Allocations. Counts number of allocations per cycle into the specified Ingress queue. - IRQ Rejectedevent=0x13,umask=201unc_h_ingress_inserts.prquncore cacheIngress Allocations. Counts number of allocations per cycle into the specified Ingress queue. - PRQevent=0x13,umask=0x1001unc_h_ingress_inserts.prq_rejuncore cacheIngress Allocations. Counts number of allocations per cycle into the specified Ingress queue. - PRQ Rejectedevent=0x13,umask=0x2001unc_h_ingress_int_starved.ipquncore cacheCycles with the IPQ in Internal Starvationevent=0x14,umask=401unc_h_ingress_int_starved.irquncore cacheCycles with the IRQ in Internal Starvationevent=0x14,umask=101unc_h_ingress_int_starved.ismquncore cacheCycles with the ISMQ in Internal Starvationevent=0x14,umask=801unc_h_ingress_int_starved.prquncore cacheIngress internal starvation cycles. Counts cycles in internal starvation. This occurs when one or more of the entries in the ingress queue are being starved out by other entries in the queueevent=0x14,umask=0x1001unc_h_ingress_occupancy.ipquncore cacheIngress Occupancy. Counts number of entries in the specified Ingress queue in each cycle. - IPQevent=0x11,umask=401unc_h_ingress_occupancy.irquncore cacheIngress Occupancy. Counts number of entries in the specified Ingress queue in each cycle. 
- IRQevent=0x11,umask=101unc_h_ingress_occupancy.irq_rejuncore cacheIngress Occupancy. Counts number of entries in the specified Ingress queue in each cycle. - IRQ Rejectedevent=0x11,umask=201unc_h_ingress_occupancy.prquncore cacheIngress Occupancy. Counts number of entries in the specified Ingress queue in each cycle. - PRQevent=0x11,umask=0x1001unc_h_ingress_occupancy.prq_rejuncore cacheIngress Occupancy. Counts number of entries in the specified Ingress queue in each cycle. - PRQ Rejectedevent=0x11,umask=0x2001unc_h_ingress_retry_ipq0_reject.ad_req_vn0uncore cacheIngress Probe Queue Rejectsevent=0x22,umask=101unc_h_ingress_retry_ipq0_reject.ad_rsp_vn0uncore cacheIngress Probe Queue Rejectsevent=0x22,umask=201unc_h_ingress_retry_ipq0_reject.ak_non_upiuncore cacheIngress Probe Queue Rejectsevent=0x22,umask=0x4001unc_h_ingress_retry_ipq0_reject.bl_ncb_vn0uncore cacheIngress Probe Queue Rejectsevent=0x22,umask=0x1001unc_h_ingress_retry_ipq0_reject.bl_ncs_vn0uncore cacheIngress Probe Queue Rejectsevent=0x22,umask=0x2001unc_h_ingress_retry_ipq0_reject.bl_rsp_vn0uncore cacheIngress Probe Queue Rejectsevent=0x22,umask=401unc_h_ingress_retry_ipq0_reject.bl_wb_vn0uncore cacheIngress Probe Queue Rejectsevent=0x22,umask=801unc_h_ingress_retry_ipq0_reject.iv_non_upiuncore cacheIngress Probe Queue Rejectsevent=0x22,umask=0x8001unc_h_ingress_retry_ipq1_reject.allow_snpuncore cacheIngress Probe Queue Rejectsevent=0x23,umask=0x4001unc_h_ingress_retry_ipq1_reject.any_reject_ipq0uncore cacheIngress Probe Queue Rejectsevent=0x23,umask=101unc_h_ingress_retry_ipq1_reject.pa_matchuncore cacheIngress Probe Queue Rejectsevent=0x23,umask=0x8001unc_h_ingress_retry_ipq1_reject.sf_victimuncore cacheIngress Probe Queue Rejectsevent=0x23,umask=801unc_h_ingress_retry_ipq1_reject.sf_wayuncore cacheIngress Probe Queue Rejectsevent=0x23,umask=0x2001unc_h_ingress_retry_irq0_reject.ad_req_vn0uncore cacheIngress Request Queue Rejectsevent=0x18,umask=101unc_h_ingress_retry_irq0_reject.ad_rsp_vn0uncore cacheIngress Request Queue Rejectsevent=0x18,umask=201unc_h_ingress_retry_irq0_reject.ak_non_upiuncore cacheIngress Request Queue Rejectsevent=0x18,umask=0x4001unc_h_ingress_retry_irq0_reject.bl_ncb_vn0uncore cacheIngress Request Queue Rejectsevent=0x18,umask=0x1001unc_h_ingress_retry_irq0_reject.bl_ncs_vn0uncore cacheIngress Request Queue Rejectsevent=0x18,umask=0x2001unc_h_ingress_retry_irq0_reject.bl_rsp_vn0uncore cacheIngress Request Queue Rejectsevent=0x18,umask=401unc_h_ingress_retry_irq0_reject.bl_wb_vn0uncore cacheIngress Request Queue Rejectsevent=0x18,umask=801unc_h_ingress_retry_irq0_reject.iv_non_upiuncore cacheIngress Request Queue Rejectsevent=0x18,umask=0x8001unc_h_ingress_retry_irq1_reject.allow_snpuncore cacheIngress Request Queue Rejectsevent=0x19,umask=0x4001unc_h_ingress_retry_irq1_reject.any_reject_irq0uncore cacheIngress Request Queue Rejectsevent=0x19,umask=101unc_h_ingress_retry_irq1_reject.pa_matchuncore cacheIngress Request Queue Rejectsevent=0x19,umask=0x8001unc_h_ingress_retry_irq1_reject.sf_victimuncore cacheIngress Request Queue Rejectsevent=0x19,umask=801unc_h_ingress_retry_irq1_reject.sf_wayuncore cacheIngress Request Queue Rejectsevent=0x19,umask=0x2001unc_h_ingress_retry_ismq0_reject.ad_req_vn0uncore cacheISMQ Rejectsevent=0x24,umask=101unc_h_ingress_retry_ismq0_reject.ad_rsp_vn0uncore cacheISMQ Rejectsevent=0x24,umask=201unc_h_ingress_retry_ismq0_reject.ak_non_upiuncore cacheISMQ Rejectsevent=0x24,umask=0x4001unc_h_ingress_retry_ismq0_reject.bl_ncb_vn0uncore cacheISMQ 
Rejectsevent=0x24,umask=0x1001unc_h_ingress_retry_ismq0_reject.bl_ncs_vn0uncore cacheISMQ Rejectsevent=0x24,umask=0x2001unc_h_ingress_retry_ismq0_reject.bl_rsp_vn0uncore cacheISMQ Rejectsevent=0x24,umask=401unc_h_ingress_retry_ismq0_reject.bl_wb_vn0uncore cacheISMQ Rejectsevent=0x24,umask=801unc_h_ingress_retry_ismq0_reject.iv_non_upiuncore cacheISMQ Rejectsevent=0x24,umask=0x8001unc_h_ingress_retry_ismq0_retry.ad_req_vn0uncore cacheISMQ Retriesevent=0x2c,umask=101unc_h_ingress_retry_ismq0_retry.ad_rsp_vn0uncore cacheISMQ Retriesevent=0x2c,umask=201unc_h_ingress_retry_ismq0_retry.ak_non_upiuncore cacheISMQ Retriesevent=0x2c,umask=0x4001unc_h_ingress_retry_ismq0_retry.bl_ncb_vn0uncore cacheISMQ Retriesevent=0x2c,umask=0x1001unc_h_ingress_retry_ismq0_retry.bl_ncs_vn0uncore cacheISMQ Retriesevent=0x2c,umask=0x2001unc_h_ingress_retry_ismq0_retry.bl_rsp_vn0uncore cacheISMQ Retriesevent=0x2c,umask=401unc_h_ingress_retry_ismq0_retry.bl_wb_vn0uncore cacheISMQ Retriesevent=0x2c,umask=801unc_h_ingress_retry_ismq0_retry.iv_non_upiuncore cacheISMQ Retriesevent=0x2c,umask=0x8001unc_h_ingress_retry_other0_retry.ad_req_vn0uncore cacheOther Queue Retriesevent=0x2e,umask=101unc_h_ingress_retry_other0_retry.ad_rsp_vn0uncore cacheOther Queue Retriesevent=0x2e,umask=201unc_h_ingress_retry_other0_retry.ak_non_upiuncore cacheOther Queue Retriesevent=0x2e,umask=0x4001unc_h_ingress_retry_other0_retry.bl_ncb_vn0uncore cacheOther Queue Retriesevent=0x2e,umask=0x1001unc_h_ingress_retry_other0_retry.bl_ncs_vn0uncore cacheOther Queue Retriesevent=0x2e,umask=0x2001unc_h_ingress_retry_other0_retry.bl_rsp_vn0uncore cacheOther Queue Retriesevent=0x2e,umask=401unc_h_ingress_retry_other0_retry.bl_wb_vn0uncore cacheOther Queue Retriesevent=0x2e,umask=801unc_h_ingress_retry_other0_retry.iv_non_upiuncore cacheOther Queue Retriesevent=0x2e,umask=0x8001unc_h_ingress_retry_other1_retry.allow_snpuncore cacheOther Queue Retriesevent=0x2f,umask=0x4001unc_h_ingress_retry_other1_retry.any_reject_irq0uncore cacheOther Queue Retriesevent=0x2f,umask=101unc_h_ingress_retry_other1_retry.pa_matchuncore cacheOther Queue Retriesevent=0x2f,umask=0x8001unc_h_ingress_retry_other1_retry.sf_victimuncore cacheOther Queue Retriesevent=0x2f,umask=801unc_h_ingress_retry_other1_retry.sf_wayuncore cacheOther Queue Retriesevent=0x2f,umask=0x2001unc_h_ingress_retry_prq0_reject.ad_req_vn0uncore cacheIngress Request Queue Rejectsevent=0x20,umask=101unc_h_ingress_retry_prq0_reject.ad_rsp_vn0uncore cacheIngress Request Queue Rejectsevent=0x20,umask=201unc_h_ingress_retry_prq0_reject.ak_non_upiuncore cacheIngress Request Queue Rejectsevent=0x20,umask=0x4001unc_h_ingress_retry_prq0_reject.bl_ncb_vn0uncore cacheIngress Request Queue Rejectsevent=0x20,umask=0x1001unc_h_ingress_retry_prq0_reject.bl_ncs_vn0uncore cacheIngress Request Queue Rejectsevent=0x20,umask=0x2001unc_h_ingress_retry_prq0_reject.bl_rsp_vn0uncore cacheIngress Request Queue Rejectsevent=0x20,umask=401unc_h_ingress_retry_prq0_reject.bl_wb_vn0uncore cacheIngress Request Queue Rejectsevent=0x20,umask=801unc_h_ingress_retry_prq0_reject.iv_non_upiuncore cacheIngress Request Queue Rejectsevent=0x20,umask=0x8001unc_h_ingress_retry_prq1_reject.allow_snpuncore cacheIngress Request Queue Rejectsevent=0x21,umask=0x4001unc_h_ingress_retry_prq1_reject.any_reject_irq0uncore cacheIngress Request Queue Rejectsevent=0x21,umask=101unc_h_ingress_retry_prq1_reject.pa_matchuncore cacheIngress Request Queue Rejectsevent=0x21,umask=0x8001unc_h_ingress_retry_prq1_reject.sf_victimuncore cacheIngress Request Queue 
unc_h_ingress_retry_req_q0_retry.* [uncore cache] REQUESTQ includes: IRQ, PRQ, IPQ, RRQ, WBQ (everything except for ISMQ). event=0x2a; umask: ad_req_vn0=0x01, ad_rsp_vn0=0x02, bl_rsp_vn0=0x04, bl_wb_vn0=0x08, bl_ncb_vn0=0x10, bl_ncs_vn0=0x20, ak_non_upi=0x40, iv_non_upi=0x80
unc_h_ingress_retry_req_q1_retry.* [uncore cache] REQUESTQ includes: IRQ, PRQ, IPQ, RRQ, WBQ (everything except for ISMQ). event=0x2b; umask: any_reject_irq0=0x01, sf_victim=0x08, sf_way=0x20, allow_snp=0x40, pa_match=0x80
unc_h_misc.* [uncore cache] Miscellaneous events in the Cbo. event=0x39; umask: rspi_was_fse=0x01 (Silent Snoop Eviction), wc_aliasing=0x02 (Write Combining Aliasing), rfo_hit_s=0x08 (RFO HitS), cv0_pref_vic=0x10 (CV0 Prefetch Victim), cv0_pref_miss=0x20 (CV0 Prefetch Miss)
unc_h_ring_bounces_horz.* [uncore cache] Number of incoming messages from the Horizontal ring that were bounced, by ring type. event=0xa1; umask: ad=0x01, ak=0x02 (Acknowledgements to core), bl=0x04 (Data Responses to core), iv=0x08 (Snoops of processor's cache)
unc_h_ring_bounces_vert.* [uncore cache] Number of incoming messages from the Vertical ring that were bounced, by ring type. event=0xa0; umask: ad=0x01, ak=0x02 (Acknowledgements to core), bl=0x04 (Data Responses to core), iv=0x08 (Snoops of processor's cache)
unc_h_ring_sink_starved_horz.* [uncore cache] Horizontal ring sink starvation count. event=0xa3; umask: ad=0x01 (AD ring), ak=0x02 (AK ring), bl=0x04 (BL ring), iv=0x08 (IV ring)
unc_h_ring_sink_starved_vert.* [uncore cache] Vertical ring sink starvation count. event=0xa2; umask: ad=0x01 (AD ring), ak=0x02 (AK ring), bl=0x04 (BL ring), iv=0x08 (IV ring)
unc_h_ring_src_thrtl [uncore cache] Counts cycles in throttle mode. event=0xa4
unc_h_sf_lookup.* [uncore cache] Cache Lookups. Counts the number of times the LLC was accessed. event=0x34; umask: data_read=0x03 (Read transactions), write=0x05 (Writeback transactions from L2 to the LLC; includes all write transactions, both Cacheable and UC), remote_snoop=0x09 (only snoop requests coming from the remote socket(s) through the IPQ), any=0x11 (any transaction originating from the IPQ or IRQ; does not include lookups originating from the ISMQ)
unc_h_tg_ingress_busy_starved.* [uncore cache] Transgress Injection Starvation. Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time; in this case, because a message from the other queue has higher priority. event=0xb4; umask: ad_bnc=0x01, bl_bnc=0x04, ad_crd=0x10, bl_crd=0x40
unc_h_tg_ingress_bypass.* [uncore cache] Transgress Ingress Bypass. Number of packets bypassing the CMS Ingress. event=0xb2; umask: ad_bnc=0x01, ak_bnc=0x02, bl_bnc=0x04, iv_bnc=0x08, ad_crd=0x10, bl_crd=0x40
unc_h_tg_ingress_crd_starved.* [uncore cache] Transgress Injection Starvation. Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time; in this case, the Ingress is unable to forward to the Egress due to a lack of credit. event=0xb3; umask: ad_bnc=0x01, ak_bnc=0x02, bl_bnc=0x04, iv_bnc=0x08, ad_crd=0x10, bl_crd=0x40, ifv=0x80
unc_h_tg_ingress_inserts.* [uncore cache] Transgress Ingress Allocations. Number of allocations into the CMS Ingress. The Ingress is used to queue up requests received from the mesh. event=0xb1; umask: ad_bnc=0x01, ak_bnc=0x02, bl_bnc=0x04, iv_bnc=0x08, ad_crd=0x10, bl_crd=0x40
unc_h_tg_ingress_occupancy.* [uncore cache] Transgress Ingress Occupancy. Occupancy event for the Ingress buffers in the CMS. The Ingress is used to queue up requests received from the mesh. event=0xb0; umask: ad_bnc=0x01, ak_bnc=0x02, bl_bnc=0x04, iv_bnc=0x08, ad_crd=0x10, bl_crd=0x40
unc_h_tor_inserts.* [uncore cache] Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. event=0x35; umask: hit=0x1f (Hit, not a Miss), miss=0x2f, irq=0x31 (IRQ), evict=0x32 (SF/LLC Evictions), prq=0x34 (PRQ), ipq=0x38 (IPQ)
unc_h_tor_occupancy.* [uncore cache] For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. event=0x36; umask: irq_hit=0x11 (IRQ or PRQ hit), prq_hit=0x14, ipq_hit=0x18, hit=0x1f (Hit, not a Miss), irq_miss=0x21 (IRQ or PRQ miss), prq_miss=0x24, ipq_miss=0x28, miss=0x2f, irq=0x31 (IRQ or PRQ), evict=0x32 (SF/LLC Evictions), prq=0x34 (PRQ), ipq=0x38 (IPQ)
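Since unc_h_tor_occupancy sums valid entries each cycle and unc_h_tor_inserts counts allocations, their ratio gives average residency per entry (Little's Law). A minimal sketch with placeholder counter values:

    # Sketch: average TOR residency per miss, in CHA (uncore) clocks.
    # Counter values are placeholders you would read via perf stat.
    tor_occupancy_miss = 1_200_000  # unc_h_tor_occupancy.miss (event=0x36,umask=0x2f)
    tor_inserts_miss   = 20_000     # unc_h_tor_inserts.miss   (event=0x35,umask=0x2f)

    avg_cycles_in_tor = tor_occupancy_miss / tor_inserts_miss
    print(f"average miss residency: {avg_cycles_in_tor:.1f} uncore clocks")

The same occupancy/inserts pairing works for the Transgress Ingress counters above.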
unc_h_u_clockticks [uncore cache] Uncore Clocks. event=0
unc_h_vert_ring_ad_in_use.* [uncore cache] Counts the number of cycles that the Vertical AD ring is being used at this ring stop. event=0xa6; umask: up_even=0x01, up_odd=0x02, dn_even=0x04, dn_odd=0x08
unc_h_vert_ring_ak_in_use.* [uncore cache] Counts the number of cycles that the Vertical AK ring is being used at this ring stop. event=0xa8; umask: up_even=0x01, up_odd=0x02, dn_even=0x04, dn_odd=0x08
unc_h_vert_ring_bl_in_use.* [uncore cache] Counts the number of cycles that the Vertical BL ring is being used at this ring stop. event=0xaa; umask: up_even=0x01, up_odd=0x02, dn_even=0x04, dn_odd=0x08
unc_h_vert_ring_iv_in_use.* [uncore cache] Counts the number of cycles that the Vertical IV ring is being used at this ring stop. event=0xac; umask: up=0x01, dn=0x04
unc_m2p_egress_cycles_full.* [uncore io] Egress (to CMS) Cycles Full. Counts the number of cycles when the M2PCIe Egress is full. event=0x25; umask: ad_0=0x01, ak_0=0x02, bl_0=0x04, ad_1=0x08, ak_1=0x10, bl_1=0x20
unc_m2p_egress_cycles_ne.* [uncore io] Egress (to CMS) Cycles Not Empty. Counts the number of cycles when the M2PCIe Egress is not empty. event=0x23; umask: ad_0=0x01, ak_0=0x02, bl_0=0x04, ad_1=0x08, ak_1=0x10, bl_1=0x20
unc_m2p_egress_inserts.* [uncore io] Egress (to CMS) Ingress. Counts the number of messages inserted into the M2PCIe Egress queue. event=0x24; umask: ad_0=0x01, ak_0=0x02, bl_0=0x04, ak_crd_0=0x08, ad_1=0x10, ak_1=0x20, bl_1=0x40, ak_crd_1=0x80
unc_m2p_ingress_cycles_ne.* [uncore io] Ingress Queue Cycles Not Empty. Counts the number of cycles when the M2PCIe Ingress is not empty. event=0x10; umask: cbo_idi=0x01, cbo_ncb=0x02, cbo_ncs=0x04, all=0x80
unc_e_edc_access.* [uncore memory, uncore_edc_uclk] Number of EDC Hits or Misses; counts read requests and streaming stores against the MCDRAM cache. Only valid in cache and hybrid memory mode. event=2; umask: hit_clean=0x01 (hit, data in MCDRAM clean with respect to DDR), hit_dirty=0x02 (hit, data in MCDRAM dirty with respect to DDR), miss_clean=0x04 (miss, evicted data clean with respect to DDR), miss_dirty=0x08 (miss, evicted data dirty with respect to DDR), miss_invalid=0x10 (Miss I)
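In cache or hybrid memory mode, the unc_e_edc_access sub-events combine into an MCDRAM cache hit ratio. A minimal sketch; whether .miss_invalid belongs in the denominator is an assumption, since the table only describes it as "Miss I":

    # Sketch: MCDRAM cache hit ratio from unc_e_edc_access (event=2).
    hit_clean, hit_dirty = 900_000, 150_000                       # umask=0x01, 0x02
    miss_clean, miss_dirty, miss_invalid = 40_000, 25_000, 5_000  # umask=0x04, 0x08, 0x10

    hits = hit_clean + hit_dirty
    misses = miss_clean + miss_dirty + miss_invalid  # miss_invalid: assumption
    print(f"MCDRAM hit ratio: {hits / (hits + misses):.2%}")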
unc_e_e_clockticks [uncore memory, uncore_edc_eclk] ECLK count. event=0
unc_e_rpq_inserts [uncore memory] Counts the number of read requests received by the MCDRAM controller. Valid in all three memory modes: flat, cache and hybrid. In cache and hybrid memory mode, counts all read requests as well as streaming stores that hit or miss in the MCDRAM cache. event=1,umask=0x01
unc_e_u_clockticks [uncore memory] UCLK count. event=0
unc_e_wpq_inserts [uncore memory] Counts the number of write requests received by the MCDRAM controller. Valid in all three memory modes: flat, cache and hybrid. In cache and hybrid memory mode, counts all streaming stores, writebacks and read requests that miss in the MCDRAM cache. event=2,umask=0x01
unc_m_cas_count.* [uncore memory, uncore_imc_dclk] event=3; umask: rd=0x01 (CAS Reads), wr=0x02 (CAS Writes), all=0x03 (CAS All)
unc_m_d_clockticks [uncore memory] DCLK count. event=0
unc_m_u_clockticks [uncore memory, uncore_imc_uclk] UCLK count. event=0
mem_uops_retired.dtlb_miss_loads [virtual memory] Counts the number of load micro-ops retired that cause a DTLB miss (Precise Event). Supports address when precise. event=4,period=200003,umask=0x08
page_walks.cycles [virtual memory] Counts the total number of core cycles for all the page walks; cycles for walks started in the speculative path are also included. Counts every cycle when a data (D) page walk or instruction (I) page walk is in progress. event=5,period=200003,umask=0x03
page_walks.d_side_cycles [virtual memory] Counts the total number of core cycles for all the D-side page walks, including walks started in the speculative path. event=5,period=200003,umask=0x01
page_walks.d_side_walks [virtual memory] Counts the total D-side page walks that are completed or started; walks started in the speculative path are also counted. event=5,edge=1,period=100003,umask=0x01
page_walks.i_side_cycles [virtual memory] Counts the total number of core cycles for all the I-side page walks, including walks started in the speculative path. Counts every cycle when an I-side (instruction fetch) page walk is in progress. event=5,period=200003,umask=0x02
page_walks.i_side_walks [virtual memory] Counts the total I-side page walks that are completed. event=5,edge=1,period=100003,umask=0x02
page_walks.walks [virtual memory] Counts the total page walks that are completed (I-side and D-side). event=5,edge=1,period=100003,umask=0x03
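A rough MCDRAM bandwidth estimate can be derived from the RPQ/WPQ request counts above. The 64-byte transfer size per entry is an assumption (one cache line per request), not stated in the table:

    # Sketch: approximate MCDRAM read/write bandwidth from request counts.
    elapsed_s = 1.0
    rpq_inserts = 50_000_000  # unc_e_rpq_inserts (event=1,umask=0x01)
    wpq_inserts = 20_000_000  # unc_e_wpq_inserts (event=2,umask=0x01)

    read_bw  = rpq_inserts * 64 / elapsed_s / 1e9  # 64B/request: assumption
    write_bw = wpq_inserts * 64 / elapsed_s / 1e9
    print(f"read {read_bw:.2f} GB/s, write {write_bw:.2f} GB/s")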
l2_request.all [cache] Counts the total number of L2 Cache Accesses (sum of hits, misses, rejects); front-door requests for CRd/DRd/RFO/ItoM/L2 Prefetches only. Per-core event. event=0x24,period=1000003,umask=0x07
mem_uops_retired.all_loads [cache] Counts the number of load uops retired. Supports address when precise (Precise event). event=0xd0,period=200003,umask=0x81
mem_uops_retired.all_stores [cache] Counts the number of store uops retired. Supports address when precise (Precise event). event=0xd0,period=200003,umask=0x82
mem_uops_retired.load_latency_gt_* [cache] Counts the number of tagged load uops retired that exceed the latency threshold defined in MEC_CR_PEBS_LD_LAT_THRESHOLD. Only counts with PEBS enabled. Supports address when precise (Must be precise). event=0xd0,period=200003,umask=0x05; ldlat: gt_4=0x4, gt_8=0x8, gt_16=0x10, gt_32=0x20, gt_64=0x40, gt_128=0x80, gt_256=0x100, gt_512=0x200, gt_1024=0x400, gt_2048=0x800
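These load-latency events are typically sampled rather than counted. A sketch of one way to launch such a collection through the perf CLI; the PMU name ("cpu") and precise-modifier support vary by machine and kernel, so treat this as an illustration, not a guaranteed command line:

    # Sketch: PEBS load-latency sampling for loads slower than 128 cycles
    # (ldlat=0x80 in the table above).
    import subprocess

    threshold = 128  # cycles
    cmd = ["perf", "record", "-e",
           f"cpu/event=0xd0,umask=0x5,ldlat={threshold}/pp",
           "--", "sleep", "1"]
    print(" ".join(cmd))
    # subprocess.run(cmd, check=True)  # uncomment on a machine with perf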
mem_uops_retired.store_latency [cache] Counts the number of store uops retired; same as MEM_UOPS_RETIRED.ALL_STORES. Supports address when precise (Must be precise). event=0xd0,period=200003,umask=0x06
idq_bubbles.core [frontend] Counts the subset of the Topdown Slots event in which no operation was delivered to the back-end pipeline due to instruction fetch limitations, while the back-end could have accepted more operations. Common examples include instruction cache misses and x86 instruction decode limitations. Software can use this event as the numerator for the Frontend Bound metric (or top-level category) of the Top-down Microarchitecture Analysis method. event=0x9c,period=1000003,umask=0x01
ocr.demand_data_rd.l3_miss [memory] Counts cacheable demand data reads that were not supplied by the L3 cache. event=0xb7,period=100003,umask=1,offcore_rsp=0x3FBFC00001
ocr.demand_data_rd.l3_miss [memory] Counts demand data reads that were not supplied by the L3 cache. event=0x2a,period=100003,umask=1,offcore_rsp=0xFE7F8000001
ocr.demand_rfo.l3_miss [memory] Counts demand reads for ownership, including SWPREFETCHW (which is an RFO), that were not supplied by the L3 cache. event=0xb7,period=100003,umask=1,offcore_rsp=0x3FBFC00002
ocr.demand_rfo.l3_miss [memory] Counts demand read for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that were not supplied by the L3 cache. event=0x2a,period=100003,umask=1,offcore_rsp=0xFE7F8000002
ocr.demand_data_rd.any_response [other] Counts cacheable demand data reads. Catch-all value for any response type, including response types not defined in the OCR; if this is set, all other response types will be ignored. event=0xb7,period=100003,umask=1,offcore_rsp=0x10001
ocr.demand_data_rd.dram [other] Counts cacheable demand data reads that were supplied by DRAM. event=0xb7,period=100003,umask=1,offcore_rsp=0x1FBC000001
ocr.demand_data_rd.dram [other] Counts demand data reads that were supplied by DRAM. event=0x2a,period=100003,umask=1,offcore_rsp=0x1E780000001
ocr.demand_rfo.any_response [other] Counts demand reads for ownership, including SWPREFETCHW (which is an RFO). Catch-all value for any response type. event=0xb7,period=100003,umask=1,offcore_rsp=0x10002
cpu_clk_unhalted.core [pipeline] Core cycles when the core is not in a halt state. event=0x3c,period=2000003. Counts the number of core cycles while the core is not in a halt state; the core enters the halt state when it is running the HLT instruction. This event is a component in many key event ratios. The core frequency may change from time to time due to transitions associated with Enhanced Intel SpeedStep Technology or TM2, so this event may have a changing ratio with regard to time. When the core frequency is constant, this event can approximate elapsed time while the core was not in the halt state. It is counted on a dedicated fixed counter, leaving the programmable counters available for other events.
cpu_clk_unhalted.core_p [pipeline] Thread cycles when thread is not in halt state [This event is an alias to CPU_CLK_UNHALTED.THREAD_P]. event=0x3c,period=2000003. This is an architectural event that counts the number of thread cycles while the thread is not in a halt state; the thread enters the halt state when it is running the HLT instruction. The core frequency may change from time to time due to power or thermal throttling, so this event may have a changing ratio with regard to wall clock time.
cpu_clk_unhalted.ref_tsc [pipeline] Reference cycles when the core is not in halt state. event=0,period=2000003,umask=0x03. Counts the number of reference cycles when the core is not in a halt state; the core enters the halt state when it is running the HLT or MWAIT instruction. This event is not affected by core frequency changes (for example, P-states or TM2 transitions) and increments at the same frequency as the time stamp counter, so it can approximate elapsed time while the core was not in a halt state. Note: on all current platforms this event stops counting during the duty-off periods of 'throttling (TM)' states, when the processor is 'halted'. The counter is updated at a lower clock rate than the core clock, so the overflow status bit may appear 'sticky': after the counter overflows and software clears the overflow status bit and resets the counter to less than MAX, the reset value is not clocked in immediately, so the overflow status bit flips 'high (1)' and generates another PMI (if enabled), after which the reset value is clocked into the counter. Software will therefore take the interrupt and read overflow status bit 34 as '1' while the counter value is less than MAX; software should ignore this case.
cpu_clk_unhalted.ref_tsc_p [pipeline] Counts the number of unhalted reference clock cycles. event=0x3c,period=2000003,umask=0x01. Counts the number of reference cycles while the core is not in a halt state (entered when running the HLT instruction). This event is not affected by core frequency changes and increments at a fixed frequency that is also used for the Time Stamp Counter (TSC). This event uses a programmable general-purpose performance counter.
cpu_clk_unhalted.ref_tsc_p [pipeline] Reference cycles when the core is not in halt state. event=0x3c,period=2000003,umask=0x01. Same behavior and caveats as CPU_CLK_UNHALTED.REF_TSC above, counted on a general-purpose counter.
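Because the ref_tsc flavors tick at the fixed TSC rate while the core-cycle flavors tick at the actual (varying) core clock, their ratio gives the average unhalted frequency. A minimal sketch; tsc_ghz and the counter values are placeholders:

    # Sketch: average core frequency while unhalted.
    tsc_ghz = 2.4                  # machine-specific assumption
    core_cycles = 3_600_000_000    # cpu_clk_unhalted.thread
    ref_cycles  = 2_400_000_000    # cpu_clk_unhalted.ref_tsc

    avg_ghz = tsc_ghz * core_cycles / ref_cycles
    print(f"average unhalted frequency: {avg_ghz:.2f} GHz")  # 3.60 GHz here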
cpu_clk_unhalted.thread [pipeline] Core cycles when the thread is not in a halt state. event=0x3c,period=2000003. Counts the number of core cycles while the thread is not in a halt state (entered when running the HLT instruction). This event is a component in many key event ratios. The core frequency may change due to transitions associated with Enhanced Intel SpeedStep Technology or TM2, so this event may have a changing ratio with regard to time; when the core frequency is constant it can approximate elapsed time while the core was not halted. Counted on a dedicated fixed counter, leaving the programmable counters available for other events.
cpu_clk_unhalted.thread_p [pipeline] Thread cycles when thread is not in halt state [This event is an alias to CPU_CLK_UNHALTED.CORE_P]. event=0x3c,period=2000003. Architectural event; same behavior as CPU_CLK_UNHALTED.CORE_P above.
inst_retired.any_p [pipeline] Counts the number of instructions retired (Precise event). event=0xc0,period=2000003
ld_blocks.store_forward [pipeline] Counts the number of occurrences of a retired load being blocked because its address partially overlaps with an older store (size mismatch), unknown_sta/bad_forward (Precise event). event=3,period=1000003,umask=0x02
misc_retired.lbr_inserts [pipeline] Counts the number of LBR entries recorded, i.e. an LBR record is inserted. Requires LBRs to be enabled in IA32_LBR_CTL (Precise event). event=0xe4,period=1000003,umask=0x01
topdown.backend_bound_slots [pipeline] Counts the subset of the Topdown Slots event that was not consumed by the back-end pipeline due to lack of back-end resources, as a result of memory subsystem delays, execution unit limitations, or other conditions. Software can use this event as the numerator for the Backend Bound metric (or top-level category) of the Top-down Microarchitecture Analysis method. event=0xa4,period=10000003,umask=0x02
topdown.slots [pipeline] TMA slots available for an unhalted logical processor. Fixed counter, architectural event. event=0,period=10000003,umask=0x04. Increments by the machine width of the narrowest pipeline as employed by the Top-down Microarchitecture Analysis (TMA) method; software can use this event as the denominator for the top-level TMA metrics. Counted on a designated fixed counter (Fixed Counter 3).
topdown.slots_p [pipeline] TMA slots available for an unhalted logical processor. General counter, architectural event. event=0xa4,period=10000003,umask=0x01. Increments by the machine width of the narrowest pipeline as employed by the Top-down Microarchitecture Analysis method.
topdown_bad_speculation.all [pipeline] Counts the number of issue slots that were not consumed by the backend because allocation is stalled due to a mispredicted jump or a machine clear [This event is an alias to TOPDOWN_BAD_SPECULATION.ALL_P]. event=0x73,period=1000003
topdown_bad_speculation.all_p [pipeline] Alias to TOPDOWN_BAD_SPECULATION.ALL. event=0x73,period=1000003
topdown_be_bound.all [pipeline] Counts the number of retirement slots not consumed due to backend stalls [This event is an alias to TOPDOWN_BE_BOUND.ALL_P]. event=0xa4,period=1000003,umask=0x02
topdown_be_bound.all_p [pipeline] Alias to TOPDOWN_BE_BOUND.ALL. event=0xa4,period=1000003,umask=0x02
topdown_fe_bound.all [pipeline] Fixed Counter: Counts the number of retirement slots not consumed due to front end stalls. event=0,period=1000003,umask=0x06
topdown_fe_bound.all_p [pipeline] Counts the number of retirement slots not consumed due to front end stalls. event=0x9c,period=1000003,umask=0x01
topdown_retiring.all [pipeline] Fixed Counter: Counts the number of consumed retirement slots (Precise event). event=0,period=1000003,umask=0x07
topdown_retiring.all_p [pipeline] Counts the number of consumed retirement slots (Precise event). event=0xc2,period=1000003,umask=0x02
itlb_misses.walk_completed [virtual memory] Counts the number of page walks completed due to instruction fetch misses to any page size, i.e. fetches whose address translations missed in all Translation Lookaside Buffer (TLB) levels; includes page walks that page fault. event=0x85,period=2000003,umask=0x0e
lock_cycles.cache_lock_duration [cache] Cycles when L1D is locked; a superset of the 0x1 mask (BUS_LOCK_CLOCKS.BUS_LOCK_DURATION). event=0x42,period=2000003,umask=0x02
mem_load_uops_misc_retired.local_dram [cache] Counts the number of load ops retired that miss the L3 cache and hit in DRAM. event=0xd4,period=1000003,umask=0x02
fp_vint_uops_executed.std [floating point] Counts the number of uops executed on the floating point and vector integer store data port. event=0xb2,period=1000003,umask=0x01
ocr.demand_data_rd.dram [other] Counts demand data reads that were supplied by DRAM. event=0xb7,period=100003,umask=1,offcore_rsp=0x184000001
unc_hac_cbo_tor_allocation.* [uncore cache, uncore_hac_cbo] event=0x35; umask: drd=0x01 (asserted on coherent DRD + DRdPref allocations into the queue, cacheable only), all=0x08 (number of all entries allocated, including retries)
unc_hac_arb_coh_trk_requests.all [uncore interconnect, uncore_hac_arb] Number of entries allocated. Accounts for any type: e.g. Snoop, etc. event=0x84,umask=0x01
unc_hac_arb_req_trk_request.drd [uncore interconnect] Number of all coherent Data Read entries. Doesn't include prefetches. event=0x81,umask=0x02
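The table's own long descriptions name idq_bubbles.core and topdown.backend_bound_slots as the Frontend/Backend Bound numerators and topdown.slots as the denominator; using the four topdown_*.all aliases the same way (an assumption that follows the same pattern) gives a level-1 Top-down breakdown:

    # Sketch: Top-down level-1 percentages from the slot counters above.
    # Values are placeholders for counts read via perf stat.
    slots    = 40_000_000  # topdown.slots
    fe_bound =  6_000_000  # topdown_fe_bound.all
    be_bound = 10_000_000  # topdown_be_bound.all
    bad_spec =  4_000_000  # topdown_bad_speculation.all
    retiring = 20_000_000  # topdown_retiring.all

    for name, v in [("frontend bound", fe_bound), ("backend bound", be_bound),
                    ("bad speculation", bad_spec), ("retiring", retiring)]:
        print(f"{name:>16}: {v / slots:.1%}")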
unc_hac_arb_transactions.* [uncore interconnect] event=0x8a; umask: all=0x01 (Number of all CMI transactions), reads=0x02 (Number of all CMI reads), writes=0x04 (Number of all CMI writes, not including Mflush)
unc_hac_arb_trk_requests.all [uncore interconnect] Total number of all outgoing entries allocated. Accounts for coherent and non-coherent traffic. event=0x81,umask=0x01
unc_mc0_rdcas_count_freerun [uncore memory] Counts every CAS read command sent from Memory Controller 0 to DRAM (sum of all channels). Each CAS command can be for 32B or 64B of data. event=0xff,umask=0x20
unc_mc0_total_reqcount_freerun [uncore memory] Counts every read and write request entering Memory Controller 0 (sum of all channels). All requests are counted as one, whether they are 32B or 64B read/write or partial/full line writes. Some write requests to the same address may merge into a single write command to DRAM, so the total request count may be higher than total DRAM bandwidth. event=0xff,umask=0x10
unc_mc0_wrcas_count_freerun [uncore memory] Counts every CAS write command sent from Memory Controller 0 to DRAM (sum of all channels). Each CAS command can be for 32B or 64B of data. event=0xff,umask=0x30
unc_mc1_rdcas_count_freerun [uncore memory] Counts every CAS read command sent from Memory Controller 1 to DRAM (sum of all channels). Each CAS command can be for 32B or 64B of data. event=0xff,umask=0x20
unc_mc1_total_reqcount_freerun [uncore memory] Counts every read and write request entering Memory Controller 1 (sum of all channels); same counting rules as Memory Controller 0 above. event=0xff,umask=0x10
unc_mc1_wrcas_count_freerun [uncore memory] Counts every CAS write command sent from Memory Controller 1 to DRAM (sum of all channels). Each CAS command can be for 32B or 64B of data. event=0xff,umask=0x30
unc_m_rd_data [uncore memory] Number of bytes read from DRAM, in 32B chunks. Counter increments by 1 after receiving each 32B chunk of data. event=0x3a
unc_m_total_data [uncore memory] Total number of read and write byte transfers to/from DRAM, in 32B chunks. Counter increments by 1 after sending or receiving each 32B chunk of data. event=0x3c
unc_m_wr_data [uncore memory] Number of bytes written to DRAM, in 32B chunks. Counter increments by 1 after sending each 32B chunk of data. event=0x3b
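Since unc_m_rd_data and unc_m_wr_data are documented above as incrementing once per 32 bytes transferred, DRAM bandwidth follows directly:

    # Sketch: DRAM bandwidth from the 32B-chunk counters.
    # Counter values are placeholders for counts read via perf stat.
    elapsed_s = 1.0
    rd_chunks = 400_000_000  # unc_m_rd_data (event=0x3a)
    wr_chunks = 150_000_000  # unc_m_wr_data (event=0x3b)

    print(f"read  {rd_chunks * 32 / elapsed_s / 1e9:.2f} GB/s")
    print(f"write {wr_chunks * 32 / elapsed_s / 1e9:.2f} GB/s")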
cache_lock_cycles.l1d [cache] Cycles L1D locked. event=0x63,period=2000000,umask=0x02
cache_lock_cycles.l1d_l2 [cache] Cycles L1D and L2 locked. event=0x63,period=2000000,umask=0x01
l1d.* [cache] event=0x51,period=2000000; umask: repl=0x01 (L1 data cache lines allocated), m_repl=0x02 (L1D cache lines allocated in the M state), m_evict=0x04 (L1D cache lines replaced in M state), m_snoop_evict=0x08 (L1D snoop eviction of cache lines in M state)
l1d_all_ref.* [cache] event=0x43,period=2000000; umask: any=0x01 (all references to the L1 data cache), cacheable=0x02 (L1 data cacheable reads and writes)
l1d_cache_ld.* [cache] L1 data cache reads. event=0x40,period=2000000; umask: i_state=0x01 (misses), s_state=0x02, e_state=0x04, m_state=0x08, mesi=0x0f (all reads)
l1d_cache_lock.* [cache] L1 data cache load locks. event=0x42,period=2000000; umask: hit=0x01, s_state=0x02, e_state=0x04, m_state=0x08
l1d_cache_lock_fb_hit [cache] L1D load lock accepted in fill buffer. event=0x53,period=2000000,umask=0x01
l1d_cache_prefetch_lock_fb_hit [cache] L1D prefetch load lock accepted in fill buffer. event=0x52,period=2000000,umask=0x01
l1d_cache_st.* [cache] L1 data cache stores. event=0x41,period=2000000; umask: s_state=0x02, e_state=0x04, m_state=0x08
l1d_prefetch.* [cache] L1D hardware prefetch. event=0x4e,period=200000; umask: requests=0x01, miss=0x02 (misses), triggers=0x04 (requests triggered)
l1d_wb_l2.* [cache] L1 writebacks to L2. event=0x28,period=100000; umask: i_state=0x01 (misses), s_state=0x02, e_state=0x04, m_state=0x08, mesi=0x0f (all)
l2_data_rqsts.* [cache] L2 data requests. event=0x26,period=200000; umask: demand.i_state=0x01 (misses), demand.s_state=0x02, demand.e_state=0x04, demand.m_state=0x08, demand.mesi=0x0f, prefetch.i_state=0x10 (misses), prefetch.s_state=0x20, prefetch.e_state=0x40, prefetch.m_state=0x80, prefetch.mesi=0xf0 (all prefetches), any=0xff (all L2 data requests)
l2_lines_in.* [cache] L2 lines allocated. event=0xf1,period=100000; umask: s_state=0x02, e_state=0x04, any=0x07
l2_lines_out.* [cache] L2 lines evicted. event=0xf2,period=100000; umask: demand_clean=0x01 (by a demand request), demand_dirty=0x02 (modified lines, by a demand request), prefetch_clean=0x04 (by a prefetch request), prefetch_dirty=0x08 (modified lines, by a prefetch request), any=0x0f
l2_rqsts.* [cache] L2 requests. event=0x24,period=200000; umask: ld_hit=0x01, ld_miss=0x02, loads=0x03, rfo_hit=0x04, rfo_miss=0x08, rfos=0x0c, ifetch_hit=0x10, ifetch_miss=0x20, ifetches=0x30, prefetch_hit=0x40, prefetch_miss=0x80, prefetches=0xc0, miss=0xaa (all L2 misses), references=0xff (all L2 requests)
l2_transactions.* [cache] L2 transactions. event=0xf0,period=200000; umask: load=0x01, rfo=0x02, ifetch=0x04, prefetch=0x08, l1d_wb=0x10 (L1D writeback to L2), fill=0x20, wb=0x40 (writeback to LLC), any=0x80 (all)
l2_write.lock.* [cache] L2 demand lock RFOs. event=0x27,period=100000; umask: i_state=0x10 (misses), s_state=0x20, e_state=0x40, m_state=0x80, hit=0xe0 (all that hit the cache), mesi=0xf0 (all)
l2_write.rfo.* [cache] L2 demand store RFOs. event=0x27,period=100000; umask: i_state=0x01 (misses), s_state=0x02, m_state=0x08, hit=0x0e (all that hit the cache), mesi=0x0f (all)
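The composite umasks in l2_rqsts are ORs of the single-bit sub-events, which is easy to verify, and hit/miss ratios fall out of the counts. A minimal sketch with placeholder values:

    # Sketch: l2_rqsts.miss (0xaa) is the OR of the four *_miss bits,
    # and miss ratio is miss/references.
    ld_miss, rfo_miss, ifetch_miss, pf_miss = 0x02, 0x08, 0x20, 0x80
    assert ld_miss | rfo_miss | ifetch_miss | pf_miss == 0xAA

    misses, references = 1_000_000, 9_000_000  # l2_rqsts.miss / .references
    print(f"L2 miss ratio: {misses / references:.1%}")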
longest_lat_cache.miss [cache] Longest latency cache miss. event=0x2e,period=100000,umask=0x41
longest_lat_cache.reference [cache] Longest latency cache reference. event=0x2e,period=200000,umask=0x4f
mem_inst_retired.latency_above_threshold_* [cache] Memory instructions retired above N clocks (Precise Event). event=0xb,umask=0x10; threshold N sets ldlat and the default sampling period: 0 (ldlat=None, period=2000000), 4 (ldlat=0x4, period=50000), 8 (ldlat=0x8, period=20000), 16 (ldlat=0x10, period=10000), 32 (ldlat=0x20, period=5000), 64 (ldlat=0x40, period=2000), 128 (ldlat=0x80, period=1000), 256 (ldlat=0x100, period=500), 512 (ldlat=0x200, period=200), 1024 (ldlat=0x400, period=100), 2048 (ldlat=0x800, period=50), 4096 (ldlat=0x1000, period=20), 8192 (ldlat=0x2000, period=10), 16384 (ldlat=0x4000, period=5), 32768 (ldlat=0x8000, period=3)
mem_inst_retired.loads [cache] Instructions retired which contain a load (Precise Event). event=0xb,period=2000000,umask=0x01
mem_inst_retired.stores [cache] Instructions retired which contain a store (Precise Event). event=0xb,period=2000000,umask=0x02
mem_load_retired.hit_lfb [cache] Retired loads that miss L1D and hit a previously allocated LFB (Precise Event). event=0xcb,period=200000,umask=0x40
mem_load_retired.l1d_hit [cache] Retired loads that hit the L1 data cache (Precise Event). event=0xcb,period=2000000,umask=0x01
mem_load_retired.l2_hit [cache] Retired loads that hit the L2 cache (Precise Event). event=0xcb,period=200000,umask=0x02
mem_load_retired.llc_miss [cache] Retired loads that miss the LLC (Precise Event). event=0xcb,period=10000,umask=0x10
mem_load_retired.llc_unshared_hit [cache] Retired loads that hit valid versions in the LLC (Precise Event). event=0xcb,period=40000,umask=0x04
mem_load_retired.other_core_l2_hit_hitm [cache] Retired loads that hit a sibling core's L2 in modified or unmodified states (Precise Event). event=0xcb,period=40000,umask=0x08
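The latency_above_threshold_N counters are cumulative: every load counted above 2N was also counted above N, so per-bucket counts are differences of adjacent thresholds. A sketch under the assumption that the values have already been scaled to event counts (the differing default sampling periods above mean raw sample counts must be normalized first):

    # Sketch: latency histogram buckets from cumulative gt_N counts.
    gt = {4: 10_000, 8: 7_000, 16: 4_000, 32: 2_500, 64: 1_200,
          128: 400, 256: 90, 512: 20, 1024: 5}  # placeholder event counts

    thresholds = sorted(gt)
    for lo, hi in zip(thresholds, thresholds[1:]):
        print(f"[{lo:>5}, {hi:>5}) cycles: {gt[lo] - gt[hi]}")
    print(f"[{thresholds[-1]:>5},   inf) cycles: {gt[thresholds[-1]]}")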
mem_uncore_retired.local_dram [cache] Load instructions retired with a data source of local DRAM or locally homed remote HITM (Precise Event). event=0xf,period=10000,umask=0x20
mem_uncore_retired.other_core_l2_hitm [cache] Load instructions retired that HIT modified data in a sibling core (Precise Event). event=0xf,period=40000,umask=0x02
mem_uncore_retired.remote_cache_local_home_hit [cache] Load instructions retired with a remote cache HIT data source (Precise Event). event=0xf,period=20000,umask=0x08
mem_uncore_retired.remote_dram [cache] Load instructions retired from remote DRAM and remote home-remote cache HITM (Precise Event). event=0xf,period=10000,umask=0x10
mem_uncore_retired.uncacheable [cache] Load instructions retired, IO (Precise Event). event=0xf,period=4000,umask=0x80
offcore_requests.l1d_writeback [cache] Offcore L1 data cache writebacks. event=0xb0,period=100000,umask=0x40
offcore_requests_sq_full [cache] Offcore requests blocked due to Super Queue full. event=0xb2,period=100000,umask=0x01
offcore_response.any_data.* [cache] Offcore data reads. event=0xb7,period=100000,umask=1; offcore_rsp: llc_hit_no_other_core=0x111 (satisfied by the LLC and not found in a sibling core), llc_hit_other_core_hit=0x211 (satisfied by the LLC and HIT in a sibling core), llc_hit_other_core_hitm=0x411 (satisfied by the LLC and HITM in a sibling core), local_cache=0x711 (satisfied by the LLC), remote_cache_hitm=0x811 (HITM in a remote cache), remote_cache_hit=0x1011 (HIT in a remote cache), remote_cache=0x1811 (satisfied by a remote cache), remote_cache_dram=0x3811 (satisfied by a remote cache or remote DRAM), local_cache_dram=0x4711 (satisfied by the LLC or local DRAM), any_cache_dram=0x7F11 (satisfied by any cache or DRAM), io_csr_mmio=0x8011 (satisfied by the IO, CSR, MMIO unit), any_location=0xFF11 (all offcore data reads)
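The offcore_rsp values in these groups factor into a request-type low byte (0x11 data reads, 0x44 code reads, 0x22 RFOs, 0x08 writebacks, 0xFF any) and a response-type high byte (0x01 LLC hit no other core, ..., 0x80 IO/CSR/MMIO). A sketch that composes them; bit meanings beyond what the table names are not asserted here:

    # Sketch: compose an offcore_rsp value from the request/response
    # qualifiers used in this table (response byte << 8 | request byte).
    REQUEST = {"any_data": 0x11, "any_ifetch": 0x44, "any_rfo": 0x22,
               "corewb": 0x08, "any_request": 0xFF}
    RESPONSE = {"llc_hit_no_other_core": 0x01, "llc_hit_other_core_hit": 0x02,
                "llc_hit_other_core_hitm": 0x04, "remote_cache_hitm": 0x08,
                "remote_cache_hit": 0x10, "any_cache_dram": 0x7F,
                "io_csr_mmio": 0x80, "any_location": 0xFF}

    def offcore_rsp(req: str, rsp: str) -> int:
        return (RESPONSE[rsp] << 8) | REQUEST[req]

    assert offcore_rsp("any_rfo", "io_csr_mmio") == 0x8022  # matches the table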
unitevent=0xb7,period=100000,umask=1,offcore_rsp=0x804400offcore_response.any_ifetch.llc_hit_no_other_corecacheOffcore code reads satisfied by the LLC and not found in a sibling coreevent=0xb7,period=100000,umask=1,offcore_rsp=0x14400offcore_response.any_ifetch.llc_hit_other_core_hitcacheOffcore code reads satisfied by the LLC and HIT in a sibling coreevent=0xb7,period=100000,umask=1,offcore_rsp=0x24400offcore_response.any_ifetch.llc_hit_other_core_hitmcacheOffcore code reads satisfied by the LLC  and HITM in a sibling coreevent=0xb7,period=100000,umask=1,offcore_rsp=0x44400offcore_response.any_ifetch.local_cachecacheOffcore code reads satisfied by the LLCevent=0xb7,period=100000,umask=1,offcore_rsp=0x74400offcore_response.any_ifetch.local_cache_dramcacheOffcore code reads satisfied by the LLC or local DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x474400offcore_response.any_ifetch.remote_cachecacheOffcore code reads satisfied by a remote cacheevent=0xb7,period=100000,umask=1,offcore_rsp=0x184400offcore_response.any_ifetch.remote_cache_dramcacheOffcore code reads satisfied by a remote cache or remote DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x384400offcore_response.any_ifetch.remote_cache_hitcacheOffcore code reads that HIT in a remote cacheevent=0xb7,period=100000,umask=1,offcore_rsp=0x104400offcore_response.any_ifetch.remote_cache_hitmcacheOffcore code reads that HITM in a remote cacheevent=0xb7,period=100000,umask=1,offcore_rsp=0x84400offcore_response.any_request.any_cache_dramcacheOffcore requests satisfied by any cache or DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x7FFF00offcore_response.any_request.any_locationcacheAll offcore requestsevent=0xb7,period=100000,umask=1,offcore_rsp=0xFFFF00offcore_response.any_request.io_csr_mmiocacheOffcore requests satisfied by the IO, CSR, MMIO unitevent=0xb7,period=100000,umask=1,offcore_rsp=0x80FF00offcore_response.any_request.llc_hit_no_other_corecacheOffcore requests satisfied by the LLC and not found in a sibling coreevent=0xb7,period=100000,umask=1,offcore_rsp=0x1FF00offcore_response.any_request.llc_hit_other_core_hitcacheOffcore requests satisfied by the LLC and HIT in a sibling coreevent=0xb7,period=100000,umask=1,offcore_rsp=0x2FF00offcore_response.any_request.llc_hit_other_core_hitmcacheOffcore requests satisfied by the LLC  and HITM in a sibling coreevent=0xb7,period=100000,umask=1,offcore_rsp=0x4FF00offcore_response.any_request.local_cachecacheOffcore requests satisfied by the LLCevent=0xb7,period=100000,umask=1,offcore_rsp=0x7FF00offcore_response.any_request.local_cache_dramcacheOffcore requests satisfied by the LLC or local DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x47FF00offcore_response.any_request.remote_cachecacheOffcore requests satisfied by a remote cacheevent=0xb7,period=100000,umask=1,offcore_rsp=0x18FF00offcore_response.any_request.remote_cache_dramcacheOffcore requests satisfied by a remote cache or remote DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x38FF00offcore_response.any_request.remote_cache_hitcacheOffcore requests that HIT in a remote cacheevent=0xb7,period=100000,umask=1,offcore_rsp=0x10FF00offcore_response.any_request.remote_cache_hitmcacheOffcore requests that HITM in a remote cacheevent=0xb7,period=100000,umask=1,offcore_rsp=0x8FF00offcore_response.any_rfo.any_cache_dramcacheOffcore RFO requests satisfied by any cache or DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x7F2200offcore_response.any_rfo.any_locationcacheAll offcore RFO 
requestsevent=0xb7,period=100000,umask=1,offcore_rsp=0xFF2200offcore_response.any_rfo.io_csr_mmiocacheOffcore RFO requests satisfied by the IO, CSR, MMIO unitevent=0xb7,period=100000,umask=1,offcore_rsp=0x802200offcore_response.any_rfo.llc_hit_no_other_corecacheOffcore RFO requests satisfied by the LLC and not found in a sibling coreevent=0xb7,period=100000,umask=1,offcore_rsp=0x12200offcore_response.any_rfo.llc_hit_other_core_hitcacheOffcore RFO requests satisfied by the LLC and HIT in a sibling coreevent=0xb7,period=100000,umask=1,offcore_rsp=0x22200offcore_response.any_rfo.llc_hit_other_core_hitmcacheOffcore RFO requests satisfied by the LLC  and HITM in a sibling coreevent=0xb7,period=100000,umask=1,offcore_rsp=0x42200offcore_response.any_rfo.local_cachecacheOffcore RFO requests satisfied by the LLCevent=0xb7,period=100000,umask=1,offcore_rsp=0x72200offcore_response.any_rfo.local_cache_dramcacheOffcore RFO requests satisfied by the LLC or local DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x472200offcore_response.any_rfo.remote_cachecacheOffcore RFO requests satisfied by a remote cacheevent=0xb7,period=100000,umask=1,offcore_rsp=0x182200offcore_response.any_rfo.remote_cache_dramcacheOffcore RFO requests satisfied by a remote cache or remote DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x382200offcore_response.any_rfo.remote_cache_hitcacheOffcore RFO requests that HIT in a remote cacheevent=0xb7,period=100000,umask=1,offcore_rsp=0x102200offcore_response.any_rfo.remote_cache_hitmcacheOffcore RFO requests that HITM in a remote cacheevent=0xb7,period=100000,umask=1,offcore_rsp=0x82200offcore_response.corewb.any_cache_dramcacheOffcore writebacks to any cache or DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x7F0800offcore_response.corewb.any_locationcacheAll offcore writebacksevent=0xb7,period=100000,umask=1,offcore_rsp=0xFF0800offcore_response.corewb.io_csr_mmiocacheOffcore writebacks to the IO, CSR, MMIO unitevent=0xb7,period=100000,umask=1,offcore_rsp=0x800800offcore_response.corewb.llc_hit_no_other_corecacheOffcore writebacks to the LLC and not found in a sibling coreevent=0xb7,period=100000,umask=1,offcore_rsp=0x10800offcore_response.corewb.llc_hit_other_core_hitmcacheOffcore writebacks to the LLC  and HITM in a sibling coreevent=0xb7,period=100000,umask=1,offcore_rsp=0x40800offcore_response.corewb.local_cachecacheOffcore writebacks to the LLCevent=0xb7,period=100000,umask=1,offcore_rsp=0x70800offcore_response.corewb.local_cache_dramcacheOffcore writebacks to the LLC or local DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x470800offcore_response.corewb.remote_cachecacheOffcore writebacks to a remote cacheevent=0xb7,period=100000,umask=1,offcore_rsp=0x180800offcore_response.corewb.remote_cache_dramcacheOffcore writebacks to a remote cache or remote DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x380800offcore_response.corewb.remote_cache_hitcacheOffcore writebacks that HIT in a remote cacheevent=0xb7,period=100000,umask=1,offcore_rsp=0x100800offcore_response.corewb.remote_cache_hitmcacheOffcore writebacks that HITM in a remote cacheevent=0xb7,period=100000,umask=1,offcore_rsp=0x80800offcore_response.data_ifetch.any_cache_dramcacheOffcore code or data read requests satisfied by any cache or DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x7F7700offcore_response.data_ifetch.any_locationcacheAll offcore code or data read requestsevent=0xb7,period=100000,umask=1,offcore_rsp=0xFF7700offcore_response.data_ifetch.io_csr_mmiocacheOffcore code or data read requests satisfied 
by the IO, CSR, MMIO unitevent=0xb7,period=100000,umask=1,offcore_rsp=0x807700offcore_response.data_ifetch.llc_hit_no_other_corecacheOffcore code or data read requests satisfied by the LLC and not found in a sibling coreevent=0xb7,period=100000,umask=1,offcore_rsp=0x17700offcore_response.data_ifetch.llc_hit_other_core_hitcacheOffcore code or data read requests satisfied by the LLC and HIT in a sibling coreevent=0xb7,period=100000,umask=1,offcore_rsp=0x27700offcore_response.data_ifetch.llc_hit_other_core_hitmcacheOffcore code or data read requests satisfied by the LLC  and HITM in a sibling coreevent=0xb7,period=100000,umask=1,offcore_rsp=0x47700offcore_response.data_ifetch.local_cachecacheOffcore code or data read requests satisfied by the LLCevent=0xb7,period=100000,umask=1,offcore_rsp=0x77700offcore_response.data_ifetch.local_cache_dramcacheOffcore code or data read requests satisfied by the LLC or local DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x477700offcore_response.data_ifetch.remote_cachecacheOffcore code or data read requests satisfied by a remote cacheevent=0xb7,period=100000,umask=1,offcore_rsp=0x187700offcore_response.data_ifetch.remote_cache_dramcacheOffcore code or data read requests satisfied by a remote cache or remote DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x387700offcore_response.data_ifetch.remote_cache_hitcacheOffcore code or data read requests that HIT in a remote cacheevent=0xb7,period=100000,umask=1,offcore_rsp=0x107700offcore_response.data_ifetch.remote_cache_hitmcacheOffcore code or data read requests that HITM in a remote cacheevent=0xb7,period=100000,umask=1,offcore_rsp=0x87700offcore_response.data_in.any_cache_dramcacheOffcore request = all data, response = any cache_dramevent=0xb7,period=100000,umask=1,offcore_rsp=0x7F3300offcore_response.data_in.any_locationcacheOffcore request = all data, response = any locationevent=0xb7,period=100000,umask=1,offcore_rsp=0xFF3300offcore_response.data_in.io_csr_mmiocacheOffcore data reads, RFOs, and prefetches satisfied by the IO, CSR, MMIO unitevent=0xb7,period=100000,umask=1,offcore_rsp=0x803300offcore_response.data_in.llc_hit_no_other_corecacheOffcore data reads, RFOs, and prefetches satisfied by the LLC and not found in a sibling coreevent=0xb7,period=100000,umask=1,offcore_rsp=0x13300offcore_response.data_in.llc_hit_other_core_hitcacheOffcore data reads, RFOs, and prefetches satisfied by the LLC and HIT in a sibling coreevent=0xb7,period=100000,umask=1,offcore_rsp=0x23300offcore_response.data_in.llc_hit_other_core_hitmcacheOffcore data reads, RFOs, and prefetches satisfied by the LLC  and HITM in a sibling coreevent=0xb7,period=100000,umask=1,offcore_rsp=0x43300offcore_response.data_in.local_cachecacheOffcore request = all data, response = local cacheevent=0xb7,period=100000,umask=1,offcore_rsp=0x73300offcore_response.data_in.local_cache_dramcacheOffcore request = all data, response = local cache or dramevent=0xb7,period=100000,umask=1,offcore_rsp=0x473300offcore_response.data_in.remote_cachecacheOffcore request = all data, response = remote cacheevent=0xb7,period=100000,umask=1,offcore_rsp=0x183300offcore_response.data_in.remote_cache_dramcacheOffcore request = all data, response = remote cache or dramevent=0xb7,period=100000,umask=1,offcore_rsp=0x383300offcore_response.data_in.remote_cache_hitcacheOffcore data reads, RFOs, and prefetches that HIT in a remote cacheevent=0xb7,period=100000,umask=1,offcore_rsp=0x103300offcore_response.data_in.remote_cache_hitmcacheOffcore data reads, RFOs, and prefetches 
that HITM in a remote cacheevent=0xb7,period=100000,umask=1,offcore_rsp=0x83300offcore_response.demand_data.any_cache_dramcacheOffcore demand data requests satisfied by any cache or DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x7F0300offcore_response.demand_data.any_locationcacheAll offcore demand data requestsevent=0xb7,period=100000,umask=1,offcore_rsp=0xFF0300offcore_response.demand_data.io_csr_mmiocacheOffcore demand data requests satisfied by the IO, CSR, MMIO unitevent=0xb7,period=100000,umask=1,offcore_rsp=0x800300offcore_response.demand_data.llc_hit_no_other_corecacheOffcore demand data requests satisfied by the LLC and not found in a sibling coreevent=0xb7,period=100000,umask=1,offcore_rsp=0x10300offcore_response.demand_data.llc_hit_other_core_hitcacheOffcore demand data requests satisfied by the LLC and HIT in a sibling coreevent=0xb7,period=100000,umask=1,offcore_rsp=0x20300offcore_response.demand_data.llc_hit_other_core_hitmcacheOffcore demand data requests satisfied by the LLC  and HITM in a sibling coreevent=0xb7,period=100000,umask=1,offcore_rsp=0x40300offcore_response.demand_data.local_cachecacheOffcore demand data requests satisfied by the LLCevent=0xb7,period=100000,umask=1,offcore_rsp=0x70300offcore_response.demand_data.local_cache_dramcacheOffcore demand data requests satisfied by the LLC or local DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x470300offcore_response.demand_data.remote_cachecacheOffcore demand data requests satisfied by a remote cacheevent=0xb7,period=100000,umask=1,offcore_rsp=0x180300offcore_response.demand_data.remote_cache_dramcacheOffcore demand data requests satisfied by a remote cache or remote DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x380300offcore_response.demand_data.remote_cache_hitcacheOffcore demand data requests that HIT in a remote cacheevent=0xb7,period=100000,umask=1,offcore_rsp=0x100300offcore_response.demand_data.remote_cache_hitmcacheOffcore demand data requests that HITM in a remote cacheevent=0xb7,period=100000,umask=1,offcore_rsp=0x80300offcore_response.demand_data_rd.any_cache_dramcacheOffcore demand data reads satisfied by any cache or DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x7F0100offcore_response.demand_data_rd.any_locationcacheAll offcore demand data readsevent=0xb7,period=100000,umask=1,offcore_rsp=0xFF0100offcore_response.demand_data_rd.io_csr_mmiocacheOffcore demand data reads satisfied by the IO, CSR, MMIO unitevent=0xb7,period=100000,umask=1,offcore_rsp=0x800100offcore_response.demand_data_rd.llc_hit_no_other_corecacheOffcore demand data reads satisfied by the LLC and not found in a sibling coreevent=0xb7,period=100000,umask=1,offcore_rsp=0x10100offcore_response.demand_data_rd.llc_hit_other_core_hitcacheOffcore demand data reads satisfied by the LLC and HIT in a sibling coreevent=0xb7,period=100000,umask=1,offcore_rsp=0x20100offcore_response.demand_data_rd.llc_hit_other_core_hitmcacheOffcore demand data reads satisfied by the LLC  and HITM in a sibling coreevent=0xb7,period=100000,umask=1,offcore_rsp=0x40100offcore_response.demand_data_rd.local_cachecacheOffcore demand data reads satisfied by the LLCevent=0xb7,period=100000,umask=1,offcore_rsp=0x70100offcore_response.demand_data_rd.local_cache_dramcacheOffcore demand data reads satisfied by the LLC or local DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x470100offcore_response.demand_data_rd.remote_cachecacheOffcore demand data reads satisfied by a remote 
cacheevent=0xb7,period=100000,umask=1,offcore_rsp=0x180100offcore_response.demand_data_rd.remote_cache_dramcacheOffcore demand data reads satisfied by a remote cache or remote DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x380100offcore_response.demand_data_rd.remote_cache_hitcacheOffcore demand data reads that HIT in a remote cacheevent=0xb7,period=100000,umask=1,offcore_rsp=0x100100offcore_response.demand_data_rd.remote_cache_hitmcacheOffcore demand data reads that HITM in a remote cacheevent=0xb7,period=100000,umask=1,offcore_rsp=0x80100offcore_response.demand_ifetch.any_cache_dramcacheOffcore demand code reads satisfied by any cache or DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x7F0400offcore_response.demand_ifetch.any_locationcacheAll offcore demand code readsevent=0xb7,period=100000,umask=1,offcore_rsp=0xFF0400offcore_response.demand_ifetch.io_csr_mmiocacheOffcore demand code reads satisfied by the IO, CSR, MMIO unitevent=0xb7,period=100000,umask=1,offcore_rsp=0x800400offcore_response.demand_ifetch.llc_hit_no_other_corecacheOffcore demand code reads satisfied by the LLC and not found in a sibling coreevent=0xb7,period=100000,umask=1,offcore_rsp=0x10400offcore_response.demand_ifetch.llc_hit_other_core_hitcacheOffcore demand code reads satisfied by the LLC and HIT in a sibling coreevent=0xb7,period=100000,umask=1,offcore_rsp=0x20400offcore_response.demand_ifetch.llc_hit_other_core_hitmcacheOffcore demand code reads satisfied by the LLC  and HITM in a sibling coreevent=0xb7,period=100000,umask=1,offcore_rsp=0x40400offcore_response.demand_ifetch.local_cachecacheOffcore demand code reads satisfied by the LLCevent=0xb7,period=100000,umask=1,offcore_rsp=0x70400offcore_response.demand_ifetch.local_cache_dramcacheOffcore demand code reads satisfied by the LLC or local DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x470400offcore_response.demand_ifetch.remote_cachecacheOffcore demand code reads satisfied by a remote cacheevent=0xb7,period=100000,umask=1,offcore_rsp=0x180400offcore_response.demand_ifetch.remote_cache_dramcacheOffcore demand code reads satisfied by a remote cache or remote DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x380400offcore_response.demand_ifetch.remote_cache_hitcacheOffcore demand code reads that HIT in a remote cacheevent=0xb7,period=100000,umask=1,offcore_rsp=0x100400offcore_response.demand_ifetch.remote_cache_hitmcacheOffcore demand code reads that HITM in a remote cacheevent=0xb7,period=100000,umask=1,offcore_rsp=0x80400offcore_response.demand_rfo.any_cache_dramcacheOffcore demand RFO requests satisfied by any cache or DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x7F0200offcore_response.demand_rfo.any_locationcacheAll offcore demand RFO requestsevent=0xb7,period=100000,umask=1,offcore_rsp=0xFF0200offcore_response.demand_rfo.io_csr_mmiocacheOffcore demand RFO requests satisfied by the IO, CSR, MMIO unitevent=0xb7,period=100000,umask=1,offcore_rsp=0x800200offcore_response.demand_rfo.llc_hit_no_other_corecacheOffcore demand RFO requests satisfied by the LLC and not found in a sibling coreevent=0xb7,period=100000,umask=1,offcore_rsp=0x10200offcore_response.demand_rfo.llc_hit_other_core_hitcacheOffcore demand RFO requests satisfied by the LLC and HIT in a sibling coreevent=0xb7,period=100000,umask=1,offcore_rsp=0x20200offcore_response.demand_rfo.llc_hit_other_core_hitmcacheOffcore demand RFO requests satisfied by the LLC  and HITM in a sibling 
coreevent=0xb7,period=100000,umask=1,offcore_rsp=0x40200offcore_response.demand_rfo.local_cachecacheOffcore demand RFO requests satisfied by the LLCevent=0xb7,period=100000,umask=1,offcore_rsp=0x70200offcore_response.demand_rfo.local_cache_dramcacheOffcore demand RFO requests satisfied by the LLC or local DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x470200offcore_response.demand_rfo.remote_cachecacheOffcore demand RFO requests satisfied by a remote cacheevent=0xb7,period=100000,umask=1,offcore_rsp=0x180200offcore_response.demand_rfo.remote_cache_dramcacheOffcore demand RFO requests satisfied by a remote cache or remote DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x380200offcore_response.demand_rfo.remote_cache_hitcacheOffcore demand RFO requests that HIT in a remote cacheevent=0xb7,period=100000,umask=1,offcore_rsp=0x100200offcore_response.demand_rfo.remote_cache_hitmcacheOffcore demand RFO requests that HITM in a remote cacheevent=0xb7,period=100000,umask=1,offcore_rsp=0x80200offcore_response.other.any_cache_dramcacheOffcore other requests satisfied by any cache or DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x7F8000offcore_response.other.any_locationcacheAll offcore other requestsevent=0xb7,period=100000,umask=1,offcore_rsp=0xFF8000offcore_response.other.io_csr_mmiocacheOffcore other requests satisfied by the IO, CSR, MMIO unitevent=0xb7,period=100000,umask=1,offcore_rsp=0x808000offcore_response.other.llc_hit_no_other_corecacheOffcore other requests satisfied by the LLC and not found in a sibling coreevent=0xb7,period=100000,umask=1,offcore_rsp=0x18000offcore_response.other.llc_hit_other_core_hitcacheOffcore other requests satisfied by the LLC and HIT in a sibling coreevent=0xb7,period=100000,umask=1,offcore_rsp=0x28000offcore_response.other.llc_hit_other_core_hitmcacheOffcore other requests satisfied by the LLC  and HITM in a sibling coreevent=0xb7,period=100000,umask=1,offcore_rsp=0x48000offcore_response.other.local_cachecacheOffcore other requests satisfied by the LLCevent=0xb7,period=100000,umask=1,offcore_rsp=0x78000offcore_response.other.local_cache_dramcacheOffcore other requests satisfied by the LLC or local DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x478000offcore_response.other.remote_cachecacheOffcore other requests satisfied by a remote cacheevent=0xb7,period=100000,umask=1,offcore_rsp=0x188000offcore_response.other.remote_cache_dramcacheOffcore other requests satisfied by a remote cache or remote DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x388000offcore_response.other.remote_cache_hitcacheOffcore other requests that HIT in a remote cacheevent=0xb7,period=100000,umask=1,offcore_rsp=0x108000offcore_response.other.remote_cache_hitmcacheOffcore other requests that HITM in a remote cacheevent=0xb7,period=100000,umask=1,offcore_rsp=0x88000offcore_response.pf_data.any_cache_dramcacheOffcore prefetch data requests satisfied by any cache or DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x7F3000offcore_response.pf_data.any_locationcacheAll offcore prefetch data requestsevent=0xb7,period=100000,umask=1,offcore_rsp=0xFF3000offcore_response.pf_data.io_csr_mmiocacheOffcore prefetch data requests satisfied by the IO, CSR, MMIO unitevent=0xb7,period=100000,umask=1,offcore_rsp=0x803000offcore_response.pf_data.llc_hit_no_other_corecacheOffcore prefetch data requests satisfied by the LLC and not found in a sibling coreevent=0xb7,period=100000,umask=1,offcore_rsp=0x13000offcore_response.pf_data.llc_hit_other_core_hitcacheOffcore prefetch data requests satisfied 
by the LLC and HIT in a sibling coreevent=0xb7,period=100000,umask=1,offcore_rsp=0x23000offcore_response.pf_data.llc_hit_other_core_hitmcacheOffcore prefetch data requests satisfied by the LLC  and HITM in a sibling coreevent=0xb7,period=100000,umask=1,offcore_rsp=0x43000offcore_response.pf_data.local_cachecacheOffcore prefetch data requests satisfied by the LLCevent=0xb7,period=100000,umask=1,offcore_rsp=0x73000offcore_response.pf_data.local_cache_dramcacheOffcore prefetch data requests satisfied by the LLC or local DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x473000offcore_response.pf_data.remote_cachecacheOffcore prefetch data requests satisfied by a remote cacheevent=0xb7,period=100000,umask=1,offcore_rsp=0x183000offcore_response.pf_data.remote_cache_dramcacheOffcore prefetch data requests satisfied by a remote cache or remote DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x383000offcore_response.pf_data.remote_cache_hitcacheOffcore prefetch data requests that HIT in a remote cacheevent=0xb7,period=100000,umask=1,offcore_rsp=0x103000offcore_response.pf_data.remote_cache_hitmcacheOffcore prefetch data requests that HITM in a remote cacheevent=0xb7,period=100000,umask=1,offcore_rsp=0x83000offcore_response.pf_data_rd.any_cache_dramcacheOffcore prefetch data reads satisfied by any cache or DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x7F1000offcore_response.pf_data_rd.any_locationcacheAll offcore prefetch data readsevent=0xb7,period=100000,umask=1,offcore_rsp=0xFF1000offcore_response.pf_data_rd.io_csr_mmiocacheOffcore prefetch data reads satisfied by the IO, CSR, MMIO unitevent=0xb7,period=100000,umask=1,offcore_rsp=0x801000offcore_response.pf_data_rd.llc_hit_no_other_corecacheOffcore prefetch data reads satisfied by the LLC and not found in a sibling coreevent=0xb7,period=100000,umask=1,offcore_rsp=0x11000offcore_response.pf_data_rd.llc_hit_other_core_hitcacheOffcore prefetch data reads satisfied by the LLC and HIT in a sibling coreevent=0xb7,period=100000,umask=1,offcore_rsp=0x21000offcore_response.pf_data_rd.llc_hit_other_core_hitmcacheOffcore prefetch data reads satisfied by the LLC  and HITM in a sibling coreevent=0xb7,period=100000,umask=1,offcore_rsp=0x41000offcore_response.pf_data_rd.local_cachecacheOffcore prefetch data reads satisfied by the LLCevent=0xb7,period=100000,umask=1,offcore_rsp=0x71000offcore_response.pf_data_rd.local_cache_dramcacheOffcore prefetch data reads satisfied by the LLC or local DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x471000offcore_response.pf_data_rd.remote_cachecacheOffcore prefetch data reads satisfied by a remote cacheevent=0xb7,period=100000,umask=1,offcore_rsp=0x181000offcore_response.pf_data_rd.remote_cache_dramcacheOffcore prefetch data reads satisfied by a remote cache or remote DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x381000offcore_response.pf_data_rd.remote_cache_hitcacheOffcore prefetch data reads that HIT in a remote cacheevent=0xb7,period=100000,umask=1,offcore_rsp=0x101000offcore_response.pf_data_rd.remote_cache_hitmcacheOffcore prefetch data reads that HITM in a remote cacheevent=0xb7,period=100000,umask=1,offcore_rsp=0x81000offcore_response.pf_ifetch.any_cache_dramcacheOffcore prefetch code reads satisfied by any cache or DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x7F4000offcore_response.pf_ifetch.any_locationcacheAll offcore prefetch code readsevent=0xb7,period=100000,umask=1,offcore_rsp=0xFF4000offcore_response.pf_ifetch.io_csr_mmiocacheOffcore prefetch code reads satisfied by the IO, CSR, MMIO 
unitevent=0xb7,period=100000,umask=1,offcore_rsp=0x804000offcore_response.pf_ifetch.llc_hit_no_other_corecacheOffcore prefetch code reads satisfied by the LLC and not found in a sibling coreevent=0xb7,period=100000,umask=1,offcore_rsp=0x14000offcore_response.pf_ifetch.llc_hit_other_core_hitcacheOffcore prefetch code reads satisfied by the LLC and HIT in a sibling coreevent=0xb7,period=100000,umask=1,offcore_rsp=0x24000offcore_response.pf_ifetch.llc_hit_other_core_hitmcacheOffcore prefetch code reads satisfied by the LLC  and HITM in a sibling coreevent=0xb7,period=100000,umask=1,offcore_rsp=0x44000offcore_response.pf_ifetch.local_cachecacheOffcore prefetch code reads satisfied by the LLCevent=0xb7,period=100000,umask=1,offcore_rsp=0x74000offcore_response.pf_ifetch.local_cache_dramcacheOffcore prefetch code reads satisfied by the LLC or local DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x474000offcore_response.pf_ifetch.remote_cachecacheOffcore prefetch code reads satisfied by a remote cacheevent=0xb7,period=100000,umask=1,offcore_rsp=0x184000offcore_response.pf_ifetch.remote_cache_dramcacheOffcore prefetch code reads satisfied by a remote cache or remote DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x384000offcore_response.pf_ifetch.remote_cache_hitcacheOffcore prefetch code reads that HIT in a remote cacheevent=0xb7,period=100000,umask=1,offcore_rsp=0x104000offcore_response.pf_ifetch.remote_cache_hitmcacheOffcore prefetch code reads that HITM in a remote cacheevent=0xb7,period=100000,umask=1,offcore_rsp=0x84000offcore_response.pf_rfo.any_cache_dramcacheOffcore prefetch RFO requests satisfied by any cache or DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x7F2000offcore_response.pf_rfo.any_locationcacheAll offcore prefetch RFO requestsevent=0xb7,period=100000,umask=1,offcore_rsp=0xFF2000offcore_response.pf_rfo.io_csr_mmiocacheOffcore prefetch RFO requests satisfied by the IO, CSR, MMIO unitevent=0xb7,period=100000,umask=1,offcore_rsp=0x802000offcore_response.pf_rfo.llc_hit_no_other_corecacheOffcore prefetch RFO requests satisfied by the LLC and not found in a sibling coreevent=0xb7,period=100000,umask=1,offcore_rsp=0x12000offcore_response.pf_rfo.llc_hit_other_core_hitcacheOffcore prefetch RFO requests satisfied by the LLC and HIT in a sibling coreevent=0xb7,period=100000,umask=1,offcore_rsp=0x22000offcore_response.pf_rfo.llc_hit_other_core_hitmcacheOffcore prefetch RFO requests satisfied by the LLC  and HITM in a sibling coreevent=0xb7,period=100000,umask=1,offcore_rsp=0x42000offcore_response.pf_rfo.local_cachecacheOffcore prefetch RFO requests satisfied by the LLCevent=0xb7,period=100000,umask=1,offcore_rsp=0x72000offcore_response.pf_rfo.local_cache_dramcacheOffcore prefetch RFO requests satisfied by the LLC or local DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x472000offcore_response.pf_rfo.remote_cachecacheOffcore prefetch RFO requests satisfied by a remote cacheevent=0xb7,period=100000,umask=1,offcore_rsp=0x182000offcore_response.pf_rfo.remote_cache_dramcacheOffcore prefetch RFO requests satisfied by a remote cache or remote DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x382000offcore_response.pf_rfo.remote_cache_hitcacheOffcore prefetch RFO requests that HIT in a remote cacheevent=0xb7,period=100000,umask=1,offcore_rsp=0x102000offcore_response.pf_rfo.remote_cache_hitmcacheOffcore prefetch RFO requests that HITM in a remote cacheevent=0xb7,period=100000,umask=1,offcore_rsp=0x82000offcore_response.prefetch.any_cache_dramcacheOffcore prefetch requests satisfied by any 
cache or DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x7F7000offcore_response.prefetch.any_locationcacheAll offcore prefetch requestsevent=0xb7,period=100000,umask=1,offcore_rsp=0xFF7000offcore_response.prefetch.io_csr_mmiocacheOffcore prefetch requests satisfied by the IO, CSR, MMIO unitevent=0xb7,period=100000,umask=1,offcore_rsp=0x807000offcore_response.prefetch.llc_hit_no_other_corecacheOffcore prefetch requests satisfied by the LLC and not found in a sibling coreevent=0xb7,period=100000,umask=1,offcore_rsp=0x17000offcore_response.prefetch.llc_hit_other_core_hitcacheOffcore prefetch requests satisfied by the LLC and HIT in a sibling coreevent=0xb7,period=100000,umask=1,offcore_rsp=0x27000offcore_response.prefetch.llc_hit_other_core_hitmcacheOffcore prefetch requests satisfied by the LLC  and HITM in a sibling coreevent=0xb7,period=100000,umask=1,offcore_rsp=0x47000offcore_response.prefetch.local_cachecacheOffcore prefetch requests satisfied by the LLCevent=0xb7,period=100000,umask=1,offcore_rsp=0x77000offcore_response.prefetch.local_cache_dramcacheOffcore prefetch requests satisfied by the LLC or local DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x477000offcore_response.prefetch.remote_cachecacheOffcore prefetch requests satisfied by a remote cacheevent=0xb7,period=100000,umask=1,offcore_rsp=0x187000offcore_response.prefetch.remote_cache_dramcacheOffcore prefetch requests satisfied by a remote cache or remote DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x387000offcore_response.prefetch.remote_cache_hitcacheOffcore prefetch requests that HIT in a remote cacheevent=0xb7,period=100000,umask=1,offcore_rsp=0x107000offcore_response.prefetch.remote_cache_hitmcacheOffcore prefetch requests that HITM in a remote cacheevent=0xb7,period=100000,umask=1,offcore_rsp=0x87000sq_misc.split_lockcacheSuper Queue lock splits across a cache lineevent=0xf4,period=2000000,umask=0x1000store_blocks.at_retcacheLoads delayed with at-Retirement block codeevent=6,period=200000,umask=400store_blocks.l1d_blockcacheCacheable loads delayed with L1D block codeevent=6,period=200000,umask=800fp_assist.allfloating pointX87 Floating point assists (Precise Event)event=0xf7,period=20000,umask=100fp_assist.inputfloating pointX87 Floating point assists for invalid input value (Precise Event)event=0xf7,period=20000,umask=400fp_assist.outputfloating pointX87 Floating point assists for invalid output value (Precise Event)event=0xf7,period=20000,umask=200fp_comp_ops_exe.mmxfloating pointMMX Uopsevent=0x10,period=2000000,umask=200fp_comp_ops_exe.sse2_integerfloating pointSSE2 integer Uopsevent=0x10,period=2000000,umask=800fp_comp_ops_exe.sse_double_precisionfloating pointSSE* FP double precision Uopsevent=0x10,period=2000000,umask=0x8000fp_comp_ops_exe.sse_fpfloating pointSSE and SSE2 FP Uopsevent=0x10,period=2000000,umask=400fp_comp_ops_exe.sse_fp_packedfloating pointSSE FP packed Uopsevent=0x10,period=2000000,umask=0x1000fp_comp_ops_exe.sse_fp_scalarfloating pointSSE FP scalar Uopsevent=0x10,period=2000000,umask=0x2000fp_comp_ops_exe.sse_single_precisionfloating pointSSE* FP single precision Uopsevent=0x10,period=2000000,umask=0x4000fp_comp_ops_exe.x87floating pointComputational floating-point operations executedevent=0x10,period=2000000,umask=100fp_mmx_trans.anyfloating pointAll Floating Point to and from MMX transitionsevent=0xcc,period=2000000,umask=300fp_mmx_trans.to_fpfloating pointTransitions from MMX to Floating Point instructionsevent=0xcc,period=2000000,umask=100fp_mmx_trans.to_mmxfloating pointTransitions 
from Floating Point to MMX instructionsevent=0xcc,period=2000000,umask=200simd_int_128.packfloating point128 bit SIMD integer pack operationsevent=0x12,period=200000,umask=400simd_int_128.packed_arithfloating point128 bit SIMD integer arithmetic operationsevent=0x12,period=200000,umask=0x2000simd_int_128.packed_logicalfloating point128 bit SIMD integer logical operationsevent=0x12,period=200000,umask=0x1000simd_int_128.packed_mpyfloating point128 bit SIMD integer multiply operationsevent=0x12,period=200000,umask=100simd_int_128.packed_shiftfloating point128 bit SIMD integer shift operationsevent=0x12,period=200000,umask=200simd_int_128.shuffle_movefloating point128 bit SIMD integer shuffle/move operationsevent=0x12,period=200000,umask=0x4000simd_int_128.unpackfloating point128 bit SIMD integer unpack operationsevent=0x12,period=200000,umask=800simd_int_64.packfloating pointSIMD integer 64 bit pack operationsevent=0xfd,period=200000,umask=400simd_int_64.packed_arithfloating pointSIMD integer 64 bit arithmetic operationsevent=0xfd,period=200000,umask=0x2000simd_int_64.packed_logicalfloating pointSIMD integer 64 bit logical operationsevent=0xfd,period=200000,umask=0x1000simd_int_64.packed_mpyfloating pointSIMD integer 64 bit packed multiply operationsevent=0xfd,period=200000,umask=100simd_int_64.packed_shiftfloating pointSIMD integer 64 bit shift operationsevent=0xfd,period=200000,umask=200simd_int_64.shuffle_movefloating pointSIMD integer 64 bit shuffle/move operationsevent=0xfd,period=200000,umask=0x4000simd_int_64.unpackfloating pointSIMD integer 64 bit unpack operationsevent=0xfd,period=200000,umask=800macro_insts.decodedfrontendInstructions decodedevent=0xd0,period=2000000,umask=100macro_insts.fusions_decodedfrontendMacro-fused instructions decodedevent=0xa6,period=2000000,umask=100two_uop_insts_decodedfrontendTwo Uop instructions decodedevent=0x19,period=2000000,umask=100offcore_response.any_data.any_drammemoryOffcore data reads satisfied by any DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x601100offcore_response.any_data.any_llc_missmemoryOffcore data reads that missed the LLCevent=0xb7,period=100000,umask=1,offcore_rsp=0xF81100offcore_response.any_data.local_drammemoryOffcore data reads satisfied by the local DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x401100offcore_response.any_data.remote_drammemoryOffcore data reads satisfied by a remote DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x201100offcore_response.any_ifetch.any_drammemoryOffcore code reads satisfied by any DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x604400offcore_response.any_ifetch.any_llc_missmemoryOffcore code reads that missed the LLCevent=0xb7,period=100000,umask=1,offcore_rsp=0xF84400offcore_response.any_ifetch.local_drammemoryOffcore code reads satisfied by the local DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x404400offcore_response.any_ifetch.remote_drammemoryOffcore code reads satisfied by a remote DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x204400offcore_response.any_request.any_drammemoryOffcore requests satisfied by any DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x60FF00offcore_response.any_request.any_llc_missmemoryOffcore requests that missed the LLCevent=0xb7,period=100000,umask=1,offcore_rsp=0xF8FF00offcore_response.any_request.local_drammemoryOffcore requests satisfied by the local DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x40FF00offcore_response.any_request.remote_drammemoryOffcore requests satisfied by a remote 
DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x20FF00offcore_response.any_rfo.any_drammemoryOffcore RFO requests satisfied by any DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x602200offcore_response.any_rfo.any_llc_missmemoryOffcore RFO requests that missed the LLCevent=0xb7,period=100000,umask=1,offcore_rsp=0xF82200offcore_response.any_rfo.local_drammemoryOffcore RFO requests satisfied by the local DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x402200offcore_response.any_rfo.remote_drammemoryOffcore RFO requests satisfied by a remote DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x202200offcore_response.corewb.any_drammemoryOffcore writebacks to any DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x600800offcore_response.corewb.any_llc_missmemoryOffcore writebacks that missed the LLCevent=0xb7,period=100000,umask=1,offcore_rsp=0xF80800offcore_response.corewb.local_drammemoryOffcore writebacks to the local DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x400800offcore_response.corewb.remote_drammemoryOffcore writebacks to a remote DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x200800offcore_response.data_ifetch.any_drammemoryOffcore code or data read requests satisfied by any DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x607700offcore_response.data_ifetch.any_llc_missmemoryOffcore code or data read requests that missed the LLCevent=0xb7,period=100000,umask=1,offcore_rsp=0xF87700offcore_response.data_ifetch.local_drammemoryOffcore code or data read requests satisfied by the local DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x407700offcore_response.data_ifetch.remote_drammemoryOffcore code or data read requests satisfied by a remote DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x207700offcore_response.data_in.any_drammemoryOffcore request = all data, response = any DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x603300offcore_response.data_in.any_llc_missmemoryOffcore request = all data, response = any LLC missevent=0xb7,period=100000,umask=1,offcore_rsp=0xF83300offcore_response.data_in.local_drammemoryOffcore data reads, RFOs, and prefetches satisfied by the local DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x403300offcore_response.data_in.remote_drammemoryOffcore data reads, RFOs, and prefetches satisfied by the remote DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x203300offcore_response.demand_data.any_drammemoryOffcore demand data requests satisfied by any DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x600300offcore_response.demand_data.any_llc_missmemoryOffcore demand data requests that missed the LLCevent=0xb7,period=100000,umask=1,offcore_rsp=0xF80300offcore_response.demand_data.local_drammemoryOffcore demand data requests satisfied by the local DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x400300offcore_response.demand_data.remote_drammemoryOffcore demand data requests satisfied by a remote DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x200300offcore_response.demand_data_rd.any_drammemoryOffcore demand data reads satisfied by any DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x600100offcore_response.demand_data_rd.any_llc_missmemoryOffcore demand data reads that missed the LLCevent=0xb7,period=100000,umask=1,offcore_rsp=0xF80100offcore_response.demand_data_rd.local_drammemoryOffcore demand data reads satisfied by the local DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x400100offcore_response.demand_data_rd.remote_drammemoryOffcore demand data reads satisfied by a remote 
DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x200100offcore_response.demand_ifetch.any_drammemoryOffcore demand code reads satisfied by any DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x600400offcore_response.demand_ifetch.any_llc_missmemoryOffcore demand code reads that missed the LLCevent=0xb7,period=100000,umask=1,offcore_rsp=0xF80400offcore_response.demand_ifetch.local_drammemoryOffcore demand code reads satisfied by the local DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x400400offcore_response.demand_ifetch.remote_drammemoryOffcore demand code reads satisfied by a remote DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x200400offcore_response.demand_rfo.any_drammemoryOffcore demand RFO requests satisfied by any DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x600200offcore_response.demand_rfo.any_llc_missmemoryOffcore demand RFO requests that missed the LLCevent=0xb7,period=100000,umask=1,offcore_rsp=0xF80200offcore_response.demand_rfo.local_drammemoryOffcore demand RFO requests satisfied by the local DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x400200offcore_response.demand_rfo.remote_drammemoryOffcore demand RFO requests satisfied by a remote DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x200200offcore_response.other.any_drammemoryOffcore other requests satisfied by any DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x608000offcore_response.other.any_llc_missmemoryOffcore other requests that missed the LLCevent=0xb7,period=100000,umask=1,offcore_rsp=0xF88000offcore_response.other.remote_drammemoryOffcore other requests satisfied by a remote DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x208000offcore_response.pf_data.any_drammemoryOffcore prefetch data requests satisfied by any DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x603000offcore_response.pf_data.any_llc_missmemoryOffcore prefetch data requests that missed the LLCevent=0xb7,period=100000,umask=1,offcore_rsp=0xF83000offcore_response.pf_data.local_drammemoryOffcore prefetch data requests satisfied by the local DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x403000offcore_response.pf_data.remote_drammemoryOffcore prefetch data requests satisfied by a remote DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x203000offcore_response.pf_data_rd.any_drammemoryOffcore prefetch data reads satisfied by any DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x601000offcore_response.pf_data_rd.any_llc_missmemoryOffcore prefetch data reads that missed the LLCevent=0xb7,period=100000,umask=1,offcore_rsp=0xF81000offcore_response.pf_data_rd.local_drammemoryOffcore prefetch data reads satisfied by the local DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x401000offcore_response.pf_data_rd.remote_drammemoryOffcore prefetch data reads satisfied by a remote DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x201000offcore_response.pf_ifetch.any_drammemoryOffcore prefetch code reads satisfied by any DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x604000offcore_response.pf_ifetch.any_llc_missmemoryOffcore prefetch code reads that missed the LLCevent=0xb7,period=100000,umask=1,offcore_rsp=0xF84000offcore_response.pf_ifetch.local_drammemoryOffcore prefetch code reads satisfied by the local DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x404000offcore_response.pf_ifetch.remote_drammemoryOffcore prefetch code reads satisfied by a remote DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x204000offcore_response.pf_rfo.any_drammemoryOffcore prefetch RFO requests satisfied by any 
DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x602000offcore_response.pf_rfo.any_llc_missmemoryOffcore prefetch RFO requests that missed the LLCevent=0xb7,period=100000,umask=1,offcore_rsp=0xF82000offcore_response.pf_rfo.local_drammemoryOffcore prefetch RFO requests satisfied by the local DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x402000offcore_response.pf_rfo.remote_drammemoryOffcore prefetch RFO requests satisfied by a remote DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x202000offcore_response.prefetch.any_drammemoryOffcore prefetch requests satisfied by any DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x607000offcore_response.prefetch.any_llc_missmemoryOffcore prefetch requests that missed the LLCevent=0xb7,period=100000,umask=1,offcore_rsp=0xF87000offcore_response.prefetch.local_drammemoryOffcore prefetch requests satisfied by the local DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x407000offcore_response.prefetch.remote_drammemoryOffcore prefetch requests satisfied by a remote DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x207000es_reg_renamesotherES segment renamesevent=0xd5,period=2000000,umask=100io_transactionsotherI/O transactionsevent=0x6c,period=2000000,umask=100l1i.cycles_stalledotherL1I instruction fetch stall cyclesevent=0x80,period=2000000,umask=400l1i.hitsotherL1I instruction fetch hitsevent=0x80,period=2000000,umask=100l1i.missesotherL1I instruction fetch missesevent=0x80,period=2000000,umask=200l1i.readsotherL1I Instruction fetchesevent=0x80,period=2000000,umask=300large_itlb.hitotherLarge ITLB hitevent=0x82,period=200000,umask=100load_dispatch.anyotherAll loads dispatchedevent=0x13,period=2000000,umask=700load_dispatch.mobotherLoads dispatched from the MOBevent=0x13,period=2000000,umask=400load_dispatch.rsotherLoads dispatched that bypass the MOBevent=0x13,period=2000000,umask=100load_dispatch.rs_delayedotherLoads dispatched from stage 305event=0x13,period=2000000,umask=200partial_address_aliasotherFalse dependencies due to partial address aliasingevent=7,period=200000,umask=100sb_drain.anyotherAll Store buffer stall cyclesevent=4,period=200000,umask=700seg_rename_stallsotherSegment rename stall cyclesevent=0xd4,period=2000000,umask=100snoop_response.hitotherThread responded HIT to snoopevent=0xb8,period=100000,umask=100snoop_response.hiteotherThread responded HITE to snoopevent=0xb8,period=100000,umask=200snoop_response.hitmotherThread responded HITM to snoopevent=0xb8,period=100000,umask=400sq_full_stall_cyclesotherSuper Queue full stall cyclesevent=0xf6,period=2000000,umask=100arith.cycles_div_busypipelineCycles the divider is busyevent=0x14,period=2000000,umask=100arith.divpipelineDivide operations executedevent=0x14,cmask=1,edge=1,inv=1,period=2000000,umask=100arith.mulpipelineMultiply operations executedevent=0x14,period=2000000,umask=200baclear.bad_targetpipelineBACLEAR asserted with bad target addressevent=0xe6,period=2000000,umask=200baclear.clearpipelineBACLEAR asserted, regardless of causeevent=0xe6,period=2000000,umask=100baclear_force_iqpipelineInstruction queue forced BACLEARevent=0xa7,period=2000000,umask=100bpu_clears.earlypipelineEarly Branch Prediction Unit clearsevent=0xe8,period=2000000,umask=100bpu_clears.latepipelineLate Branch Prediction Unit clearsevent=0xe8,period=2000000,umask=200bpu_missed_call_retpipelineBranch prediction unit missed call or returnevent=0xe5,period=2000000,umask=100br_inst_exec.anypipelineBranch instructions executedevent=0x88,period=200000,umask=0x7f00br_inst_exec.condpipelineConditional branch 
instructions executedevent=0x88,period=200000,umask=100br_inst_exec.directpipelineUnconditional branches executedevent=0x88,period=200000,umask=200br_inst_exec.direct_near_callpipelineUnconditional call branches executedevent=0x88,period=20000,umask=0x1000br_inst_exec.indirect_near_callpipelineIndirect call branches executedevent=0x88,period=20000,umask=0x2000br_inst_exec.indirect_non_callpipelineIndirect non call branches executedevent=0x88,period=20000,umask=400br_inst_exec.near_callspipelineCall branches executedevent=0x88,period=20000,umask=0x3000br_inst_exec.non_callspipelineAll non call branches executedevent=0x88,period=200000,umask=700br_inst_exec.return_nearpipelineIndirect return branches executedevent=0x88,period=20000,umask=800br_inst_exec.takenpipelineTaken branches executedevent=0x88,period=200000,umask=0x4000br_inst_retired.all_branchespipelineRetired branch instructions (Precise Event)event=0xc4,period=200000,umask=400br_inst_retired.conditionalpipelineRetired conditional branch instructions (Precise Event)event=0xc4,period=200000,umask=100br_inst_retired.near_callpipelineRetired near call instructions (Precise Event)event=0xc4,period=20000,umask=200br_misp_exec.anypipelineMispredicted branches executedevent=0x89,period=20000,umask=0x7f00br_misp_exec.condpipelineMispredicted conditional branches executedevent=0x89,period=20000,umask=100br_misp_exec.directpipelineMispredicted unconditional branches executedevent=0x89,period=20000,umask=200br_misp_exec.direct_near_callpipelineMispredicted non call branches executedevent=0x89,period=2000,umask=0x1000br_misp_exec.indirect_near_callpipelineMispredicted indirect call branches executedevent=0x89,period=2000,umask=0x2000br_misp_exec.indirect_non_callpipelineMispredicted indirect non call branches executedevent=0x89,period=2000,umask=400br_misp_exec.near_callspipelineMispredicted call branches executedevent=0x89,period=2000,umask=0x3000br_misp_exec.non_callspipelineMispredicted non call branches executedevent=0x89,period=20000,umask=700br_misp_exec.return_nearpipelineMispredicted return branches executedevent=0x89,period=2000,umask=800br_misp_exec.takenpipelineMispredicted taken branches executedevent=0x89,period=20000,umask=0x4000br_misp_retired.near_callpipelineMispredicted near retired calls (Precise Event)event=0xc5,period=2000,umask=200cpu_clk_unhalted.refpipelineReference cycles when thread is not halted (fixed counter)event=0x0,umask=0x03,period=200000300cpu_clk_unhalted.ref_ppipelineReference base clock (133 Mhz) cycles when thread is not halted (programmable counter)event=0x3c,period=100000,umask=100cpu_clk_unhalted.threadpipelineCycles when thread is not halted (fixed counter)event=0x3c,period=200000300cpu_clk_unhalted.thread_ppipelineCycles when thread is not halted (programmable counter)event=0x3c,period=200000000cpu_clk_unhalted.total_cyclespipelineTotal CPU cyclesevent=0x3c,cmask=2,inv=1,period=200000000ild_stall.anypipelineAny Instruction Length Decoder stall cyclesevent=0x87,period=2000000,umask=0xf00ild_stall.iq_fullpipelineInstruction Queue full stall cyclesevent=0x87,period=2000000,umask=400ild_stall.lcppipelineLength Change Prefix stall cyclesevent=0x87,period=2000000,umask=100ild_stall.mrupipelineStall cycles due to BPU MRU bypassevent=0x87,period=2000000,umask=200ild_stall.regenpipelineRegen stall cyclesevent=0x87,period=2000000,umask=800inst_decoded.dec0pipelineInstructions that must be decoded by decoder 0event=0x18,period=2000000,umask=100inst_queue_writespipelineInstructions written to instruction 
queueevent=0x17,period=2000000,umask=100inst_queue_write_cyclespipelineCycles instructions are written to the instruction queueevent=0x1e,period=2000000,umask=100inst_retired.anypipelineInstructions retired (fixed counter)event=0xc0,period=200000300inst_retired.any_ppipelineInstructions retired (Programmable counter and Precise Event) (Precise event)event=0xc0,period=200000300inst_retired.mmxpipelineRetired MMX instructions (Precise Event)event=0xc0,period=2000000,umask=400inst_retired.total_cyclespipelineTotal cycles (Precise Event)event=0xc0,cmask=16,inv=1,period=2000000,umask=100inst_retired.total_cycles_pspipelineTotal cycles (Precise Event)event=0xc0,cmask=16,inv=1,period=2000000,umask=100inst_retired.x87pipelineRetired floating-point operations (Precise Event)event=0xc0,period=2000000,umask=200load_hit_prepipelineLoad operations conflicting with software prefetchesevent=0x4c,period=200000,umask=100lsd.activepipelineCycles when uops were delivered by the LSDevent=0xa8,cmask=1,period=2000000,umask=100lsd.inactivepipelineCycles no uops were delivered by the LSDevent=0xa8,cmask=1,inv=1,period=2000000,umask=100lsd_overflowpipelineLoops that can't stream from the instruction queueevent=0x20,period=2000000,umask=100machine_clears.cyclespipelineCycles machine clear assertedevent=0xc3,period=20000,umask=100machine_clears.mem_orderpipelineExecution pipeline restart due to Memory ordering conflictsevent=0xc3,period=20000,umask=200machine_clears.smcpipelineSelf-Modifying Code detectedevent=0xc3,period=20000,umask=400rat_stalls.anypipelineAll RAT stall cyclesevent=0xd2,period=2000000,umask=0xf00rat_stalls.flagspipelineFlag stall cyclesevent=0xd2,period=2000000,umask=100rat_stalls.registerspipelinePartial register stall cyclesevent=0xd2,period=2000000,umask=200rat_stalls.rob_read_portpipelineROB read port stalls cyclesevent=0xd2,period=2000000,umask=400rat_stalls.scoreboardpipelineScoreboard stall cyclesevent=0xd2,period=2000000,umask=800resource_stalls.anypipelineResource related stall cyclesevent=0xa2,period=2000000,umask=100resource_stalls.fpcwpipelineFPU control word write stall cyclesevent=0xa2,period=2000000,umask=0x2000resource_stalls.loadpipelineLoad buffer stall cyclesevent=0xa2,period=2000000,umask=200resource_stalls.mxcsrpipelineMXCSR rename stall cyclesevent=0xa2,period=2000000,umask=0x4000resource_stalls.otherpipelineOther Resource related stall cyclesevent=0xa2,period=2000000,umask=0x8000resource_stalls.rob_fullpipelineROB full stall cyclesevent=0xa2,period=2000000,umask=0x1000resource_stalls.rs_fullpipelineReservation Station full stall cyclesevent=0xa2,period=2000000,umask=400resource_stalls.storepipelineStore buffer stall cyclesevent=0xa2,period=2000000,umask=800ssex_uops_retired.packed_doublepipelineSIMD Packed-Double Uops retired (Precise Event)event=0xc7,period=200000,umask=400ssex_uops_retired.packed_singlepipelineSIMD Packed-Single Uops retired (Precise Event)event=0xc7,period=200000,umask=100ssex_uops_retired.scalar_doublepipelineSIMD Scalar-Double Uops retired (Precise Event)event=0xc7,period=200000,umask=800ssex_uops_retired.scalar_singlepipelineSIMD Scalar-Single Uops retired (Precise Event)event=0xc7,period=200000,umask=200ssex_uops_retired.vector_integerpipelineSIMD Vector Integer Uops retired (Precise Event)event=0xc7,period=200000,umask=0x1000uops_decoded.esp_foldingpipelineStack pointer instructions decodedevent=0xd1,period=2000000,umask=400uops_decoded.esp_syncpipelineStack pointer sync 
operationsevent=0xd1,period=2000000,umask=800uops_decoded.ms_cycles_activepipelineUops decoded by Microcode Sequencerevent=0xd1,cmask=1,period=2000000,umask=200uops_decoded.stall_cyclespipelineCycles no Uops are decodedevent=0xd1,cmask=1,inv=1,period=2000000,umask=100uops_executed.core_active_cyclespipelineCycles Uops executed on any port (core count)event=0xb1,any=1,cmask=1,period=2000000,umask=0x3f00uops_executed.core_active_cycles_no_port5pipelineCycles Uops executed on ports 0-4 (core count)event=0xb1,any=1,cmask=1,period=2000000,umask=0x1f00uops_executed.core_stall_countpipelineUops executed on any port (core count)event=0xb1,any=1,cmask=1,edge=1,inv=1,period=2000000,umask=0x3f00uops_executed.core_stall_count_no_port5pipelineUops executed on ports 0-4 (core count)event=0xb1,any=1,cmask=1,edge=1,inv=1,period=2000000,umask=0x1f00uops_executed.core_stall_cyclespipelineCycles no Uops issued on any port (core count)event=0xb1,any=1,cmask=1,inv=1,period=2000000,umask=0x3f00uops_executed.core_stall_cycles_no_port5pipelineCycles no Uops issued on ports 0-4 (core count)event=0xb1,any=1,cmask=1,inv=1,period=2000000,umask=0x1f00uops_executed.port0pipelineUops executed on port 0event=0xb1,period=2000000,umask=100uops_executed.port015pipelineUops issued on ports 0, 1 or 5event=0xb1,period=2000000,umask=0x4000uops_executed.port015_stall_cyclespipelineCycles no Uops issued on ports 0, 1 or 5event=0xb1,cmask=1,inv=1,period=2000000,umask=0x4000uops_executed.port1pipelineUops executed on port 1event=0xb1,period=2000000,umask=200uops_executed.port234_corepipelineUops issued on ports 2, 3 or 4event=0xb1,any=1,period=2000000,umask=0x8000uops_executed.port2_corepipelineUops executed on port 2 (core count)event=0xb1,any=1,period=2000000,umask=400uops_executed.port3_corepipelineUops executed on port 3 (core count)event=0xb1,any=1,period=2000000,umask=800uops_executed.port4_corepipelineUops executed on port 4 (core count)event=0xb1,any=1,period=2000000,umask=0x1000uops_executed.port5pipelineUops executed on port 5event=0xb1,period=2000000,umask=0x2000uops_issued.anypipelineUops issuedevent=0xe,period=2000000,umask=100uops_issued.core_stall_cyclespipelineCycles no Uops were issued on any threadevent=0xe,any=1,cmask=1,inv=1,period=2000000,umask=100uops_issued.cycles_all_threadspipelineCycles Uops were issued on either threadevent=0xe,any=1,cmask=1,period=2000000,umask=100uops_issued.fusedpipelineFused Uops issuedevent=0xe,period=2000000,umask=200uops_issued.stall_cyclespipelineCycles no Uops were issuedevent=0xe,cmask=1,inv=1,period=2000000,umask=100uops_retired.active_cyclespipelineCycles Uops are being retired (Precise event)event=0xc2,cmask=1,period=2000000,umask=100uops_retired.anypipelineUops retired (Precise Event)event=0xc2,period=2000000,umask=100uops_retired.macro_fusedpipelineMacro-fused Uops retired (Precise Event)event=0xc2,period=2000000,umask=400uops_retired.retire_slotspipelineRetirement slots used (Precise Event)event=0xc2,period=2000000,umask=200uops_retired.stall_cyclespipelineCycles Uops are not retiring (Precise Event)event=0xc2,cmask=1,inv=1,period=2000000,umask=100uops_retired.total_cyclespipelineTotal cycles using precise uop retired event (Precise Event)event=0xc2,cmask=16,inv=1,period=2000000,umask=100uop_unfusionpipelineUop unfusions due to FP exceptionsevent=0xdb,period=2000000,umask=100dtlb_load_misses.anyvirtual memoryDTLB load missesevent=8,period=200000,umask=100dtlb_load_misses.pde_missvirtual memoryDTLB load miss caused by low part of 
addressevent=8,period=200000,umask=0x2000dtlb_load_misses.stlb_hitvirtual memoryDTLB second level hitevent=8,period=2000000,umask=0x1000dtlb_load_misses.walk_completedvirtual memoryDTLB load miss page walks completeevent=8,period=200000,umask=200dtlb_misses.anyvirtual memoryDTLB missesevent=0x49,period=200000,umask=100dtlb_misses.stlb_hitvirtual memoryDTLB first level misses but second level hitevent=0x49,period=200000,umask=0x1000dtlb_misses.walk_completedvirtual memoryDTLB miss page walksevent=0x49,period=200000,umask=200itlb_flushvirtual memoryITLB flushesevent=0xae,period=2000000,umask=100itlb_misses.anyvirtual memoryITLB missevent=0x85,period=200000,umask=100itlb_misses.walk_completedvirtual memoryITLB miss page walksevent=0x85,period=200000,umask=200itlb_miss_retiredvirtual memoryRetired instructions that missed the ITLB (Precise Event)event=0xc8,period=200000,umask=0x2000mem_load_retired.dtlb_missvirtual memoryRetired loads that miss the DTLB (Precise Event)event=0xcb,period=200000,umask=0x8000mem_store_retired.dtlb_missvirtual memoryRetired stores that miss the DTLB (Precise Event)event=0xc,period=200000,umask=100l2_rqsts.misscacheAll requests that miss L2 cacheevent=0x24,period=200003,umask=0x3f00Counts all requests that miss L2 cachel2_rqsts.referencescacheAll L2 requestsevent=0x24,period=200003,umask=0xff00Counts all L2 requestsunc_arb_dat_occupancy.alluncore interconnectEach cycle counts number of any coherent requests at memory controller that were issued by any coreevent=0x85,umask=101unc_arb_req_trk_occupancy.drduncore interconnectEach cycle counts number of valid coherent Data Read entries. Such entry is defined as valid when it is allocated until deallocation. Does not include prefetchesevent=0x80,umask=201unc_arb_trk_occupancy.alluncore interconnectEach cycle counts number of all outgoing valid entries in ReqTrk. Such entry is defined as valid from its allocation in ReqTrk until deallocation. Accounts for Coherent and non-coherent trafficevent=0x80,umask=101unc_arb_trk_occupancy.rduncore interconnectEach cycle counts number of valid coherent Data Read entries. Such entry is defined as valid when it is allocated until deallocation. Does not include prefetchesevent=0x80,umask=201unc_arb_trk_requests.rduncore interconnectCounts number of all coherent Data Read entries. Does not include prefetchesevent=0x81,umask=201mem_load_uops_llc_hit_retired.xsnp_hitcacheRetired load uops which data sources were LLC and cross-core snoop hits in on-pkg core cache. (Precise Event - PEBS) (Precise event)event=0xd2,period=20011,umask=200This event counts retired load uops that hit in the last-level cache (L3) and were found in a non-modified state in a neighboring core's private cache (same package).  Since the last level cache is inclusive, hits to the L3 may require snooping the private L2 caches of any cores on the same socket that have the line.  In this case, a snoop was required, and another L2 had the line in a non-modified state. (Precise Event - PEBS) (Precise event)mem_load_uops_llc_hit_retired.xsnp_hitmcacheRetired load uops which data sources were HitM responses from shared LLC. (Precise Event - PEBS) (Precise event)event=0xd2,period=20011,umask=400This event counts retired load uops that hit in the last-level cache (L3) and were found in a non-modified state in a neighboring core's private cache (same package).  Since the last level cache is inclusive, hits to the L3 may require snooping the private L2 caches of any cores on the same socket that have the line.  
mem_load_uops_llc_hit_retired.xsnp_miss (cache): Retired load uops which data sources were LLC hit and cross-core snoop missed in on-pkg core cache (Precise Event - PEBS) [event=0xd2,period=20011,umask=1]
mem_load_uops_llc_hit_retired.xsnp_none (cache): Retired load uops which data sources were hits in LLC without snoops required (Precise Event - PEBS) [event=0xd2,period=100003,umask=8]
mem_load_uops_misc_retired.llc_miss (cache): Retired load uops with unknown information as data source in cache serviced the load (Precise Event - PEBS) [event=0xd4,period=100007,umask=2]
    This event counts retired demand loads that missed the last-level (L3) cache. This means that the load is usually satisfied from memory in a client system, or possibly from the remote socket in a server. Demand loads are non-speculative load uops.
mem_load_uops_retired.hit_lfb (cache): Retired load uops that missed L1 but hit the fill buffer due to a preceding miss to the same cache line with data not ready (Precise Event - PEBS) [event=0xd1,period=100003,umask=0x40]
mem_load_uops_retired.l1_hit (cache): Retired load uops with L1 cache hits as data sources (Precise Event - PEBS) [event=0xd1,period=2000003,umask=1]
mem_load_uops_retired.l2_hit (cache): Retired load uops with L2 cache hits as data sources (Precise Event - PEBS) [event=0xd1,period=100003,umask=2]
mem_load_uops_retired.llc_hit (cache): Retired load uops which data sources were data hits in LLC without snoops required (Precise Event - PEBS) [event=0xd1,period=50021,umask=4]
    This event counts retired load uops that hit in the last-level (L3) cache without snoops required.
mem_uops_retired.all_loads (cache): All retired load uops (Precise Event - PEBS) [event=0xd0,period=2000003,umask=0x81]
    This event counts the number of load uops retired.
mem_uops_retired.all_stores (cache): All retired store uops (Precise Event - PEBS) [event=0xd0,period=2000003,umask=0x82]
    This event counts the number of store uops retired.
mem_uops_retired.lock_loads (cache): Retired load uops with locked access (Precise Event - PEBS) [event=0xd0,period=100007,umask=0x21]
mem_uops_retired.split_loads (cache): Retired load uops that split across a cacheline boundary (Precise Event - PEBS) [event=0xd0,period=100003,umask=0x41]
    This event counts line-split load uops retired to the architected path. A line split crosses a 64B cache-line boundary, which includes a page split (4K).
mem_uops_retired.split_stores (cache): Retired store uops that split across a cacheline boundary (Precise Event - PEBS) [event=0xd0,period=100003,umask=0x42]
    This event counts line-split store uops retired to the architected path. A line split crosses a 64B cache-line boundary, which includes a page split (4K).
mem_uops_retired.stlb_miss_loads (cache): Retired load uops that miss the STLB (Precise Event - PEBS) [event=0xd0,period=100003,umask=0x11]
mem_uops_retired.stlb_miss_stores (cache): Retired store uops that miss the STLB (Precise Event - PEBS) [event=0xd0,period=100003,umask=0x12]
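Ratio events such as split loads are normally read against the matching all-loads count. A small illustrative helper; the counter values below are hypothetical and only the arithmetic is shown:

    # Sketch: fraction of retired load uops that split a cache line,
    # from mem_uops_retired.split_loads / mem_uops_retired.all_loads.
    def split_load_fraction(split_loads: int, all_loads: int) -> float:
        return split_loads / all_loads if all_loads else 0.0

    print(f"{split_load_fraction(1_200, 9_800_000):.4%}")  # hypothetical counts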
offcore_requests.demand_code_rd (cache): Cacheable and noncacheable code read requests [event=0xb0,period=100003,umask=2]
offcore_response.all_code_rd.llc_hit.hitm_other_core (cache): Counts demand & prefetch code reads that hit in the LLC and the snoop to one of the sibling cores hits the line in M state and the line is forwarded [event=0xb7,period=100003,umask=1,offcore_rsp=0x10003c0244]
offcore_response.all_code_rd.llc_hit.snoop_miss (cache): Counts demand & prefetch code reads that hit in the LLC and the snoops sent to sibling cores return clean response [event=0xb7,period=100003,umask=1,offcore_rsp=0x2003c0244]
offcore_response.all_data_rd.llc_hit.snoop_miss (cache): Counts demand & prefetch data reads that hit in the LLC and the snoops sent to sibling cores return clean response [event=0xb7,period=100003,umask=1,offcore_rsp=0x2003c0091]
offcore_response.all_pf_code_rd.llc_hit.any_response (cache): Counts all prefetch code reads that hit in the LLC [event=0xb7,period=100003,umask=1,offcore_rsp=0x3f803c0240]
offcore_response.all_pf_code_rd.llc_hit.hitm_other_core (cache): Counts prefetch code reads that hit in the LLC and the snoop to one of the sibling cores hits the line in M state and the line is forwarded [event=0xb7,period=100003,umask=1,offcore_rsp=0x10003c0240]
offcore_response.all_pf_code_rd.llc_hit.hit_other_core_no_fwd (cache): Counts prefetch code reads that hit in the LLC and the snoops to sibling cores hit in either E/S state and the line is not forwarded [event=0xb7,period=100003,umask=1,offcore_rsp=0x4003c0240]
offcore_response.all_pf_code_rd.llc_hit.no_snoop_needed (cache): Counts prefetch code reads that hit in the LLC and sibling core snoops are not needed as either the core-valid bit is not set or the shared line is present in multiple cores [event=0xb7,period=100003,umask=1,offcore_rsp=0x1003c0240]
offcore_response.all_pf_code_rd.llc_hit.snoop_miss (cache): Counts prefetch code reads that hit in the LLC and the snoops sent to sibling cores return clean response [event=0xb7,period=100003,umask=1,offcore_rsp=0x2003c0240]
offcore_response.all_pf_data_rd.llc_hit.any_response (cache): Counts all prefetch data reads that hit in the LLC [event=0xb7,period=100003,umask=1,offcore_rsp=0x3f803c0090]
offcore_response.all_pf_data_rd.llc_hit.snoop_miss (cache): Counts prefetch data reads that hit in the LLC and the snoops sent to sibling cores return clean response [event=0xb7,period=100003,umask=1,offcore_rsp=0x2003c0090]
offcore_response.all_pf_rfo.llc_hit.any_response (cache): Counts all prefetch RFOs that hit in the LLC [event=0xb7,period=100003,umask=1,offcore_rsp=0x3f803c0120]
offcore_response.all_pf_rfo.llc_hit.hitm_other_core (cache): Counts prefetch RFOs that hit in the LLC and the snoop to one of the sibling cores hits the line in M state and the line is forwarded [event=0xb7,period=100003,umask=1,offcore_rsp=0x10003c0120]
offcore_response.all_pf_rfo.llc_hit.hit_other_core_no_fwd (cache): Counts prefetch RFOs that hit in the LLC and the snoops to sibling cores hit in either E/S state and the line is not forwarded [event=0xb7,period=100003,umask=1,offcore_rsp=0x4003c0120]
offcore_response.all_pf_rfo.llc_hit.no_snoop_needed (cache): Counts prefetch RFOs that hit in the LLC and sibling core snoops are not needed as either the core-valid bit is not set or the shared line is present in multiple cores [event=0xb7,period=100003,umask=1,offcore_rsp=0x1003c0120]
offcore_response.all_pf_rfo.llc_hit.snoop_miss (cache): Counts prefetch RFOs that hit in the LLC and the snoops sent to sibling cores return clean response [event=0xb7,period=100003,umask=1,offcore_rsp=0x2003c0120]
offcore_response.all_reads.any_response (cache): Counts all data/code/rfo references (demand & prefetch) [event=0xb7,period=100003,umask=1,offcore_rsp=0x000107f7]
offcore_response.all_reads.llc_hit.hitm_other_core (cache): Counts data/code/rfo reads (demand & prefetch) that hit in the LLC and the snoop to one of the sibling cores hits the line in M state and the line is forwarded [event=0xb7,period=100003,umask=1,offcore_rsp=0x10003c03f7]
offcore_response.all_reads.llc_hit.hit_other_core_no_fwd (cache): Counts data/code/rfo reads (demand & prefetch) that hit in the LLC and the snoops to sibling cores hit in either E/S state and the line is not forwarded [event=0xb7,period=100003,umask=1,offcore_rsp=0x4003c03f7]
offcore_response.all_reads.llc_hit.no_snoop_needed (cache): Counts data/code/rfo reads (demand & prefetch) that hit in the LLC and sibling core snoops are not needed as either the core-valid bit is not set or the shared line is present in multiple cores [event=0xb7,period=100003,umask=1,offcore_rsp=0x1003c03f7]
offcore_response.all_reads.llc_hit.snoop_miss (cache): Counts data/code/rfo reads (demand & prefetch) that hit in the LLC and the snoops sent to sibling cores return clean response [event=0xb7,period=100003,umask=1,offcore_rsp=0x2003c03f7]
offcore_response.all_rfo.any_response (cache): Counts all demand & prefetch RFOs [event=0xb7,period=100003,umask=1,offcore_rsp=0x00010122]
offcore_response.all_rfo.llc_hit.hitm_other_core (cache): Counts demand & prefetch RFOs that hit in the LLC and the snoop to one of the sibling cores hits the line in M state and the line is forwarded [event=0xb7,period=100003,umask=1,offcore_rsp=0x10003c0122]
offcore_response.all_rfo.llc_hit.hit_other_core_no_fwd (cache): Counts demand & prefetch RFOs that hit in the LLC and the snoops to sibling cores hit in either E/S state and the line is not forwarded [event=0xb7,period=100003,umask=1,offcore_rsp=0x4003c0122]
offcore_response.all_rfo.llc_hit.snoop_miss (cache): Counts demand & prefetch RFOs that hit in the LLC and the snoops sent to sibling cores return clean response [event=0xb7,period=100003,umask=1,offcore_rsp=0x2003c0122]
offcore_response.corewb.any_response (cache): OFFCORE_RESPONSE.COREWB.ANY_RESPONSE [event=0xb7,period=100003,umask=1,offcore_rsp=0x10008]
offcore_response.data_in.any_response (cache): REQUEST = DATA_INTO_CORE and RESPONSE = ANY_RESPONSE [event=0xb7,period=100003,umask=1,offcore_rsp=0x10433]
offcore_response.demand_code_rd.llc_hit.hitm_other_core (cache): Counts demand code reads that hit in the LLC and the snoop to one of the sibling cores hits the line in M state and the line is forwarded [event=0xb7,period=100003,umask=1,offcore_rsp=0x10003c0004]
offcore_response.demand_code_rd.llc_hit.hit_other_core_no_fwd (cache): Counts demand code reads that hit in the LLC and the snoops to sibling cores hit in either E/S state and the line is not forwarded [event=0xb7,period=100003,umask=1,offcore_rsp=0x4003c0004]
offcore_response.demand_code_rd.llc_hit.snoop_miss (cache): Counts demand code reads that hit in the LLC and the snoops sent to sibling cores return clean response [event=0xb7,period=100003,umask=1,offcore_rsp=0x2003c0004]
offcore_response.demand_data_rd.any_response (cache): Counts all demand data reads [event=0xb7,period=100003,umask=1,offcore_rsp=0x10001]
offcore_response.demand_data_rd.llc_hit.snoop_miss (cache): Counts demand data reads that hit in the LLC and the snoops sent to sibling cores return clean response [event=0xb7,period=100003,umask=1,offcore_rsp=0x2003c0001]
offcore_response.demand_rfo.any_response (cache): Counts all demand RFOs [event=0xb7,period=100003,umask=1,offcore_rsp=0x10002]
offcore_response.demand_rfo.llc_hit.hit_other_core_no_fwd (cache): Counts demand data writes (RFOs) that hit in the LLC and the snoops to sibling cores hit in either E/S state and the line is not forwarded [event=0xb7,period=100003,umask=1,offcore_rsp=0x4003c0002]
offcore_response.demand_rfo.llc_hit.snoop_miss (cache): Counts demand data writes (RFOs) that hit in the LLC and the snoops sent to sibling cores return clean response [event=0xb7,period=100003,umask=1,offcore_rsp=0x2003c0002]
offcore_response.demand_rfo.llc_hit_m.hitm (cache): REQUEST = DEMAND_RFO and RESPONSE = LLC_HIT_M and SNOOP = HITM [event=0xb7,period=100003,umask=1,offcore_rsp=0x1000040002]
offcore_response.other.portio_mmio_uc (cache): Counts miscellaneous accesses that include port I/O, MMIO and uncacheable memory accesses [event=0xb7,period=100003,umask=1,offcore_rsp=0x2380408000]
offcore_response.pf_ifetch.any_response (cache): REQUEST = PF_RFO and RESPONSE = ANY_RESPONSE [event=0xb7,period=100003,umask=1,offcore_rsp=0x10040]
offcore_response.pf_l2_code_rd.llc_hit.hitm_other_core (cache): Counts prefetch (that bring data to L2) code reads that hit in the LLC and the snoop to one of the sibling cores hits the line in M state and the line is forwarded [event=0xb7,period=100003,umask=1,offcore_rsp=0x10003c0040]
offcore_response.pf_l2_code_rd.llc_hit.hit_other_core_no_fwd (cache): Counts prefetch (that bring data to L2) code reads that hit in the LLC and the snoops to sibling cores hit in either E/S state and the line is not forwarded [event=0xb7,period=100003,umask=1,offcore_rsp=0x4003c0040]
offcore_response.pf_l2_code_rd.llc_hit.no_snoop_needed (cache): Counts prefetch (that bring data to L2) code reads that hit in the LLC and sibling core snoops are not needed as either the core-valid bit is not set or the shared line is present in multiple cores [event=0xb7,period=100003,umask=1,offcore_rsp=0x1003c0040]
offcore_response.pf_l2_code_rd.llc_hit.snoop_miss (cache): Counts prefetch (that bring data to L2) code reads that hit in the LLC and the snoops sent to sibling cores return clean response [event=0xb7,period=100003,umask=1,offcore_rsp=0x2003c0040]
offcore_response.pf_l2_data_rd.llc_hit.any_response (cache): Counts all prefetch (that bring data to L2) data reads that hit in the LLC [event=0xb7,period=100003,umask=1,offcore_rsp=0x3f803c0010]
offcore_response.pf_l2_rfo.llc_hit.any_response (cache): Counts all prefetch (that bring data to L2) RFOs that hit in the LLC [event=0xb7,period=100003,umask=1,offcore_rsp=0x3f803c0020]
offcore_response.pf_l2_rfo.llc_hit.hitm_other_core (cache): Counts prefetch (that bring data to L2) RFOs that hit in the LLC and the snoop to one of the sibling cores hits the line in M state and the line is forwarded [event=0xb7,period=100003,umask=1,offcore_rsp=0x10003c0020]
offcore_response.pf_l2_rfo.llc_hit.hit_other_core_no_fwd (cache): Counts prefetch (that bring data to L2) RFOs that hit in the LLC and the snoops to sibling cores hit in either E/S state and the line is not forwarded [event=0xb7,period=100003,umask=1,offcore_rsp=0x4003c0020]
offcore_response.pf_l2_rfo.llc_hit.no_snoop_needed (cache): Counts prefetch (that bring data to L2) RFOs that hit in the LLC and sibling core snoops are not needed as either the core-valid bit is not set or the shared line is present in multiple cores [event=0xb7,period=100003,umask=1,offcore_rsp=0x1003c0020]
offcore_response.pf_l2_rfo.llc_hit.snoop_miss (cache): Counts prefetch (that bring data to L2) RFOs that hit in the LLC and the snoops sent to sibling cores return clean response [event=0xb7,period=100003,umask=1,offcore_rsp=0x2003c0020]
offcore_response.pf_llc_code_rd.llc_hit.hitm_other_core (cache): Counts prefetch (that bring data to LLC only) code reads that hit in the LLC and the snoop to one of the sibling cores hits the line in M state and the line is forwarded [event=0xb7,period=100003,umask=1,offcore_rsp=0x10003c0200]
offcore_response.pf_llc_code_rd.llc_hit.hit_other_core_no_fwd (cache): Counts prefetch (that bring data to LLC only) code reads that hit in the LLC and the snoops to sibling cores hit in either E/S state and the line is not forwarded [event=0xb7,period=100003,umask=1,offcore_rsp=0x4003c0200]
offcore_response.pf_llc_code_rd.llc_hit.no_snoop_needed (cache): Counts prefetch (that bring data to LLC only) code reads that hit in the LLC and sibling core snoops are not needed as either the core-valid bit is not set or the shared line is present in multiple cores [event=0xb7,period=100003,umask=1,offcore_rsp=0x1003c0200]
offcore_response.pf_llc_code_rd.llc_hit.snoop_miss (cache): Counts prefetch (that bring data to LLC only) code reads that hit in the LLC and the snoops sent to sibling cores return clean response [event=0xb7,period=100003,umask=1,offcore_rsp=0x2003c0200]
offcore_response.pf_llc_data_rd.llc_hit.any_response (cache): Counts all prefetch (that bring data to LLC only) data reads that hit in the LLC [event=0xb7,period=100003,umask=1,offcore_rsp=0x3f803c0080]
offcore_response.pf_llc_rfo.llc_hit.any_response (cache): Counts all prefetch (that bring data to LLC only) RFOs that hit in the LLC [event=0xb7,period=100003,umask=1,offcore_rsp=0x3f803c0100]
offcore_response.pf_llc_rfo.llc_hit.hitm_other_core (cache): Counts prefetch (that bring data to LLC only) RFOs that hit in the LLC and the snoop to one of the sibling cores hits the line in M state and the line is forwarded [event=0xb7,period=100003,umask=1,offcore_rsp=0x10003c0100]
offcore_response.pf_llc_rfo.llc_hit.hit_other_core_no_fwd (cache): Counts prefetch (that bring data to LLC only) RFOs that hit in the LLC and the snoops to sibling cores hit in either E/S state and the line is not forwarded [event=0xb7,period=100003,umask=1,offcore_rsp=0x4003c0100]
offcore_response.pf_llc_rfo.llc_hit.no_snoop_needed (cache): Counts prefetch (that bring data to LLC only) RFOs that hit in the LLC and sibling core snoops are not needed as either the core-valid bit is not set or the shared line is present in multiple cores [event=0xb7,period=100003,umask=1,offcore_rsp=0x1003c0100]
offcore_response.pf_llc_rfo.llc_hit.snoop_miss (cache): Counts prefetch (that bring data to LLC only) RFOs that hit in the LLC and the snoops sent to sibling cores return clean response [event=0xb7,period=100003,umask=1,offcore_rsp=0x2003c0100]
offcore_response.pf_l_data_rd.any_response (cache): REQUEST = PF_LLC_DATA_RD and RESPONSE = ANY_RESPONSE [event=0xb7,period=100003,umask=1,offcore_rsp=0x10080]
offcore_response.pf_l_ifetch.any_response (cache): REQUEST = PF_LLC_IFETCH and RESPONSE = ANY_RESPONSE [event=0xb7,period=100003,umask=1,offcore_rsp=0x10200]
idq.ms_cycles (frontend): Cycles when uops are being delivered to Instruction Decode Queue (IDQ) while Microcode Sequencer (MS) is busy [event=0x79,cmask=1,period=2000003,umask=0x30]
    This event counts cycles during which the microcode sequencer assisted the front-end in delivering uops. Microcode assists are used for complex instructions or scenarios that can't be handled by the standard decoder. Using other instructions, if possible, will usually improve performance. See the Intel(R) 64 and IA-32 Architectures Optimization Reference Manual for more information.
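The low bits of each offcore_rsp value select the request types, so composite events are bitwise unions of their components: the all_code_rd request mask (0x244) in the entries above is the OR of demand_code_rd (0x004), pf_l2_code_rd (0x040), and pf_llc_code_rd (0x200). A quick check using the values from this table; the 16-bit request-field width is an assumption of this sketch:

    # Sketch: the request-type portion of a composite offcore_rsp value is
    # the union of the component masks (values taken from the entries above).
    DEMAND_CODE_RD = 0x10003C0004
    PF_L2_CODE_RD  = 0x10003C0040
    PF_LLC_CODE_RD = 0x10003C0200
    ALL_CODE_RD    = 0x10003C0244

    REQUEST_BITS = 0xFFFF  # assumed: low bits carry the request-type selection
    combined = (DEMAND_CODE_RD | PF_L2_CODE_RD | PF_LLC_CODE_RD) & REQUEST_BITS
    assert combined == ALL_CODE_RD & REQUEST_BITS
    print(hex(combined))  # -> 0x244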
offcore_response.all_pf_code_rd.llc_miss.dram (memory): Counts all prefetch code reads that miss the LLC and the data returned from DRAM [event=0xb7,period=100003,umask=1,offcore_rsp=0x300400240]
offcore_response.all_pf_data_rd.llc_miss.dram (memory): Counts all prefetch data reads that miss the LLC and the data returned from DRAM [event=0xb7,period=100003,umask=1,offcore_rsp=0x300400090]
offcore_response.all_pf_rfo.llc_miss.dram (memory): Counts all prefetch RFOs that miss the LLC and the data returned from DRAM [event=0xb7,period=100003,umask=1,offcore_rsp=0x300400120]
offcore_response.all_rfo.llc_miss.dram (memory): Counts all demand & prefetch RFOs that miss the LLC and the data returned from DRAM [event=0xb7,period=100003,umask=1,offcore_rsp=0x300400122]
offcore_response.any_request.llc_miss_local.dram (memory): REQUEST = ANY_REQUEST and RESPONSE = LLC_MISS_LOCAL and SNOOP = DRAM [event=0xb7,period=100003,umask=1,offcore_rsp=0x1f80408fff]
    This event counts any requests that miss the LLC where the data was returned from local DRAM.
offcore_response.data_in_socket.llc_miss.local_dram (memory): Counts LLC replacements [event=0xb7,period=100003,umask=1,offcore_rsp=0x6004001b3]
    This event counts all data requests (demand/prefetch data reads and demand data writes (RFOs)) that miss the LLC where the data is returned from local DRAM.
offcore_response.data_in_socket.llc_miss_local.any_llc_hit (memory): REQUEST = DATA_IN_SOCKET and RESPONSE = LLC_MISS_LOCAL and SNOOP = ANY_LLC_HIT [event=0xb7,period=100003,umask=1,offcore_rsp=0x17004001b3]
offcore_response.demand_ifetch.llc_miss_local.dram (memory): REQUEST = DEMAND_IFETCH and RESPONSE = LLC_MISS_LOCAL and SNOOP = DRAM [event=0xb7,period=100003,umask=1,offcore_rsp=0x1f80400004]
offcore_response.demand_rfo.llc_miss.dram (memory): Counts demand data writes (RFOs) that miss the LLC and the data returned from DRAM [event=0xb7,period=100003,umask=1,offcore_rsp=0x300400002]
offcore_response.pf_data_rd.llc_miss_local.dram (memory): REQUEST = PF_DATA_RD and RESPONSE = LLC_MISS_LOCAL and SNOOP = DRAM [event=0xb7,period=100003,umask=1,offcore_rsp=0x1f80400010]
offcore_response.pf_ifetch.llc_miss_local.dram (memory): REQUEST = PF_RFO and RESPONSE = LLC_MISS_LOCAL and SNOOP = DRAM [event=0xb7,period=100003,umask=1,offcore_rsp=0x1f80400040]
offcore_response.pf_l2_code_rd.llc_miss.dram (memory): Counts all prefetch (that bring data to L2) code reads that miss the LLC and the data returned from DRAM [event=0xb7,period=100003,umask=1,offcore_rsp=0x300400040]
offcore_response.pf_l2_data_rd.llc_miss.dram (memory): Counts prefetch (that bring data to L2) data reads that miss the LLC and the data returned from DRAM [event=0xb7,period=100003,umask=1,offcore_rsp=0x300400010]
offcore_response.pf_l2_rfo.llc_miss.dram (memory): Counts all prefetch (that bring data to L2) RFOs that miss the LLC and the data returned from DRAM [event=0xb7,period=100003,umask=1,offcore_rsp=0x300400020]
offcore_response.pf_llc_code_rd.llc_miss.dram (memory): Counts all prefetch (that bring data to LLC only) code reads that miss the LLC and the data returned from DRAM [event=0xb7,period=100003,umask=1,offcore_rsp=0x300400200]
offcore_response.pf_llc_data_rd.llc_miss.dram (memory): Counts all prefetch (that bring data to LLC only) data reads that miss the LLC and the data returned from DRAM [event=0xb7,period=100003,umask=1,offcore_rsp=0x300400080]
offcore_response.pf_llc_rfo.llc_miss.dram (memory): Counts all prefetch (that bring data to LLC only) RFOs that miss the LLC and the data returned from DRAM [event=0xb7,period=100003,umask=1,offcore_rsp=0x300400100]
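Each llc_miss.dram-style count corresponds to one cache line filled from DRAM, so a rough read-bandwidth estimate multiplies the count by the 64-byte line size referenced in the split-load entries earlier. A hedged sketch; the event total and interval are hypothetical:

    # Sketch: approximate DRAM read bandwidth from LLC-miss counts,
    # assuming one 64-byte cache line transferred per counted miss.
    CACHE_LINE_BYTES = 64

    def dram_read_bandwidth(llc_miss_dram_count: int, interval_s: float) -> float:
        """Bytes per second implied by an llc_miss.dram-style count."""
        return llc_miss_dram_count * CACHE_LINE_BYTES / interval_s

    print(f"{dram_read_bandwidth(50_000_000, 1.0) / 1e6:.1f} MB/s")  # hypothetical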
offcore_response.pf_l_data_rd.llc_miss_local.dram (memory): REQUEST = PF_LLC_DATA_RD and RESPONSE = LLC_MISS_LOCAL and SNOOP = DRAM [event=0xb7,period=100003,umask=1,offcore_rsp=0x1f80400080]
offcore_response.pf_l_ifetch.llc_miss_local.dram (memory): REQUEST = PF_LLC_IFETCH and RESPONSE = LLC_MISS_LOCAL and SNOOP = DRAM [event=0xb7,period=100003,umask=1,offcore_rsp=0x1f80400200]
page_walks.llc_miss (memory): Number of any page walk that had a miss in LLC. Does not necessarily cause a SUSPEND [event=0xbe,period=100003,umask=1]
br_inst_retired.conditional (pipeline): Conditional branch instructions retired (Precise Event - PEBS) [event=0xc4,period=400009,umask=1]
br_inst_retired.near_call (pipeline): Direct and indirect near call instructions retired (Precise Event - PEBS) [event=0xc4,period=100007,umask=2]
br_inst_retired.near_call_r3 (pipeline): Direct and indirect macro near call instructions retired, captured in ring 3 (Precise Event - PEBS) [event=0xc4,period=100007,umask=2]
br_inst_retired.near_return (pipeline): Return instructions retired (Precise Event - PEBS) [event=0xc4,period=100007,umask=8]
br_inst_retired.near_taken (pipeline): Taken branch instructions retired (Precise Event - PEBS) [event=0xc4,period=400009,umask=0x20]
br_misp_retired.conditional (pipeline): Mispredicted conditional branch instructions retired (Precise Event - PEBS) [event=0xc5,period=400009,umask=1]
br_misp_retired.near_call (pipeline): Direct and indirect mispredicted near call instructions retired (Precise Event - PEBS) [event=0xc5,period=100007,umask=2]
br_misp_retired.not_taken (pipeline): Mispredicted not-taken branch instructions retired (Precise Event - PEBS) [event=0xc5,period=400009,umask=0x10]
br_misp_retired.taken (pipeline): Mispredicted taken branch instructions retired (Precise Event - PEBS) [event=0xc5,period=400009,umask=0x20]
ld_blocks.store_forward (pipeline): Cases when loads get true Block-on-Store blocking code preventing store forwarding [event=3,period=100003,umask=2]
    This event counts loads that followed a store to the same address, where the data could not be forwarded inside the pipeline from the store to the load. The most common reason why store forwarding would be blocked is when a load's address range overlaps with a preceding smaller uncompleted store. See the table of not-supported store forwards in the Intel(R) 64 and IA-32 Architectures Optimization Reference Manual. The penalty for blocked store forwarding is that the load must wait for the store to complete before it can be issued.
partial_rat_stalls.flags_merge_uop_cycles (pipeline): Performance-sensitive flags-merging uops added by Sandy Bridge u-arch [event=0x59,cmask=1,period=2000003,umask=0x20]
    This event counts the number of cycles spent executing performance-sensitive flags-merging uops, for example shift CL (merge_arith_flags). For more details, see the Intel(R) 64 and IA-32 Architectures Optimization Reference Manual.
partial_rat_stalls.slow_lea_window (pipeline): Cycles with at least one slow LEA uop being allocated [event=0x59,period=2000003,umask=0x40]
    This event counts the number of cycles with at least one slow LEA uop being allocated. A uop is generally considered a slow LEA if it has three sources (for example, two sources and an immediate), regardless of whether it is the result of an LEA instruction or not. Examples of the slow LEA uop are uops with base, index, and offset source operands using base and index registers, where the base is EBP/RBP/R13, or uops using RIP-relative or 16-bit addressing modes. See the Intel(R) 64 and IA-32 Architectures Optimization Reference Manual for more details about slow LEA instructions.
uops_retired.all (pipeline): Actually retired uops (Precise Event - PEBS) [event=0xc2,period=2000003,umask=1]
    This event counts the number of micro-ops retired.
uops_retired.retire_slots (pipeline): Retirement slots used (Precise Event - PEBS) [event=0xc2,period=2000003,umask=2]
    This event counts the number of retirement slots used each cycle. There are potentially 4 slots that can be used each cycle, meaning 4 micro-ops or 4 instructions could retire each cycle. This event is used in determining the 'Retiring' category of the Top-Down pipeline slots characterization.
mem_load_l3_miss_retired.remote_pmm (cache): Retired load instructions with remote Intel(R) Optane(TM) DC persistent memory as the data source, where the data request missed all caches (Precise event) [event=0xd3,period=100007,umask=0x10]
    Counts retired load instructions with remote Intel(R) Optane(TM) DC persistent memory as the data source where the data request missed L3.
mem_load_retired.local_pmm (cache): Retired load instructions with local Intel(R) Optane(TM) DC persistent memory as the data source, where the data request missed all caches. Supports address when precise (Precise event) [event=0xd1,period=1000003,umask=0x80]
    Counts retired load instructions with local Intel(R) Optane(TM) DC persistent memory as the data source where the data request missed L3. Supports address when precise.
ocr.demand_data_rd.local_socket_pmm (other): Counts demand data reads that were supplied by PMM attached to this socket, whether or not in Sub NUMA Cluster (SNC) Mode. In SNC Mode, counts PMM accesses that are controlled by the close or distant SNC Cluster [event=0x2a,period=100003,umask=1,offcore_rsp=0x700C00001]
ocr.demand_data_rd.pmm (other): Counts demand data reads that were supplied by PMM [event=0x2a,period=100003,umask=1,offcore_rsp=0x703C00001]
ocr.demand_data_rd.remote_pmm (other): Counts demand data reads that were supplied by PMM attached to another socket [event=0x2a,period=100003,umask=1,offcore_rsp=0x703000001]
ocr.reads_to_core.local_socket_pmm (other): Counts all (cacheable) data read, code read and RFO requests, including demands and prefetches, to the core caches (L1 or L2) that were supplied by PMM attached to this socket, whether or not in Sub NUMA Cluster (SNC) Mode. In SNC Mode, counts PMM accesses that are controlled by the close or distant SNC Cluster [event=0x2a,period=100003,umask=1,offcore_rsp=0x700C04477]
ocr.reads_to_core.remote_pmm (other): Counts all (cacheable) data read, code read and RFO requests, including demands and prefetches, to the core caches (L1 or L2) that were supplied by PMM attached to another socket [event=0x2a,period=100003,umask=1,offcore_rsp=0x703004477]
unc_cha_tor_inserts.ia_miss_drd_opt_pref_remote (uncore cache): Inserts into the TOR from local IA cores which miss the LLC and snoop filter with the opcode DRD_PREF_OPT, and target remote memory [event=0x35,umask=0xc8a77e01]
    TOR Inserts: DRd_Opt_Prefs issued by iA Cores that missed the LLC.
unc_cha_tor_inserts.ia_miss_drd_opt_remote (uncore cache): Inserts into the TOR from local IA cores which miss the LLC and snoop filter with the opcode DRd_Opt, and target remote memory [event=0x35,umask=0xc8277e01]
    TOR Inserts: DRd_Opt issued by iA Cores that missed the LLC.
unc_b2hot_clockticks (uncore interconnect): UNC_B2HOT_CLOCKTICKS [event=1,umask=1]
core_reject_l2q.all (cache): Counts the number of requests that were not accepted into the L2Q because the L2Q is FULL [event=0x31,period=200003]
    Counts the number of (demand and L1 prefetcher) core requests rejected by the L2Q due to a full or nearly full condition, which likely indicates back pressure from the L2Q. It also counts requests that would have gone directly to the XQ but are rejected due to a full or nearly full condition, indicating back pressure from the IDI link. The L2Q may also reject transactions from a core to ensure fairness between cores, or to delay a core's dirty eviction when the address conflicts with incoming external snoops. (Note that L2 prefetcher requests that are dropped are not counted by this event.)
fetch_stall.icache_fill_pending_cycles (cache): Cycles code-fetch stalled due to an outstanding ICache miss [event=0x86,period=200003,umask=4]
    Counts cycles that fetch is stalled due to an outstanding ICache miss. That is, the decoder queue is able to accept bytes, but the fetch unit is unable to provide bytes due to an ICache miss. Note: this event is not the same as the total number of cycles spent retrieving instruction cache lines from the memory hierarchy.
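Because fetch_stall.icache_fill_pending_cycles is a cycles event, it is naturally reported as a share of unhalted core cycles. Illustrative arithmetic only, with hypothetical counter values:

    # Sketch: share of cycles the front end stalled on an ICache fill,
    # fetch_stall.icache_fill_pending_cycles / cpu_clk_unhalted.core_p.
    def icache_stall_share(stall_cycles: int, core_cycles: int) -> float:
        return stall_cycles / core_cycles if core_cycles else 0.0

    print(f"{icache_stall_share(3_100_000, 48_000_000):.1%}")  # hypothetical counts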
l2_reject_xq.all (cache): Counts the number of requests from the L2 that were not accepted into the XQ [event=0x30,period=200003]
    This event counts the number of demand and prefetch transactions that the L2 XQ rejects due to a full or near-full condition, which likely indicates back pressure from the IDI link. The XQ may reject transactions from the L2Q (non-cacheable requests), BBS (L2 misses) and WOB (L2 write-back victims).
longest_lat_cache.miss (cache): L2 cache request misses [event=0x2e,period=200003,umask=0x41]
    This event counts the total number of L2 cache misses.
longest_lat_cache.reference (cache): L2 cache requests from this core [event=0x2e,period=200003,umask=0x4f]
    This event counts requests originating from the core that reference a cache line in the L2 cache.
mem_uops_retired.all_loads (cache): All Loads [event=4,period=200003,umask=0x40]
    This event counts the number of load ops retired.
mem_uops_retired.all_stores (cache): All Stores [event=4,period=200003,umask=0x80]
    This event counts the number of store ops retired.
mem_uops_retired.hitm (cache): Cross-core or cross-module hitm (Precise event) [event=4,period=200003,umask=0x20]
    This event counts the number of load ops retired that got data from the other core or from the other module.
mem_uops_retired.l1_miss_loads (cache): Loads missed L1 [event=4,period=200003,umask=1]
    This event counts the number of load ops retired that miss in the L1 data cache. Note that prefetch misses will not be counted.
mem_uops_retired.l2_hit_loads (cache): Loads hit L2 (Precise event) [event=4,period=200003,umask=2]
    This event counts the number of load ops retired that hit in the L2.
mem_uops_retired.l2_miss_loads (cache): Loads missed L2 (Precise event) [event=4,period=100007,umask=4]
    This event counts the number of load ops retired that miss in the L2.
mem_uops_retired.utlb_miss (cache): Loads missed UTLB [event=4,period=200003,umask=0x10]
    This event counts the number of load ops retired that had a UTLB miss.
offcore_response (cache): Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefined mask bit values in a dedicated MSR to specify attributes of the offcore transaction [event=0xb7,period=100007,umask=1]
offcore_response.any_code_rd.any_response (cache): Counts any code reads (demand & prefetch) that have any response type [event=0xb7,period=100007,umask=1,offcore_rsp=0x0000010044]
offcore_response.any_code_rd.l2_miss.any (cache): Counts any code reads (demand & prefetch) that miss L2 [event=0xb7,period=100007,umask=1,offcore_rsp=0x1680000044]
offcore_response.any_code_rd.l2_miss.hitm_other_core (cache): Counts any code reads (demand & prefetch) that hit in the other module where modified copies were found in the other core's L1 cache [event=0xb7,period=100007,umask=1,offcore_rsp=0x1000000044]
offcore_response.any_code_rd.l2_miss.hit_other_core_no_fwd (cache): Counts any code reads (demand & prefetch) that miss L2 and the snoops to sibling cores hit in either E/S state and the line is not forwarded [event=0xb7,period=100007,umask=1,offcore_rsp=0x0400000044]
offcore_response.any_code_rd.l2_miss.snoop_miss (cache): Counts any code reads (demand & prefetch) that miss L2 with a snoop miss response [event=0xb7,period=100007,umask=1,offcore_rsp=0x0200000044]
offcore_response.any_data_rd.any_response (cache): Counts any data reads (demand & prefetch) that have any response type [event=0xb7,period=100007,umask=1,offcore_rsp=0x0000013091]
offcore_response.any_data_rd.l2_miss.any (cache): Counts any data reads (demand & prefetch) that miss L2 [event=0xb7,period=100007,umask=1,offcore_rsp=0x1680003091]
offcore_response.any_data_rd.l2_miss.hitm_other_core (cache): Counts any data reads (demand & prefetch) that hit in the other module where modified copies were found in the other core's L1 cache [event=0xb7,period=100007,umask=1,offcore_rsp=0x1000003091]
offcore_response.any_data_rd.l2_miss.hit_other_core_no_fwd (cache): Counts any data reads (demand & prefetch) that miss L2 and the snoops to sibling cores hit in either E/S state and the line is not forwarded [event=0xb7,period=100007,umask=1,offcore_rsp=0x0400003091]
offcore_response.any_data_rd.l2_miss.snoop_miss (cache): Counts any data reads (demand & prefetch) that miss L2 with a snoop miss response [event=0xb7,period=100007,umask=1,offcore_rsp=0x0200003091]
offcore_response.any_request.any_response (cache): Counts any request that has any response type [event=0xb7,period=100007,umask=1,offcore_rsp=0x0000018008]
offcore_response.any_request.l2_miss.hitm_other_core (cache): Counts any request that hit in the other module where modified copies were found in the other core's L1 cache [event=0xb7,period=100007,umask=1,offcore_rsp=0x1000008008]
offcore_response.any_request.l2_miss.hit_other_core_no_fwd (cache): Counts any request that misses L2 and the snoops to sibling cores hit in either E/S state and the line is not forwarded [event=0xb7,period=100007,umask=1,offcore_rsp=0x0400008008]
offcore_response.any_request.l2_miss.snoop_miss (cache): Counts any request that misses L2 with a snoop miss response [event=0xb7,period=100007,umask=1,offcore_rsp=0x0200008008]
offcore_response.any_rfo.any_response (cache): Counts any rfo reads (demand & prefetch) that have any response type [event=0xb7,period=100007,umask=1,offcore_rsp=0x0000010022]
offcore_response.any_rfo.l2_miss.any (cache): Counts any rfo reads (demand & prefetch) that miss L2 [event=0xb7,period=100007,umask=1,offcore_rsp=0x1680000022]
offcore_response.any_rfo.l2_miss.hitm_other_core (cache): Counts any rfo reads (demand & prefetch) that hit in the other module where modified copies were found in the other core's L1 cache [event=0xb7,period=100007,umask=1,offcore_rsp=0x1000000022]
offcore_response.any_rfo.l2_miss.hit_other_core_no_fwd (cache): Counts any rfo reads (demand & prefetch) that miss L2 and the snoops to sibling cores hit in either E/S state and the line is not forwarded [event=0xb7,period=100007,umask=1,offcore_rsp=0x0400000022]
offcore_response.any_rfo.l2_miss.snoop_miss (cache): Counts any rfo reads (demand & prefetch) that miss L2 with a snoop miss response [event=0xb7,period=100007,umask=1,offcore_rsp=0x0200000022]
offcore_response.corewb.l2_miss.any (cache): Counts writebacks (modified to exclusive) that miss L2 [event=0xb7,period=100007,umask=1,offcore_rsp=0x1680000008]
offcore_response.corewb.l2_miss.no_snoop_needed (cache): Counts writebacks (modified to exclusive) that miss L2 with no details on snoop-related information [event=0xb7,period=100007,umask=1,offcore_rsp=0x0080000008]
offcore_response.demand_code_rd.any_response (cache): Counts demand and DCU prefetch instruction cacheline reads that have any response type [event=0xb7,period=100007,umask=1,offcore_rsp=0x0000010004]
offcore_response.demand_code_rd.l2_miss.any (cache): Counts demand and DCU prefetch instruction cacheline reads that miss L2 [event=0xb7,period=100007,umask=1,offcore_rsp=0x1680000004]
offcore_response.demand_code_rd.l2_miss.hit_other_core_no_fwd (cache): Counts demand and DCU prefetch instruction cacheline reads that miss L2 and the snoops to sibling cores hit in either E/S state and the line is not forwarded [event=0xb7,period=100007,umask=1,offcore_rsp=0x0400000004]
offcore_response.demand_code_rd.l2_miss.snoop_miss (cache): Counts demand and DCU prefetch instruction cacheline reads that miss L2 with a snoop miss response [event=0xb7,period=100007,umask=1,offcore_rsp=0x0200000004]
offcore_response.demand_code_rd.outstanding (cache): Counts demand and DCU prefetch instruction cacheline reads that are outstanding, per cycle, from the time of the L2 miss to when any response is received [event=0xb7,period=100007,umask=1,offcore_rsp=0x4000000004]
offcore_response.demand_data_rd.any_response (cache): Counts demand and DCU prefetch data reads that have any response type [event=0xb7,period=100007,umask=1,offcore_rsp=0x0000010001]
offcore_response.demand_data_rd.l2_miss.any (cache): Counts demand and DCU prefetch data reads that miss L2 [event=0xb7,period=100007,umask=1,offcore_rsp=0x1680000001]
offcore_response.demand_data_rd.l2_miss.hitm_other_core (cache): Counts demand and DCU prefetch data reads that hit in the other module where modified copies were found in the other core's L1 cache [event=0xb7,period=100007,umask=1,offcore_rsp=0x1000000001]
offcore_response.demand_data_rd.l2_miss.hit_other_core_no_fwd (cache): Counts demand and DCU prefetch data reads that miss L2 and the snoops to sibling cores hit in either E/S state and the line is not forwarded [event=0xb7,period=100007,umask=1,offcore_rsp=0x0400000001]
offcore_response.demand_data_rd.l2_miss.snoop_miss (cache): Counts demand and DCU prefetch data reads that miss L2 with a snoop miss response [event=0xb7,period=100007,umask=1,offcore_rsp=0x0200000001]
offcore_response.demand_data_rd.outstanding (cache): Counts demand and DCU prefetch data reads that are outstanding, per cycle, from the time of the L2 miss to when any response is received [event=0xb7,period=100007,umask=1,offcore_rsp=0x4000000001]
offcore_response.demand_rfo.l2_miss.any (cache): Counts demand and DCU prefetch RFOs that miss L2 [event=0xb7,period=100007,umask=1,offcore_rsp=0x1680000002]
offcore_response.demand_rfo.l2_miss.hitm_other_core (cache): Counts demand and DCU prefetch RFOs that hit in the other module where modified copies were found in the other core's L1 cache [event=0xb7,period=100007,umask=1,offcore_rsp=0x1000000002]
offcore_response.demand_rfo.l2_miss.hit_other_core_no_fwd (cache): Counts demand and DCU prefetch RFOs that miss L2 and the snoops to sibling cores hit in either E/S state and the line is not forwarded [event=0xb7,period=100007,umask=1,offcore_rsp=0x0400000002]
offcore_response.demand_rfo.l2_miss.snoop_miss (cache): Counts demand and DCU prefetch RFOs that miss L2 with a snoop miss response [event=0xb7,period=100007,umask=1,offcore_rsp=0x0200000002]
offcore_response.demand_rfo.outstanding (cache): Counts demand and DCU prefetch RFOs that are outstanding, per cycle, from the time of the L2 miss to when any response is received [event=0xb7,period=100007,umask=1,offcore_rsp=0x4000000002]
offcore_response.partial_reads.l2_miss.any (cache): Counts demand reads of partial cache lines (including UC and WC) that miss L2 [event=0xb7,period=100007,umask=1,offcore_rsp=0x1680000080]
offcore_response.partial_writes.l2_miss.any (cache): Counts demand RFO requests to write to partial cache lines that miss L2 [event=0xb7,period=100007,umask=1,offcore_rsp=0x1680000100]
offcore_response.pf_l1_data_rd.any_response (cache): Counts DCU hardware prefetcher data reads that have any response type [event=0xb7,period=100007,umask=1,offcore_rsp=0x0000012000]
offcore_response.pf_l1_data_rd.l2_miss.any (cache): Counts DCU hardware prefetcher data reads that miss L2 [event=0xb7,period=100007,umask=1,offcore_rsp=0x1680002000]
offcore_response.pf_l1_data_rd.l2_miss.hitm_other_core (cache): Counts DCU hardware prefetcher data reads that hit in the other module where modified copies were found in the other core's L1 cache [event=0xb7,period=100007,umask=1,offcore_rsp=0x1000002000]
offcore_response.pf_l1_data_rd.l2_miss.hit_other_core_no_fwd (cache): Counts DCU hardware prefetcher data reads that miss L2 and the snoops to sibling cores hit in either E/S state and the line is not forwarded [event=0xb7,period=100007,umask=1,offcore_rsp=0x0400002000]
offcore_response.pf_l1_data_rd.l2_miss.snoop_miss (cache): Counts DCU hardware prefetcher data reads that miss L2 with a snoop miss response [event=0xb7,period=100007,umask=1,offcore_rsp=0x0200002000]
offcore_response.pf_l2_code_rd.l2_miss.any (cache): Counts code reads generated by L2 prefetchers that miss L2 [event=0xb7,period=100007,umask=1,offcore_rsp=0x1680000040]
offcore_response.pf_l2_code_rd.l2_miss.hit_other_core_no_fwd (cache): Counts code reads generated by L2 prefetchers that miss L2 and the snoops to sibling cores hit in either E/S state and the line is not forwarded [event=0xb7,period=100007,umask=1,offcore_rsp=0x0400000040]
offcore_response.pf_l2_code_rd.l2_miss.snoop_miss (cache): Counts code reads generated by L2 prefetchers that miss L2 with a snoop miss response [event=0xb7,period=100007,umask=1,offcore_rsp=0x0200000040]
offcore_response.pf_l2_data_rd.l2_miss.any (cache): Counts data cacheline reads generated by L2 prefetchers that miss L2 [event=0xb7,period=100007,umask=1,offcore_rsp=0x1680000010]
offcore_response.pf_l2_data_rd.l2_miss.hitm_other_core (cache): Counts data cacheline reads generated by L2 prefetchers that hit in the other module where modified copies were found in the other core's L1 cache [event=0xb7,period=100007,umask=1,offcore_rsp=0x1000000010]
offcore_response.pf_l2_data_rd.l2_miss.hit_other_core_no_fwd (cache): Counts data cacheline reads generated by L2 prefetchers that miss L2 and the snoops to sibling cores hit in either E/S state and the line is not forwarded [event=0xb7,period=100007,umask=1,offcore_rsp=0x0400000010]
offcore_response.pf_l2_data_rd.l2_miss.snoop_miss (cache): Counts data cacheline reads generated by L2 prefetchers that miss L2 with a snoop miss response [event=0xb7,period=100007,umask=1,offcore_rsp=0x0200000010]
offcore_response.pf_l2_rfo.l2_miss.any (cache): Counts RFO requests generated by L2 prefetchers that miss L2 [event=0xb7,period=100007,umask=1,offcore_rsp=0x1680000020]
offcore_response.pf_l2_rfo.l2_miss.hitm_other_core (cache): Counts RFO requests generated by L2 prefetchers that hit in the other module where modified copies were found in the other core's L1 cache [event=0xb7,period=100007,umask=1,offcore_rsp=0x1000000020]
offcore_response.pf_l2_rfo.l2_miss.hit_other_core_no_fwd (cache): Counts RFO requests generated by L2 prefetchers that miss L2 and the snoops to sibling cores hit in either E/S state and the line is not forwarded [event=0xb7,period=100007,umask=1,offcore_rsp=0x0400000020]
offcore_response.pf_l2_rfo.l2_miss.snoop_miss (cache): Counts RFO requests generated by L2 prefetchers that miss L2 with a snoop miss response [event=0xb7,period=100007,umask=1,offcore_rsp=0x0200000020]
offcore_response.streaming_stores.l2_miss.any (cache): Counts streaming stores that miss L2 [event=0xb7,period=100007,umask=1,offcore_rsp=0x1680004800]
rehabq.any_ld (cache): Any reissued load uops [event=3,period=200003,umask=0x40]
    This event counts the number of load uops reissued from Rehabq.
rehabq.any_st (cache): Any reissued store uops [event=3,period=200003,umask=0x80]
    This event counts the number of store uops reissued from Rehabq.
rehabq.ld_block_std_notready (cache): Loads blocked due to store data not ready [event=3,period=200003,umask=2]
    This event counts the cases where a forward was technically possible, but did not occur because the store data was not available at the right time.
rehabq.ld_block_st_forward (cache): Loads blocked due to store forward restriction (Precise event) [event=3,period=200003,umask=1]
    This event counts the number of retired loads that were prohibited from receiving forwarded data from a store because of address mismatch.
rehabq.ld_splits (cache): Load uops that split a cache line boundary (Precise event) [event=3,period=200003,umask=8]
    This event counts the number of retired loads that experienced cache line boundary splits.
rehabq.lock (cache): Uops with lock semantics [event=3,period=200003,umask=0x10]
    This event counts the number of retired memory operations with lock semantics. These are either implicitly locked instructions, such as the XCHG instruction, or instructions with an explicit LOCK prefix (0xF0).
rehabq.sta_full (cache): Store address buffer full [event=3,period=200003,umask=0x20]
    This event counts the number of retired stores that are delayed because there is not a store address buffer available.
rehabq.st_splits (cache): Store uops that split a cache line boundary [event=3,period=200003,umask=4]
    This event counts the number of retired stores that experienced cache line boundary splits.
machine_clears.fp_assist (floating point): Stalls due to FP assists [event=0xc3,period=200003,umask=4]
    This event counts the number of times that the pipeline stalled due to FP operations needing assists.
baclears.all (frontend): Counts the number of baclears [event=0xe6,period=200003,umask=1]
    The BACLEARS event counts the number of times the front end is resteered, mainly when the Branch Prediction Unit cannot provide a correct prediction and this is corrected by the Branch Address Calculator at the front end. BACLEARS.ANY counts baclears for any type of branch.
baclears.cond (frontend): Counts the number of JCC baclears [event=0xe6,period=200003,umask=0x10]
    Same resteer mechanism as BACLEARS.ANY above; BACLEARS.COND counts the number of JCC (Jump on Conditional Code) baclears.
baclears.return (frontend): Counts the number of RETURN baclears [event=0xe6,period=200003,umask=8]
    Same resteer mechanism as BACLEARS.ANY above; BACLEARS.RETURN counts the number of RETURN baclears.
decode_restriction.predecode_wrong (frontend): Counts the number of times a decode restriction reduced the decode throughput due to a wrong instruction length prediction [event=0xe9,period=200003,umask=1]
icache.accesses (frontend): Instruction fetches [event=0x80,period=200003,umask=3]
    This event counts all instruction fetches, not including most uncacheable fetches.
icache.hit (frontend): Instruction fetches from Icache [event=0x80,period=200003,umask=1]
    This event counts all instruction fetches from the instruction cache.
icache.misses (frontend): Icache miss [event=0x80,period=200003,umask=2]
    This event counts all instruction fetches that miss the instruction cache or produce memory requests. This includes uncacheable fetches. An instruction fetch miss is counted only once and not once for every cycle it is outstanding.
ms_decoded.ms_entry (frontend): Counts the number of times entered into a ucode flow in the FEC. Includes inserted flows due to front-end detected faults or assists. Speculative count [event=0xe7,period=200003,umask=1]
    Counts the number of times the MSROM starts a flow of UOPS. It does not count every time a UOP is read from the microcode ROM. The most common case that this counts is when a micro-coded instruction is encountered by the front end of the machine. Other cases include when an instruction encounters a fault, trap, or microcode assist of any sort. The event will count MSROM startups for UOPS that are speculative and subsequently cleared by a branch mispredict or machine clear. Background: UOPS are produced by two mechanisms. Either they are generated by hardware that decodes instructions into UOPS, or they are delivered by a ROM (called the MSROM) that holds UOPS associated with a specific instruction. MSROM UOPS might also be delivered in response to some condition such as a fault or other exceptional condition. This event is an excellent mechanism for detecting instructions that require the use of MSROM flows.
machine_clears.memory_ordering (memory): Stalls due to memory ordering [event=0xc3,period=200003,umask=2]
    This event counts the number of times that the pipeline was cleared due to memory ordering issues.
fetch_stall.all (other): Cycles code-fetch stalled due to any reason [event=0x86,period=200003,umask=0x3f]
    Counts cycles that fetch is stalled due to any reason. That is, the decoder queue is able to accept bytes, but the fetch unit is unable to provide bytes. This will include cycles due to an ITLB miss, ICache miss and other events.
fetch_stall.itlb_fill_pending_cycles (other): Cycles code-fetch stalled due to an outstanding ITLB miss [event=0x86,period=200003,umask=2]
    Counts cycles that fetch is stalled due to an outstanding ITLB miss. That is, the decoder queue is able to accept bytes, but the fetch unit is unable to provide bytes due to an ITLB miss. Note: this event is not the same as page walk cycles to retrieve an instruction translation.
br_inst_retired.all_branches (pipeline): Counts the number of branch instructions retired (Precise event) [event=0xc4,period=200003]
    ALL_BRANCHES counts the number of any branch instructions retired. Branch prediction predicts the branch target and enables the processor to begin executing instructions long before the branch's true execution path is known. All branches utilize the branch prediction unit (BPU) for prediction. This unit predicts the target address not only based on the EIP of the branch but also based on the execution path through which execution reached this EIP. The BPU can efficiently predict the following branch types: conditional branches, direct calls and jumps, indirect calls and jumps, returns. (The br_inst_retired.* events below share this branch-prediction note.)
br_inst_retired.all_taken_branches (pipeline): Counts the number of taken branch instructions retired (Must be precise) [event=0xc4,period=200003,umask=0x80]
    ALL_TAKEN_BRANCHES counts the number of all taken branch instructions retired.
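ms_decoded.ms_entry above counts MSROM flow startups, so it is usually normalized, for example per thousand retired uops (uops_retired.any earlier in this table). A sketch of that arithmetic with hypothetical counts:

    # Sketch: MSROM entries per thousand retired uops, using
    # ms_decoded.ms_entry and uops_retired.any counter values.
    def ms_entries_per_kuops(ms_entries: int, uops_retired: int) -> float:
        return 1000.0 * ms_entries / uops_retired if uops_retired else 0.0

    print(f"{ms_entries_per_kuops(42_000, 90_000_000):.2f}")  # hypothetical counts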
br_inst_retired.call (pipeline): Counts the number of near CALL branch instructions retired (Precise event) [event=0xc4,period=200003,umask=0xf9]
br_inst_retired.far_branch (pipeline): Counts the number of far branch instructions retired (Precise event) [event=0xc4,period=200003,umask=0xbf]
br_inst_retired.ind_call (pipeline): Counts the number of near indirect CALL branch instructions retired (Precise event) [event=0xc4,period=200003,umask=0xfb]
br_inst_retired.jcc (pipeline): Counts the number of conditional branch (JCC) instructions retired (Precise event) [event=0xc4,period=200003,umask=0x7e]
br_inst_retired.non_return_ind (pipeline): Counts the number of near indirect JMP and near indirect CALL branch instructions retired (Precise event) [event=0xc4,period=200003,umask=0xeb]
br_inst_retired.rel_call (pipeline): Counts the number of near relative CALL branch instructions retired (Precise event) [event=0xc4,period=200003,umask=0xfd]
br_inst_retired.return (pipeline): Counts the number of near RET branch instructions retired (Precise event) [event=0xc4,period=200003,umask=0xf7]
br_inst_retired.taken_jcc (pipeline): Counts the number of taken conditional branch (JCC) instructions retired (Precise event) [event=0xc4,period=200003,umask=0xfe]
br_misp_retired.all_branches (pipeline): Counts the number of mispredicted branch instructions retired (Precise event) [event=0xc5,period=200003]
    ALL_BRANCHES counts the number of any mispredicted branch instructions retired; this umask is an architecturally defined event. This event counts the number of retired branch instructions that were mispredicted by the processor, categorized by type. A branch misprediction occurs when the processor predicts that the branch would be taken, but it is not, or vice-versa. When the misprediction is discovered, all the instructions executed in the wrong (speculative) path must be discarded, and the processor must start fetching from the correct path. (The br_misp_retired.* events below share this misprediction note.)
br_misp_retired.all_branches [pipeline]
  Counts the number of mispredicted branch instructions retired; this umask is an architecturally defined event
  event=0xc5,period=200003

This event counts the number of retired branch instructions that were mispredicted by the processor, categorized by type. A branch misprediction occurs when the processor predicts that the branch will be taken but it is not, or vice versa. When the misprediction is discovered, all the instructions executed down the wrong (speculative) path must be discarded, and the processor must start fetching from the correct path. The br_misp_retired.* events below are Precise events and share this description.

br_misp_retired.ind_call [pipeline]
  Counts the number of mispredicted near indirect CALL branch instructions retired
  event=0xc5,period=200003,umask=0xfb

br_misp_retired.jcc [pipeline]
  Counts the number of mispredicted conditional branch (JCC) instructions retired
  event=0xc5,period=200003,umask=0x7e

br_misp_retired.non_return_ind [pipeline]
  Counts the number of mispredicted near indirect JMP and near indirect CALL branch instructions retired
  event=0xc5,period=200003,umask=0xeb

br_misp_retired.return [pipeline]
  Counts the number of mispredicted near RET branch instructions retired
  event=0xc5,period=200003,umask=0xf7

br_misp_retired.taken_jcc [pipeline]
  Counts the number of mispredicted taken conditional branch (JCC) instructions retired
  event=0xc5,period=200003,umask=0xfe
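Taken together with the br_inst_retired.* counts above, these events yield the usual misprediction-rate ratio. A small sketch; the two input values are illustrative numbers, not measurements:

    #include <stdint.h>
    #include <stdio.h>

    /* Fraction of retired branches whose prediction was wrong. */
    static double mispredict_rate(uint64_t mispredicted, uint64_t retired)
    {
        return retired ? (double)mispredicted / (double)retired : 0.0;
    }

    int main(void)
    {
        /* e.g. br_misp_retired.all_branches = 1.2M over 48M retired branches */
        printf("mispredict rate: %.2f%%\n",
               100.0 * mispredict_rate(1200000, 48000000)); /* 2.50% */
        return 0;
    }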
cpu_clk_unhalted.core [pipeline]
  Fixed Counter: Counts the number of unhalted core clock cycles
  event=0x3c,period=2000003
  Counts the number of core cycles while the core is not in a halt state. The core enters the halt state when it is running the HLT instruction. This event is a component in many key event ratios. The core frequency may change from time to time, so this event may have a changing ratio with regard to wall-clock time. In systems with a constant core frequency, dividing the event count by the core frequency gives the elapsed time while the core was not in a halt state. This event is architecturally defined and is a designated fixed counter. CPU_CLK_UNHALTED.CORE and CPU_CLK_UNHALTED.CORE_P use the core frequency, which may change from time to time. CPU_CLK_UNHALTED.REF_TSC and CPU_CLK_UNHALTED.REF are not affected by core frequency changes and count as if the core were running at the maximum frequency all the time. The fixed events are CPU_CLK_UNHALTED.CORE and CPU_CLK_UNHALTED.REF_TSC; the programmable events are CPU_CLK_UNHALTED.CORE_P and CPU_CLK_UNHALTED.REF.

cpu_clk_unhalted.core_p [pipeline]
  Core cycles when core is not halted
  event=0x3c,period=2000003
  This event counts the number of core cycles while the core is not in a halt state. The core enters the halt state when it is running the HLT instruction. In mobile systems the core frequency may change from time to time; for this reason this event may have a changing ratio with regard to time.

cpu_clk_unhalted.ref [pipeline]
  Reference cycles when core is not halted
  event=0x0,umask=0x03,period=2000003
  This event counts the number of reference cycles that the core is not in a halt state. The core enters the halt state when it is running the HLT instruction. In mobile systems the core frequency may change from time to time. This event is not affected by core frequency changes; it counts as if the core were running at the maximum frequency all the time.

cpu_clk_unhalted.ref_tsc [pipeline]
  Fixed Counter: Counts the number of unhalted reference clock cycles
  event=0,period=2000003,umask=3
  Counts the number of reference cycles while the core is not in a halt state. The core enters the halt state when it is running the HLT instruction. This event is a component in many key event ratios. The core frequency may change from time to time; this event is not affected by such changes and counts as if the core were running at the maximum frequency all the time. Divide this event count by the core frequency to determine the elapsed time while the core was not in a halt state. This event is architecturally defined and is a designated fixed counter. CPU_CLK_UNHALTED.CORE and CPU_CLK_UNHALTED.CORE_P use the core frequency, which may change from time to time. CPU_CLK_UNHALTED.REF_TSC and CPU_CLK_UNHALTED.REF are not affected by core frequency changes and count as if the core were running at the maximum frequency all the time. The fixed events are CPU_CLK_UNHALTED.CORE and CPU_CLK_UNHALTED.REF_TSC; the programmable events are CPU_CLK_UNHALTED.CORE_P and CPU_CLK_UNHALTED.REF.
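As the description says, on a fixed-frequency system the unhalted-cycle count divided by the core frequency gives the unhalted wall-clock time. A one-line worked example; the 2.4 GHz frequency and the counter value are assumed:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t unhalted = 4800000000ULL; /* cpu_clk_unhalted.core reading */
        double core_hz = 2.4e9;            /* assumed constant core frequency */
        printf("unhalted time: %.2f s\n", (double)unhalted / core_hz); /* 2.00 s */
        return 0;
    }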
cycles_div_busy.all [pipeline]
  Cycles the divider is busy. Does not imply a stall waiting for the divider
  event=0xcd,period=2000003,umask=1
  This event counts the cycles when the divide unit is unable to accept a new divide uop because it is busy processing a previously dispatched uop. The cycles are counted irrespective of whether another divide uop is waiting to enter the divide unit (from the RS). This event might count cycles while a divide is in progress even if the RS is empty. The divide instruction is one of the longest-latency instructions in the machine, so it has a special event associated with it to help determine whether divides are delaying the retirement of instructions.

inst_retired.any [pipeline]
  Fixed Counter: Counts the number of instructions retired
  event=0xc0,period=2000003
  This event counts the number of instructions that retire. For instructions that consist of multiple micro-ops, this event counts exactly once, as the last micro-op of the instruction retires. The event continues counting while instructions retire, including during interrupt service routines caused by hardware interrupts, faults or traps. Background: modern microprocessors employ extensive pipelining and speculative techniques. Since an instruction is sometimes started but never completed, the notion of "retirement" is introduced. A retired instruction is one that commits its state; stated differently, an instruction might be abandoned at some point, and no instruction is truly finished until it retires. This counter measures the number of completed instructions. The fixed event is INST_RETIRED.ANY and the programmable event is INST_RETIRED.ANY_P.

inst_retired.any_p [pipeline]
  Instructions retired
  event=0xc0,period=2000003
  This event counts the number of instructions that retire execution. For instructions that consist of multiple micro-ops, this event counts the retirement of the last micro-op of the instruction. The counter continues counting during hardware interrupts, traps, and inside interrupt handlers.
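inst_retired.any and cpu_clk_unhalted.core back the generic perf hardware events, so the classic instructions-per-cycle ratio can be formed by reading both in one counter group. A sketch under that assumption, with error handling omitted for brevity:

    #include <linux/perf_event.h>
    #include <sys/ioctl.h>
    #include <sys/syscall.h>
    #include <unistd.h>
    #include <string.h>
    #include <stdint.h>
    #include <stdio.h>

    static int open_hw(uint64_t config, int group_fd)
    {
        struct perf_event_attr attr;
        memset(&attr, 0, sizeof(attr));
        attr.size = sizeof(attr);
        attr.type = PERF_TYPE_HARDWARE;
        attr.config = config;
        attr.disabled = (group_fd == -1);   /* only the leader starts disabled */
        attr.read_format = PERF_FORMAT_GROUP;
        return syscall(SYS_perf_event_open, &attr, 0, -1, group_fd, 0);
    }

    int main(void)
    {
        int cycles = open_hw(PERF_COUNT_HW_CPU_CYCLES, -1);   /* group leader */
        int insns  = open_hw(PERF_COUNT_HW_INSTRUCTIONS, cycles);
        (void)insns;

        ioctl(cycles, PERF_EVENT_IOC_ENABLE, PERF_IOC_FLAG_GROUP);
        for (volatile int i = 0; i < 100000000; i++)
            ;                                /* stand-in workload */
        ioctl(cycles, PERF_EVENT_IOC_DISABLE, PERF_IOC_FLAG_GROUP);

        /* PERF_FORMAT_GROUP layout: nr, then values in creation order. */
        struct { uint64_t nr, cyc, ins; } v;
        read(cycles, &v, sizeof(v));
        printf("IPC = %.2f\n", (double)v.ins / (double)v.cyc);
        return 0;
    }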
machine_clears.all [pipeline]
  Counts all machine clears
  event=0xc3,period=200003,umask=8
  Machine clears happen when something occurs in the machine that requires the hardware to take special care to get the right answer. When such a condition is signaled on an instruction, the front end of the machine is notified that it must restart, so no more instructions will be decoded from the current path. All instructions "older" than this one will be allowed to finish. This instruction and all "younger" instructions must be cleared, since they must not be allowed to complete. Essentially, the hardware waits until the problematic instruction is the oldest instruction in the machine. This means all older instructions are retired, and all pending stores (from older instructions) are completed. Then the new path of instructions from the front end is allowed to start into the machine. There are many conditions that might cause a machine clear, including the receipt of an interrupt, or a trap or a fault. All those conditions (including but not limited to MACHINE_CLEARS.MEMORY_ORDERING, MACHINE_CLEARS.SMC, and MACHINE_CLEARS.FP_ASSIST) are captured in the ANY event. In addition, some conditions can be specifically counted (i.e. SMC, MEMORY_ORDERING, FP_ASSIST). However, the sum of SMC, MEMORY_ORDERING, and FP_ASSIST machine clears will not necessarily equal the count of ANY.

machine_clears.smc [pipeline]
  Self-Modifying Code detected
  event=0xc3,period=200003,umask=1
  This event counts the number of times that a program writes to a code section. Self-modifying code causes a severe penalty in all Intel architecture processors.

no_alloc_cycles.all [pipeline]
  Counts the number of cycles when no uops are allocated for any reason
  event=0xca,period=200003,umask=0x3f
  The NO_ALLOC_CYCLES.ALL event counts the number of cycles when the front end does not provide any instructions to be allocated, for any reason. This event indicates the cycles where an allocation stall occurs and no uops are allocated in that cycle.

no_alloc_cycles.mispredicts [pipeline]
  Counts the number of cycles when no uops are allocated and the alloc pipe is stalled waiting for a mispredicted jump to retire. After the misprediction is detected, the front end will start immediately but the allocate pipe stalls until the mispredicted branch retires
  event=0xca,period=200003,umask=4

no_alloc_cycles.not_delivered [pipeline]
  Counts the number of cycles when no uops are allocated, the IQ is empty, and no other condition is blocking allocation
  event=0xca,period=200003,umask=0x50
  The NO_ALLOC_CYCLES.NOT_DELIVERED event is used to measure front-end inefficiencies, i.e. when the front end of the machine is not delivering micro-ops to the back end and the back end is not stalled. This event can be used to identify whether the machine is truly front-end bound. When this event occurs, it is an indication that the front end of the machine is operating at less than its theoretical peak performance. Background: we can think of the processor pipeline as divided into two broad parts, the front end and the back end. The front end is responsible for fetching the instruction, decoding it into micro-ops (uops) in a machine-understandable format, and putting them into a micro-op queue to be consumed by the back end. The back end then takes these micro-ops and allocates the required resources. When all resources are ready, micro-ops are executed. If the back end is not ready to accept micro-ops from the front end, we do not want to count these as front-end bottlenecks. However, whenever we have bottlenecks in the back end, we will have allocation-unit stalls that eventually force the front end to wait until the back end is ready to receive more uops. This event counts cycles only when the back end is requesting more uops and the front end is not able to provide them. Some examples of conditions that cause front-end inefficiencies are: Icache misses, ITLB misses, and decoder restrictions that limit the front-end bandwidth.

no_alloc_cycles.rat_stall [pipeline]
  Counts the number of cycles when no uops are allocated and a RAT stall is asserted
  event=0xca,period=200003,umask=0x20

no_alloc_cycles.rob_full [pipeline]
  Counts the number of cycles when no uops are allocated and the ROB is full (less than 2 entries available)
  event=0xca,period=200003,umask=1
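Following the description above, the fraction of cycles in which the machine is truly front-end bound is NO_ALLOC_CYCLES.NOT_DELIVERED over total unhalted cycles. A tiny worked sketch with made-up counts:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t not_delivered = 300000000ULL;  /* no_alloc_cycles.not_delivered */
        uint64_t cycles        = 2000000000ULL; /* cpu_clk_unhalted.core */
        printf("front-end bound: %.1f%%\n",
               100.0 * (double)not_delivered / (double)cycles); /* 15.0% */
        return 0;
    }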
rs_full_stall.all [pipeline]
  Counts the number of cycles the Alloc pipeline is stalled when any one of the reservation stations (IEC, FPC and MEC) is full. This event is a superset of all the individual RS stall event counts
  event=0xcb,period=200003,umask=0x1f

rs_full_stall.mec [pipeline]
  Counts the number of cycles the allocation pipeline is stalled waiting for a free MEC reservation station entry. The cycles should be appropriately counted in the case of cracked ops, e.g. in the case of a cracked load-op, the load portion is sent to the MEC
  event=0xcb,period=200003,umask=1

uops_retired.all [pipeline]
  Micro-ops retired
  event=0xc2,period=2000003,umask=0x10
  This event counts the number of micro-ops retired. The processor decodes complex macro instructions into a sequence of simpler micro-ops. Most instructions are composed of one or two micro-ops. Some instructions are decoded into longer sequences, such as repeat instructions, floating point transcendental instructions, and assists. In some cases micro-op sequences are fused, or whole instructions are fused, into one micro-op. See other UOPS_RETIRED events for differentiating retired fused and non-fused micro-ops.

uops_retired.ms [pipeline]
  MSROM micro-ops retired
  event=0xc2,period=2000003,umask=1
  This event counts the number of micro-ops retired that were supplied from the MSROM.

mem_uops_retired.dtlb_miss_loads [virtual memory]
  Loads missed DTLB (Precise event)
  event=4,period=200003,umask=8
  This event counts the number of load ops retired that had a DTLB miss.

page_walks.cycles [virtual memory]
  Total cycles for all the page walks (I-side and D-side)
  event=5,period=200003,umask=3
  This event counts every cycle when a data (D) page walk or instruction (I) page walk is in progress. Since a page walk implies a TLB miss, the approximate cost of a TLB miss can be determined from this event.

page_walks.d_side_cycles [virtual memory]
  Duration of D-side page-walks in core cycles
  event=5,period=200003,umask=1
  This event counts every cycle when a D-side (walks due to a load) page walk is in progress. Page walk duration divided by the number of page walks is the average duration of page walks.

page_walks.d_side_walks [virtual memory]
  D-side page-walks
  event=5,edge=1,period=100003,umask=1
  This event counts when a data (D) page walk is completed or started. Since a page walk implies a TLB miss, the number of TLB misses can be counted by counting the number of page walks.

page_walks.i_side_cycles [virtual memory]
  Duration of I-side page-walks in core cycles
  event=5,period=200003,umask=2
  This event counts every cycle when an I-side (walks due to an instruction fetch) page walk is in progress. Page walk duration divided by the number of page walks is the average duration of page walks.

page_walks.i_side_walks [virtual memory]
  I-side page-walks
  event=5,edge=1,period=100003,umask=2
  This event counts when an instruction (I) page walk is completed or started. Since a page walk implies a TLB miss, the number of TLB misses can be counted by counting the number of page walks.

page_walks.walks [virtual memory]
  Total page walks that are completed (I-side and D-side)
  event=5,edge=1,period=100003,umask=3
  This event counts when a data (D) page walk or an instruction (I) page walk is completed or started. Since a page walk implies a TLB miss, the number of TLB misses can be counted by counting the number of page walks.
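Since a page walk implies a TLB miss, the two page_walks counters give an average TLB-miss cost, as the descriptions note. Worked with illustrative counts:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t walk_cycles = 45000000ULL; /* page_walks.cycles */
        uint64_t walks       = 1500000ULL;  /* page_walks.walks */
        printf("avg page walk: %.1f cycles\n",
               (double)walk_cycles / (double)walks); /* 30.0 cycles */
        return 0;
    }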
l2_lines_out.non_silent [cache]
  Counts the number of lines that are evicted by the L2 cache when triggered by an L2 cache fill. Those lines are in Modified state; Modified lines are written back to L3
  event=0xf2,period=200003,umask=2

l2_lines_out.silent [cache]
  Counts the number of lines that are silently dropped by the L2 cache when triggered by an L2 cache fill. These lines are typically in Shared or Exclusive state. A non-threaded event
  event=0xf2,period=200003,umask=1

offcore_response [cache]
  event=0xb7,period=100003,umask=1
  Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and a predefined mask bit value in a dedicated MSR, to specify attributes of the offcore transaction.

The offcore_response.* variants below all share event=0xb7,period=100003,umask=1 and differ only in the offcore_rsp value, so only that value is listed per entry.
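With perf_event_open(2), the dedicated MSR value is conventionally passed through attr.config1 while config carries the event/umask pair; treating that mapping as an assumption here, a sketch for the demand_code_rd.l3_hit.any_snoop entry below:

    #include <linux/perf_event.h>
    #include <sys/ioctl.h>
    #include <sys/syscall.h>
    #include <unistd.h>
    #include <string.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        struct perf_event_attr attr;
        memset(&attr, 0, sizeof(attr));
        attr.size = sizeof(attr);
        attr.type = PERF_TYPE_RAW;
        attr.config  = (0x01 << 8) | 0xb7;  /* event=0xb7, umask=0x1 */
        attr.config1 = 0x3FC01C0004ULL;     /* offcore_rsp attribute bits */
        attr.disabled = 1;

        int fd = syscall(SYS_perf_event_open, &attr, 0, -1, -1, 0);
        if (fd < 0) { perror("perf_event_open"); return 1; }

        ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
        /* ... workload under measurement ... */
        ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);

        uint64_t n;
        read(fd, &n, sizeof(n));
        printf("demand code reads hitting L3: %llu\n", (unsigned long long)n);
        close(fd);
        return 0;
    }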
offcore_response.demand_code_rd.* [cache]: Counts all demand code reads
  l3_hit.any_snoop                 offcore_rsp=0x3FC01C0004
  l3_hit.snoop_hitm                offcore_rsp=0x10001C0004
  l3_hit.snoop_hit_no_fwd          offcore_rsp=0x4001C0004
  l3_hit.snoop_miss                offcore_rsp=0x2001C0004
  l3_hit.snoop_none                offcore_rsp=0x801C0004
  l3_hit.snoop_not_needed          offcore_rsp=0x1001C0004
  l3_hit.spl_hit                   offcore_rsp=0x401C0004
  l3_hit_e.any_snoop               offcore_rsp=0x3FC0080004
  l3_hit_e.snoop_hitm              offcore_rsp=0x1000080004
  l3_hit_e.snoop_hit_no_fwd        offcore_rsp=0x400080004
  l3_hit_e.snoop_miss              offcore_rsp=0x200080004
  l3_hit_e.snoop_none              offcore_rsp=0x80080004
  l3_hit_e.snoop_not_needed        offcore_rsp=0x100080004
  l3_hit_e.spl_hit                 offcore_rsp=0x40080004
  l3_hit_m.any_snoop               offcore_rsp=0x3FC0040004
  l3_hit_m.snoop_hitm              offcore_rsp=0x1000040004
  l3_hit_m.snoop_hit_no_fwd        offcore_rsp=0x400040004
  l3_hit_m.snoop_miss              offcore_rsp=0x200040004
  l3_hit_m.snoop_none              offcore_rsp=0x80040004
  l3_hit_m.snoop_not_needed        offcore_rsp=0x100040004
  l3_hit_m.spl_hit                 offcore_rsp=0x40040004
  l3_hit_s.any_snoop               offcore_rsp=0x3FC0100004
  l3_hit_s.snoop_hitm              offcore_rsp=0x1000100004
  l3_hit_s.snoop_hit_no_fwd        offcore_rsp=0x400100004
  l3_hit_s.snoop_miss              offcore_rsp=0x200100004
  l3_hit_s.snoop_none              offcore_rsp=0x80100004
  l3_hit_s.snoop_not_needed        offcore_rsp=0x100100004
  l3_hit_s.spl_hit                 offcore_rsp=0x40100004
  l4_hit_local_l4.any_snoop        offcore_rsp=0x3FC0400004
  l4_hit_local_l4.snoop_hitm       offcore_rsp=0x1000400004
  l4_hit_local_l4.snoop_hit_no_fwd offcore_rsp=0x400400004
  l4_hit_local_l4.snoop_miss       offcore_rsp=0x200400004
  l4_hit_local_l4.snoop_none       offcore_rsp=0x80400004
  l4_hit_local_l4.snoop_not_needed offcore_rsp=0x100400004
  l4_hit_local_l4.spl_hit          offcore_rsp=0x40400004
  supplier_none.any_snoop          offcore_rsp=0x3FC0020004
  supplier_none.spl_hit            offcore_rsp=0x40020004
offcore_response.demand_data_rd.* [cache]: Counts demand data reads
  l3_hit.any_snoop                 offcore_rsp=0x3FC01C0001
  l3_hit.snoop_hitm                offcore_rsp=0x10001C0001
  l3_hit.snoop_hit_no_fwd          offcore_rsp=0x4001C0001
  l3_hit.snoop_miss                offcore_rsp=0x2001C0001
  l3_hit.snoop_none                offcore_rsp=0x801C0001
  l3_hit.snoop_not_needed          offcore_rsp=0x1001C0001
  l3_hit.spl_hit                   offcore_rsp=0x401C0001
  l3_hit_e.any_snoop               offcore_rsp=0x3FC0080001
  l3_hit_e.snoop_hitm              offcore_rsp=0x1000080001
  l3_hit_e.snoop_hit_no_fwd        offcore_rsp=0x400080001
  l3_hit_e.snoop_miss              offcore_rsp=0x200080001
  l3_hit_e.snoop_none              offcore_rsp=0x80080001
  l3_hit_e.snoop_not_needed        offcore_rsp=0x100080001
  l3_hit_e.spl_hit                 offcore_rsp=0x40080001
  l3_hit_m.any_snoop               offcore_rsp=0x3FC0040001
  l3_hit_m.snoop_hitm              offcore_rsp=0x1000040001
  l3_hit_m.snoop_hit_no_fwd        offcore_rsp=0x400040001
  l3_hit_m.snoop_miss              offcore_rsp=0x200040001
  l3_hit_m.snoop_none              offcore_rsp=0x80040001
  l3_hit_m.snoop_not_needed        offcore_rsp=0x100040001
  l3_hit_m.spl_hit                 offcore_rsp=0x40040001
  l3_hit_s.any_snoop               offcore_rsp=0x3FC0100001
  l3_hit_s.snoop_hitm              offcore_rsp=0x1000100001
  l3_hit_s.snoop_hit_no_fwd        offcore_rsp=0x400100001
  l3_hit_s.snoop_miss              offcore_rsp=0x200100001
  l3_hit_s.snoop_none              offcore_rsp=0x80100001
  l3_hit_s.snoop_not_needed        offcore_rsp=0x100100001
  l3_hit_s.spl_hit                 offcore_rsp=0x40100001
  l4_hit_local_l4.any_snoop        offcore_rsp=0x3FC0400001
  l4_hit_local_l4.snoop_hitm       offcore_rsp=0x1000400001
  l4_hit_local_l4.snoop_hit_no_fwd offcore_rsp=0x400400001
  l4_hit_local_l4.snoop_miss       offcore_rsp=0x200400001
  l4_hit_local_l4.snoop_none       offcore_rsp=0x80400001
  l4_hit_local_l4.snoop_not_needed offcore_rsp=0x100400001
  l4_hit_local_l4.spl_hit          offcore_rsp=0x40400001
  supplier_none.any_snoop          offcore_rsp=0x3FC0020001
  supplier_none.spl_hit            offcore_rsp=0x40020001

offcore_response.demand_rfo.* [cache]: Counts all demand data writes (RFOs)
  l3_hit.any_snoop                 offcore_rsp=0x3FC01C0002
  l3_hit.snoop_hitm                offcore_rsp=0x10001C0002
  l3_hit.snoop_hit_no_fwd          offcore_rsp=0x4001C0002
  l3_hit.snoop_miss                offcore_rsp=0x2001C0002
  l3_hit.snoop_none                offcore_rsp=0x801C0002
  l3_hit.snoop_not_needed          offcore_rsp=0x1001C0002
  l3_hit.spl_hit                   offcore_rsp=0x401C0002
  l3_hit_e.any_snoop               offcore_rsp=0x3FC0080002
  l3_hit_e.snoop_hitm              offcore_rsp=0x1000080002
  l3_hit_e.snoop_hit_no_fwd        offcore_rsp=0x400080002
  l3_hit_e.snoop_miss              offcore_rsp=0x200080002
  l3_hit_e.snoop_none              offcore_rsp=0x80080002
  l3_hit_e.snoop_not_needed        offcore_rsp=0x100080002
  l3_hit_e.spl_hit                 offcore_rsp=0x40080002
  l3_hit_m.any_snoop               offcore_rsp=0x3FC0040002
  l3_hit_m.snoop_hitm              offcore_rsp=0x1000040002
  l3_hit_m.snoop_hit_no_fwd        offcore_rsp=0x400040002
  l3_hit_m.snoop_miss              offcore_rsp=0x200040002
  l3_hit_m.snoop_none              offcore_rsp=0x80040002
  l3_hit_m.snoop_not_needed        offcore_rsp=0x100040002
  l3_hit_m.spl_hit                 offcore_rsp=0x40040002
  l3_hit_s.any_snoop               offcore_rsp=0x3FC0100002
  l3_hit_s.snoop_hitm              offcore_rsp=0x1000100002
  l3_hit_s.snoop_hit_no_fwd        offcore_rsp=0x400100002
  l3_hit_s.snoop_miss              offcore_rsp=0x200100002
  l3_hit_s.snoop_none              offcore_rsp=0x80100002
  l3_hit_s.snoop_not_needed        offcore_rsp=0x100100002
  l3_hit_s.spl_hit                 offcore_rsp=0x40100002
  l4_hit_local_l4.any_snoop        offcore_rsp=0x3FC0400002
  l4_hit_local_l4.snoop_hitm       offcore_rsp=0x1000400002
  l4_hit_local_l4.snoop_hit_no_fwd offcore_rsp=0x400400002
  l4_hit_local_l4.snoop_miss       offcore_rsp=0x200400002
  l4_hit_local_l4.snoop_none       offcore_rsp=0x80400002
  l4_hit_local_l4.snoop_not_needed offcore_rsp=0x100400002
  l4_hit_local_l4.spl_hit          offcore_rsp=0x40400002
  supplier_none.any_snoop          offcore_rsp=0x3FC0020002
  supplier_none.snoop_hitm         offcore_rsp=0x1000020002
  supplier_none.snoop_hit_no_fwd   offcore_rsp=0x400020002
  supplier_none.snoop_miss         offcore_rsp=0x200020002
  supplier_none.snoop_none         offcore_rsp=0x80020002
  supplier_none.snoop_not_needed   offcore_rsp=0x100020002
  supplier_none.spl_hit            offcore_rsp=0x40020002

offcore_response.other.* [cache]: Counts any other requests
  l3_hit.any_snoop                 offcore_rsp=0x3FC01C8000
  l3_hit.snoop_hitm                offcore_rsp=0x10001C8000
  l3_hit.snoop_hit_no_fwd          offcore_rsp=0x4001C8000
  l3_hit.snoop_miss                offcore_rsp=0x2001C8000
  l3_hit.snoop_none                offcore_rsp=0x801C8000
  l3_hit.snoop_not_needed          offcore_rsp=0x1001C8000
  l3_hit.spl_hit                   offcore_rsp=0x401C8000
  l3_hit_e.any_snoop               offcore_rsp=0x3FC0088000
  l3_hit_e.snoop_hitm              offcore_rsp=0x1000088000
  l3_hit_e.snoop_hit_no_fwd        offcore_rsp=0x400088000
  l3_hit_e.snoop_miss              offcore_rsp=0x200088000
  l3_hit_e.snoop_none              offcore_rsp=0x80088000
  l3_hit_e.snoop_not_needed        offcore_rsp=0x100088000
  l3_hit_e.spl_hit                 offcore_rsp=0x40088000
  l3_hit_m.any_snoop               offcore_rsp=0x3FC0048000
  l3_hit_m.snoop_hitm              offcore_rsp=0x1000048000
  l3_hit_m.snoop_hit_no_fwd        offcore_rsp=0x400048000
  l3_hit_m.snoop_miss              offcore_rsp=0x200048000
  l3_hit_m.snoop_none              offcore_rsp=0x80048000
  l3_hit_m.snoop_not_needed        offcore_rsp=0x100048000
  l3_hit_m.spl_hit                 offcore_rsp=0x40048000
  l3_hit_s.any_snoop               offcore_rsp=0x3FC0108000
  l3_hit_s.snoop_hitm              offcore_rsp=0x1000108000
  l3_hit_s.snoop_hit_no_fwd        offcore_rsp=0x400108000
  l3_hit_s.snoop_miss              offcore_rsp=0x200108000
  l3_hit_s.snoop_none              offcore_rsp=0x80108000
  l3_hit_s.snoop_not_needed        offcore_rsp=0x100108000
  l3_hit_s.spl_hit                 offcore_rsp=0x40108000
  l4_hit_local_l4.any_snoop        offcore_rsp=0x3FC0408000
  l4_hit_local_l4.snoop_hitm       offcore_rsp=0x1000408000
  l4_hit_local_l4.snoop_hit_no_fwd offcore_rsp=0x400408000
  l4_hit_local_l4.snoop_miss       offcore_rsp=0x200408000
  l4_hit_local_l4.snoop_none       offcore_rsp=0x80408000
  l4_hit_local_l4.snoop_not_needed offcore_rsp=0x100408000
  l4_hit_local_l4.spl_hit          offcore_rsp=0x40408000
  supplier_none.any_snoop          offcore_rsp=0x3FC0028000
  supplier_none.spl_hit            offcore_rsp=0x40028000

The [memory]-topic offcore_response variants follow; same event=0xb7,period=100003,umask=1 encoding.

offcore_response.demand_code_rd.* [memory]: Counts all demand code reads
  l3_hit.snoop_non_dram            offcore_rsp=0x20001C0004
  l3_hit_e.snoop_non_dram          offcore_rsp=0x2000080004
  l3_hit_m.snoop_non_dram          offcore_rsp=0x2000040004
  l3_hit_s.snoop_non_dram          offcore_rsp=0x2000100004
  l3_miss.any_snoop                offcore_rsp=0x3FFC400004
  l3_miss.snoop_hitm               offcore_rsp=0x103C400004
  l3_miss.snoop_hit_no_fwd         offcore_rsp=0x43C400004
  l3_miss.snoop_miss               offcore_rsp=0x23C400004
  l3_miss.snoop_none               offcore_rsp=0xBC400004
  l3_miss.snoop_non_dram           offcore_rsp=0x203C400004
  l3_miss.snoop_not_needed         offcore_rsp=0x13C400004
  l3_miss.spl_hit                  offcore_rsp=0x7C400004
  l3_miss_local_dram.any_snoop     offcore_rsp=0x3FC4000004
  l3_miss_local_dram.spl_hit       offcore_rsp=0x44000004
  l4_hit_local_l4.snoop_non_dram   offcore_rsp=0x2000400004

offcore_response.demand_data_rd.* [memory]: Counts demand data reads
  l3_hit.snoop_non_dram            offcore_rsp=0x20001C0001
  l3_hit_e.snoop_non_dram          offcore_rsp=0x2000080001
  l3_hit_m.snoop_non_dram          offcore_rsp=0x2000040001
  l3_hit_s.snoop_non_dram          offcore_rsp=0x2000100001
  l3_miss.any_snoop                offcore_rsp=0x3FFC400001
  l3_miss.snoop_hitm               offcore_rsp=0x103C400001
  l3_miss.snoop_hit_no_fwd         offcore_rsp=0x43C400001
  l3_miss.snoop_miss               offcore_rsp=0x23C400001
  l3_miss.snoop_none               offcore_rsp=0xBC400001
  l3_miss.snoop_non_dram           offcore_rsp=0x203C400001
  l3_miss.snoop_not_needed         offcore_rsp=0x13C400001
  l3_miss.spl_hit                  offcore_rsp=0x7C400001
  l3_miss_local_dram.any_snoop     offcore_rsp=0x3FC4000001
  l3_miss_local_dram.spl_hit       offcore_rsp=0x44000001
  l4_hit_local_l4.snoop_non_dram   offcore_rsp=0x2000400001
offcore_response.demand_rfo.* [memory]: Counts all demand data writes (RFOs)
  l3_hit.snoop_non_dram               offcore_rsp=0x20001C0002
  l3_hit_e.snoop_non_dram             offcore_rsp=0x2000080002
  l3_hit_m.snoop_non_dram             offcore_rsp=0x2000040002
  l3_hit_s.snoop_non_dram             offcore_rsp=0x2000100002
  l3_miss.any_snoop                   offcore_rsp=0x3FFC400002
  l3_miss.snoop_hitm                  offcore_rsp=0x103C400002
  l3_miss.snoop_hit_no_fwd            offcore_rsp=0x43C400002
  l3_miss.snoop_miss                  offcore_rsp=0x23C400002
  l3_miss.snoop_none                  offcore_rsp=0xBC400002
  l3_miss.snoop_non_dram              offcore_rsp=0x203C400002
  l3_miss.snoop_not_needed            offcore_rsp=0x13C400002
  l3_miss.spl_hit                     offcore_rsp=0x7C400002
  l3_miss_local_dram.any_snoop        offcore_rsp=0x3FC4000002
  l3_miss_local_dram.snoop_hitm       offcore_rsp=0x1004000002
  l3_miss_local_dram.snoop_hit_no_fwd offcore_rsp=0x404000002
  l3_miss_local_dram.snoop_miss       offcore_rsp=0x204000002
  l3_miss_local_dram.snoop_none       offcore_rsp=0x84000002
  l3_miss_local_dram.snoop_non_dram   offcore_rsp=0x2004000002
  l3_miss_local_dram.snoop_not_needed offcore_rsp=0x104000002
  l3_miss_local_dram.spl_hit          offcore_rsp=0x44000002
  l4_hit_local_l4.snoop_non_dram      offcore_rsp=0x2000400002
  supplier_none.snoop_non_dram        offcore_rsp=0x2000020002
offcore_response.other.* [memory]: Counts any other requests
  l3_hit.snoop_non_dram            offcore_rsp=0x20001C8000
  l3_hit_e.snoop_non_dram          offcore_rsp=0x2000088000
  l3_hit_m.snoop_non_dram          offcore_rsp=0x2000048000
  l3_hit_s.snoop_non_dram          offcore_rsp=0x2000108000
  l3_miss.any_snoop                offcore_rsp=0x3FFC408000
  l3_miss.snoop_hitm               offcore_rsp=0x103C408000
  l3_miss.snoop_hit_no_fwd         offcore_rsp=0x43C408000
  l3_miss.snoop_miss               offcore_rsp=0x23C408000
  l3_miss.snoop_none               offcore_rsp=0xBC408000
  l3_miss.snoop_non_dram           offcore_rsp=0x203C408000
  l3_miss.snoop_not_needed         offcore_rsp=0x13C408000
  l3_miss.spl_hit                  offcore_rsp=0x7C408000
  l3_miss_local_dram.any_snoop     offcore_rsp=0x3FC4008000
  l3_miss_local_dram.spl_hit       offcore_rsp=0x44008000
  l4_hit_local_l4.snoop_non_dram   offcore_rsp=0x2000408000

memory_disambiguation.history_reset [other]
  MEMORY_DISAMBIGUATION.HISTORY_RESET
  event=9,period=2000003,umask=1

unc_arb_trk_occupancy.all [uncore interconnect]
  Number of all core entries outstanding for the memory controller. The outstanding interval starts after the LLC miss and lasts until return of the first data chunk. Accounts for coherent and non-coherent traffic
  event=0x80,umask=1

unc_arb_trk_occupancy.data_read [uncore interconnect]
  Number of Core Data Read entries outstanding for the memory controller. The outstanding interval starts after the LLC miss and lasts until return of the first data chunk
  event=0x80,umask=2

unc_arb_trk_requests.all [uncore interconnect]
  UNC_ARB_TRK_REQUESTS.ALL
  event=0x81,umask=1

unc_arb_trk_requests.data_read [uncore interconnect]
  Number of core coherent Data Read requests sent to the memory controller whose data is returned directly to the requesting agent
  event=0x81,umask=2

unc_arb_trk_requests.drd_direct [uncore interconnect]
  Number of core coherent Data Read requests sent to the memory controller whose data is returned directly to the requesting agent
  event=0x81,umask=2
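A Little's-law style estimate (an inference from the occupancy definition above, not stated in the event descriptions): dividing outstanding-entry cycles by request count approximates the average LLC-miss-to-first-chunk latency in uncore cycles. The counts below are illustrative:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t occupancy = 180000000ULL; /* unc_arb_trk_occupancy.data_read */
        uint64_t requests  = 1000000ULL;   /* unc_arb_trk_requests.data_read */
        printf("avg read latency: %.0f uncore cycles\n",
               (double)occupancy / (double)requests); /* 180 */
        return 0;
    }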
The following offcore_response.* entries likewise share event=0xb7,period=100003,umask=1; all carry the [cache] topic. Their response suffixes mean:
  .any_response: the request has any response type
  .l3_hit.any_snoop: the request hit in the L3
  .l3_hit.hitm_other_core: hit in the L3, and the snoop to one of the sibling cores hits the line in M state and the line is forwarded
  .l3_hit.hit_other_core_no_fwd: hit in the L3, and the snoop to one of the sibling cores hits the line but the line is not forwarded
  .l3_hit.no_snoop_needed: hit in the L3, and sibling core snoops are not needed as either the core-valid bit is not set or the shared line is present in multiple cores
  .l3_hit.snoop_hit_with_fwd: hit in the L3, and the snoop to one of the sibling cores hits the line in E/S/F state and the line is forwarded

offcore_response.all_data_rd.* : Counts all demand & prefetch data reads
  any_response                     offcore_rsp=0x10491
  l3_hit.any_snoop                 offcore_rsp=0x3F803C0491
  l3_hit.hitm_other_core           offcore_rsp=0x10003C0491
  l3_hit.hit_other_core_no_fwd     offcore_rsp=0x4003C0491
  l3_hit.no_snoop_needed           offcore_rsp=0x1003C0491
  l3_hit.snoop_hit_with_fwd        offcore_rsp=0x8003C0491

offcore_response.all_pf_data_rd.* : Counts all prefetch data reads
  any_response                     offcore_rsp=0x10490
  l3_hit.any_snoop                 offcore_rsp=0x3F803C0490
  l3_hit.hitm_other_core           offcore_rsp=0x10003C0490
  l3_hit.hit_other_core_no_fwd     offcore_rsp=0x4003C0490
  l3_hit.no_snoop_needed           offcore_rsp=0x1003C0490
  l3_hit.snoop_hit_with_fwd        offcore_rsp=0x8003C0490

offcore_response.all_pf_rfo.* : Counts prefetch RFOs
  any_response                     offcore_rsp=0x10120
  l3_hit.any_snoop                 offcore_rsp=0x3F803C0120
  l3_hit.hitm_other_core           offcore_rsp=0x10003C0120
  l3_hit.hit_other_core_no_fwd     offcore_rsp=0x4003C0120
  l3_hit.no_snoop_needed           offcore_rsp=0x1003C0120
  l3_hit.snoop_hit_with_fwd        offcore_rsp=0x8003C0120
offcore_response.all_reads.l3_hit.hit_other_core_fwd : all demand & prefetch reads that hit in the L3, where the snoop to one of the sibling cores hits the line in E/S/F state and the line is forwarded
  offcore_rsp=0x8003C07F7

offcore_response.all_rfo.* : Counts all demand & prefetch RFOs
  any_response                     offcore_rsp=0x10122
  l3_hit.any_snoop                 offcore_rsp=0x3F803C0122
  l3_hit.hitm_other_core           offcore_rsp=0x10003C0122
  l3_hit.hit_other_core_no_fwd     offcore_rsp=0x4003C0122
  l3_hit.no_snoop_needed           offcore_rsp=0x1003C0122
  l3_hit.snoop_hit_with_fwd        offcore_rsp=0x8003C0122

offcore_response.demand_code_rd.* : Counts all demand code reads
  any_response                     offcore_rsp=0x10004
  l3_hit.any_snoop                 offcore_rsp=0x3F803C0004
  l3_hit.hitm_other_core           offcore_rsp=0x10003C0004
  l3_hit.hit_other_core_no_fwd     offcore_rsp=0x4003C0004
  l3_hit.no_snoop_needed           offcore_rsp=0x1003C0004
  l3_hit.snoop_hit_with_fwd        offcore_rsp=0x8003C0004

offcore_response.demand_data_rd.* : Counts demand data reads
  any_response                     offcore_rsp=0x10001
  l3_hit.any_snoop                 offcore_rsp=0x3F803C0001
  l3_hit.hitm_other_core           offcore_rsp=0x10003C0001
  l3_hit.hit_other_core_no_fwd     offcore_rsp=0x4003C0001
  l3_hit.no_snoop_needed           offcore_rsp=0x1003C0001
  l3_hit.snoop_hit_with_fwd        offcore_rsp=0x8003C0001

offcore_response.demand_rfo.* : Counts all demand data writes (RFOs)
  any_response                     offcore_rsp=0x10002
  l3_hit.any_snoop                 offcore_rsp=0x3F803C0002
the snoop to one of the sibling cores hits the line in M state and the line is forwardedevent=0xb7,period=100003,umask=1,offcore_rsp=0x10003C000200offcore_response.demand_rfo.l3_hit.hit_other_core_no_fwdcacheCounts all demand data writes (RFOs) that hit in the L3 and the snoop to one of the sibling cores hits the line in M state and the line is forwardedevent=0xb7,period=100003,umask=1,offcore_rsp=0x4003C000200offcore_response.demand_rfo.l3_hit.no_snoop_neededcacheCounts all demand data writes (RFOs) that hit in the L3 and sibling core snoops are not needed as either the core-valid bit is not set or the shared line is present in multiple coresevent=0xb7,period=100003,umask=1,offcore_rsp=0x1003C000200offcore_response.demand_rfo.l3_hit.snoop_hit_with_fwdcacheOFFCORE_RESPONSE.DEMAND_RFO.L3_HIT.SNOOP_HIT_WITH_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x8003C000200offcore_response.pf_l1d_and_sw.any_responsecacheCounts L1 data cache hardware prefetch requests and software prefetch requests that have any response typeevent=0xb7,period=100003,umask=1,offcore_rsp=0x1040000offcore_response.pf_l1d_and_sw.l3_hit.any_snoopcacheCounts L1 data cache hardware prefetch requests and software prefetch requests that hit in the L3event=0xb7,period=100003,umask=1,offcore_rsp=0x3F803C040000offcore_response.pf_l1d_and_sw.l3_hit.hitm_other_corecacheCounts L1 data cache hardware prefetch requests and software prefetch requests that hit in the L3 and the snoop to one of the sibling cores hits the line in M state and the line is forwardedevent=0xb7,period=100003,umask=1,offcore_rsp=0x10003C040000offcore_response.pf_l1d_and_sw.l3_hit.hit_other_core_no_fwdcacheCounts L1 data cache hardware prefetch requests and software prefetch requests that hit in the L3 and the snoop to one of the sibling cores hits the line in M state and the line is forwardedevent=0xb7,period=100003,umask=1,offcore_rsp=0x4003C040000offcore_response.pf_l1d_and_sw.l3_hit.no_snoop_neededcacheCounts L1 data cache hardware prefetch requests and software prefetch requests that hit in the L3 and sibling core snoops are not needed as either the core-valid bit is not set or the shared line is present in multiple coresevent=0xb7,period=100003,umask=1,offcore_rsp=0x1003C040000offcore_response.pf_l1d_and_sw.l3_hit.snoop_hit_with_fwdcacheOFFCORE_RESPONSE.PF_L1D_AND_SW.L3_HIT.SNOOP_HIT_WITH_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x8003C040000offcore_response.pf_l2_data_rd.any_responsecacheCounts prefetch (that bring data to L2) data reads that have any response typeevent=0xb7,period=100003,umask=1,offcore_rsp=0x1001000offcore_response.pf_l2_data_rd.l3_hit.any_snoopcacheCounts prefetch (that bring data to L2) data reads that hit in the L3event=0xb7,period=100003,umask=1,offcore_rsp=0x3F803C001000offcore_response.pf_l2_data_rd.l3_hit.hitm_other_corecacheCounts prefetch (that bring data to L2) data reads that hit in the L3 and the snoop to one of the sibling cores hits the line in M state and the line is forwardedevent=0xb7,period=100003,umask=1,offcore_rsp=0x10003C001000offcore_response.pf_l2_data_rd.l3_hit.hit_other_core_no_fwdcacheCounts prefetch (that bring data to L2) data reads that hit in the L3 and the snoop to one of the sibling cores hits the line in M state and the line is forwardedevent=0xb7,period=100003,umask=1,offcore_rsp=0x4003C001000offcore_response.pf_l2_data_rd.l3_hit.no_snoop_neededcacheCounts prefetch (that bring data to L2) data reads that hit in the L3 and sibling core snoops are not needed as either the core-valid bit is not 
set or the shared line is present in multiple coresevent=0xb7,period=100003,umask=1,offcore_rsp=0x1003C001000offcore_response.pf_l2_data_rd.l3_hit.snoop_hit_with_fwdcacheOFFCORE_RESPONSE.PF_L2_DATA_RD.L3_HIT.SNOOP_HIT_WITH_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x8003C001000offcore_response.pf_l2_rfo.any_responsecacheCounts all prefetch (that bring data to L2) RFOs that have any response typeevent=0xb7,period=100003,umask=1,offcore_rsp=0x1002000offcore_response.pf_l2_rfo.l3_hit.any_snoopcacheCounts all prefetch (that bring data to L2) RFOs that hit in the L3event=0xb7,period=100003,umask=1,offcore_rsp=0x3F803C002000offcore_response.pf_l2_rfo.l3_hit.hitm_other_corecacheCounts all prefetch (that bring data to L2) RFOs that hit in the L3 and the snoop to one of the sibling cores hits the line in M state and the line is forwardedevent=0xb7,period=100003,umask=1,offcore_rsp=0x10003C002000offcore_response.pf_l2_rfo.l3_hit.hit_other_core_no_fwdcacheCounts all prefetch (that bring data to L2) RFOs that hit in the L3 and the snoop to one of the sibling cores hits the line in M state and the line is forwardedevent=0xb7,period=100003,umask=1,offcore_rsp=0x4003C002000offcore_response.pf_l2_rfo.l3_hit.no_snoop_neededcacheCounts all prefetch (that bring data to L2) RFOs that hit in the L3 and sibling core snoops are not needed as either the core-valid bit is not set or the shared line is present in multiple coresevent=0xb7,period=100003,umask=1,offcore_rsp=0x1003C002000offcore_response.pf_l2_rfo.l3_hit.snoop_hit_with_fwdcacheOFFCORE_RESPONSE.PF_L2_RFO.L3_HIT.SNOOP_HIT_WITH_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x8003C002000offcore_response.pf_l3_data_rd.any_responsecacheCounts all prefetch (that bring data to LLC only) data reads that have any response typeevent=0xb7,period=100003,umask=1,offcore_rsp=0x1008000offcore_response.pf_l3_data_rd.l3_hit.any_snoopcacheCounts all prefetch (that bring data to LLC only) data reads that hit in the L3event=0xb7,period=100003,umask=1,offcore_rsp=0x3F803C008000offcore_response.pf_l3_data_rd.l3_hit.hitm_other_corecacheCounts all prefetch (that bring data to LLC only) data reads that hit in the L3 and the snoop to one of the sibling cores hits the line in M state and the line is forwardedevent=0xb7,period=100003,umask=1,offcore_rsp=0x10003C008000offcore_response.pf_l3_data_rd.l3_hit.hit_other_core_no_fwdcacheCounts all prefetch (that bring data to LLC only) data reads that hit in the L3 and the snoop to one of the sibling cores hits the line in M state and the line is forwardedevent=0xb7,period=100003,umask=1,offcore_rsp=0x4003C008000offcore_response.pf_l3_data_rd.l3_hit.no_snoop_neededcacheCounts all prefetch (that bring data to LLC only) data reads that hit in the L3 and sibling core snoops are not needed as either the core-valid bit is not set or the shared line is present in multiple coresevent=0xb7,period=100003,umask=1,offcore_rsp=0x1003C008000offcore_response.pf_l3_data_rd.l3_hit.snoop_hit_with_fwdcacheOFFCORE_RESPONSE.PF_L3_DATA_RD.L3_HIT.SNOOP_HIT_WITH_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x8003C008000offcore_response.pf_l3_rfo.any_responsecacheCounts all prefetch (that bring data to LLC only) RFOs that have any response typeevent=0xb7,period=100003,umask=1,offcore_rsp=0x1010000offcore_response.pf_l3_rfo.l3_hit.any_snoopcacheCounts all prefetch (that bring data to LLC only) RFOs that hit in the L3event=0xb7,period=100003,umask=1,offcore_rsp=0x3F803C010000offcore_response.pf_l3_rfo.l3_hit.hitm_other_corecacheCounts all prefetch 
(that bring data to LLC only) RFOs that hit in the L3 and the snoop to one of the sibling cores hits the line in M state and the line is forwardedevent=0xb7,period=100003,umask=1,offcore_rsp=0x10003C010000offcore_response.pf_l3_rfo.l3_hit.hit_other_core_no_fwdcacheCounts all prefetch (that bring data to LLC only) RFOs that hit in the L3 and the snoop to one of the sibling cores hits the line in M state and the line is forwardedevent=0xb7,period=100003,umask=1,offcore_rsp=0x4003C010000offcore_response.pf_l3_rfo.l3_hit.no_snoop_neededcacheCounts all prefetch (that bring data to LLC only) RFOs that hit in the L3 and sibling core snoops are not needed as either the core-valid bit is not set or the shared line is present in multiple coresevent=0xb7,period=100003,umask=1,offcore_rsp=0x1003C010000offcore_response.pf_l3_rfo.l3_hit.snoop_hit_with_fwdcacheOFFCORE_RESPONSE.PF_L3_RFO.L3_HIT.SNOOP_HIT_WITH_FWDevent=0xb7,period=100003,umask=1,offcore_rsp=0x8003C010000fp_arith_inst_retired.512b_packed_doublefloating pointCounts number of SSE/AVX computational 512-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below.  Each count represents 8 computation operations, one for each element.  Applies to SSE* and AVX* packed double precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT14 RCP14 FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per elementevent=0xc7,period=2000003,umask=0x4000Number of SSE/AVX computational 512-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below.  Each count represents 8 computation operations, one for each element.  Applies to SSE* and AVX* packed double precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT14 RCP14 FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these eventsfp_arith_inst_retired.512b_packed_singlefloating pointCounts number of SSE/AVX computational 512-bit packed single precision floating-point instructions retired; some instructions will count twice as noted below.  Each count represents 16 computation operations, one for each element.  Applies to SSE* and AVX* packed single precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT14 RCP14 FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per elementevent=0xc7,period=2000003,umask=0x8000Number of SSE/AVX computational 512-bit packed single precision floating-point instructions retired; some instructions will count twice as noted below.  Each count represents 16 computation operations, one for each element.  Applies to SSE* and AVX* packed single precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT14 RCP14 FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. 
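The two 512-bit packed-FP events above translate directly into a FLOP estimate: multiply the packed-double count by 8 and the packed-single count by 16, keeping in mind that an FMA retires as one instruction but two operations per element, so the result is an upper bound. A minimal sketch in Python, assuming a perf binary on PATH, that these symbolic event names resolve on the target CPU, and that perf stat -x, writes one CSV record per event to stderr:

#!/usr/bin/env python3
"""Rough AVX-512 FLOP count for a child command (sketch, see caveats above)."""
import subprocess
import sys

# Elements per retired instruction for each event listed in the dump.
EVENTS = {
    "fp_arith_inst_retired.512b_packed_double": 8,
    "fp_arith_inst_retired.512b_packed_single": 16,
}

def packed_512b_ops(cmd):
    # CSV record layout: value,unit,event-name,run-time,percent,...
    res = subprocess.run(
        ["perf", "stat", "-x", ",", "-e", ",".join(EVENTS)] + cmd,
        capture_output=True, text=True,
    )
    total = 0
    for line in res.stderr.splitlines():
        fields = line.split(",")
        if len(fields) > 2 and fields[2] in EVENTS:
            count = int(fields[0]) if fields[0].isdigit() else 0
            total += count * EVENTS[fields[2]]
    return total

if __name__ == "__main__":
    print(f"~{packed_512b_ops(sys.argv[1:]):,} packed FP element ops (FMAs counted twice)")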
offcore_response.* L3-miss matrix (topic: memory). Same scheme as the L3-hit matrix: event=0xb7, period=100003, umask=1, offcore_rsp = RESPONSE base OR REQUEST mask, printed with a trailing 00 (e.g. all_data_rd.l3_miss.any_snoop = 0x3FBC000000 OR 0x0491, printed as 0x3FBC00049100).

REQUEST masks and description phrases:
  all_data_rd     0x0491  all demand & prefetch data reads
  all_pf_data_rd  0x0490  all prefetch data reads
  all_pf_rfo      0x0120  prefetch RFOs
  all_rfo         0x0122  all demand & prefetch RFOs
  demand_code_rd  0x0004  all demand code reads
  demand_data_rd  0x0001  demand data reads
  demand_rfo      0x0002  all demand data writes (RFOs)
  pf_l1d_and_sw   0x0400  L1 data cache hardware prefetch requests and software prefetch requests
  pf_l2_data_rd   0x0010  prefetch (that bring data to L2) data reads
  pf_l2_rfo       0x0020  all prefetch (that bring data to L2) RFOs
  pf_l3_data_rd   0x0080  all prefetch (that bring data to LLC only) data reads
  pf_l3_rfo       0x0100  all prefetch (that bring data to LLC only) RFOs

RESPONSE bases and description templates ("Counts <REQUEST> that ..."); the dump lists all six for each request above, in this order:
  l3_miss.any_snoop                         0x3FBC000000  ... miss in the L3
  l3_miss.remote_hitm                       0x103FC00000  ... miss the L3 and the modified data is transferred from remote cache
  l3_miss.remote_hit_forward                0x083FC00000  ... miss the L3 and clean or shared data is transferred from remote cache
  l3_miss.snoop_miss_or_no_fwd              0x063FC00000  ... miss the L3 and the data is returned from local or remote dram
  l3_miss_local_dram.snoop_miss_or_no_fwd   0x0604000000  ... miss the L3 and the data is returned from local dram
  l3_miss_remote_dram.snoop_miss_or_no_fwd  0x063B800000  ... miss the L3 and the data is returned from remote dram

unc_cha_clockticks (uncore cache): Clockticks of the uncore caching & home agent (CHA). event=001. Counts clockticks of the clock controlling the uncore caching and home agent (CHA).
unc_upi_rxl_crc_errors (uncore interconnect): CRC Errors Detected. event=0xb01. Number of CRC errors detected in the UPI Agent. Each UPI flit incorporates 8 bits of CRC for error detection. This counts the number of flits where the CRC was able to detect an error. After an error has been detected, the UPI agent will send a request to the transmitting socket to resend the flit (as well as any flits that came after it).
unc_upi_rxl_crc_llr_req_transmit (uncore interconnect): LLR Requests Sent. event=801. Number of LLR requests transmitted. This should generally be <= the number of CRC errors detected. If multiple errors are detected before the Rx side receives a LLC_REQ_ACK from the Tx side, there is no need to send more LLR_REQ_NACKs.
unc_m_power_channel_ppd (uncore memory): Cycles where DRAM ranks are in power down (CKE) mode. event=0x8501. Counts cycles when all the ranks in the channel are in PPD (PreCharge Power Down) mode. If IBT (Input Buffer Terminators)=off is enabled, then this event counts the cycles in PPD mode. If IBT=off is not enabled, then this event counts the number of cycles when being in PPD mode could have been taken advantage of.
unc_cha_clockticks (uncore cache): Uncore cache clock ticks. event=001. (A second copy of the CHA clockticks entry, from a different event table in the binary.)
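The local-DRAM and remote-DRAM variants above make the NUMA locality of demand loads directly measurable: each counted response is one 64-byte line fill, so the remote share of the two counts is the remote share of DRAM fills. A minimal sketch, assuming perf accepts the raw offcore_rsp format term and stripping the trailing 00 flag pair from the dumped values; the names local_dram_rd and remote_dram_rd are made up here for readability:

#!/usr/bin/env python3
"""Share of demand data reads served by remote DRAM (sketch)."""
import subprocess
import sys

# offcore_response.demand_data_rd.l3_miss_{local,remote}_dram.snoop_miss_or_no_fwd
LOCAL = "cpu/event=0xb7,umask=0x1,offcore_rsp=0x604000001,name=local_dram_rd/"
REMOTE = "cpu/event=0xb7,umask=0x1,offcore_rsp=0x63B800001,name=remote_dram_rd/"

res = subprocess.run(
    ["perf", "stat", "-x", ",", "-e", f"{LOCAL},{REMOTE}"] + sys.argv[1:],
    capture_output=True, text=True,
)
counts = {}
for line in res.stderr.splitlines():
    f = line.split(",")
    if len(f) > 2 and f[2] in ("local_dram_rd", "remote_dram_rd"):
        counts[f[2]] = int(f[0]) if f[0].isdigit() else 0

total = sum(counts.values()) or 1  # avoid division by zero
print(f"local DRAM fills:  {counts.get('local_dram_rd', 0)}")
print(f"remote DRAM fills: {counts.get('remote_dram_rd', 0)} "
      f"({100 * counts.get('remote_dram_rd', 0) / total:.1f}% of DRAM fills)")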
unc_cha_llc_lookup.code (uncore cache): This event is deprecated. Refer to new event UNC_CHA_LLC_LOOKUP.CODE_READ. event=0x34,umask=0x1bd0ff11
unc_cha_llc_lookup.dmnd_read_local (uncore cache): This event is deprecated. event=0x34,umask=0x841ff11
unc_cha_llc_lookup.rfo_pref_local (uncore cache): This event is deprecated. event=0x34,umask=0x888ff11
unc_cha_llc_lookup.write_local (uncore cache): This event is deprecated. event=0x34,umask=0x842ff11
unc_m2m_prefcam_demand_merge.ch0_xpt (uncore interconnect): Demands Merged with CAMed Prefetches : XPT - Ch 0. event=0x74,umask=101
unc_m2m_prefcam_demand_merge.ch1_xpt (uncore interconnect): Demands Merged with CAMed Prefetches : XPT - Ch 1. event=0x74,umask=401
unc_m2m_prefcam_demand_merge.xpt_allch (uncore interconnect): Demands Merged with CAMed Prefetches : XPT - All Channels. event=0x74,umask=0x1501
unc_m2m_prefcam_demand_no_merge.ch0_xpt (uncore interconnect): Demands Not Merged with CAMed Prefetches : XPT - Ch 0. event=0x75,umask=101
unc_m2m_prefcam_demand_no_merge.ch1_xpt (uncore interconnect): Demands Not Merged with CAMed Prefetches : XPT - Ch 1. event=0x75,umask=401
unc_m2m_prefcam_demand_no_merge.xpt_allch (uncore interconnect): Demands Not Merged with CAMed Prefetches : XPT - All Channels. event=0x75,umask=0x1501
llc_misses.pcie_read (uncore io): PCI Express bandwidth reading at IIO. Derived from unc_iio_data_req_of_cpu.mem_read.part0. event=0x83,ch_mask=1,fc_mask=7,umask=4,ch_mask=0x1f01, unit 4Bytes. Description as for mem_read.part0 below.
llc_misses.pcie_write (uncore io): PCI Express bandwidth writing at IIO. Derived from unc_iio_data_req_of_cpu.mem_write.part0. event=0x83,ch_mask=1,fc_mask=7,umask=1,ch_mask=0x1f01, unit 4Bytes. Description as for mem_write.part0 below.
unc_iio_clockticks (uncore io): Clockticks of the integrated IO (IIO) traffic controller. event=101
unc_iio_data_req_of_cpu.mem_read.part0..part3 (uncore io): PCI Express bandwidth reading at IIO, parts 0-3. event=0x83,fc_mask=7,umask=4, with ch_mask=1/2/4/8 for part0/1/2/3. Shared description: Data requested of the CPU : Card reading from DRAM : Number of DWs (4 bytes) the card requests of the main die. Includes all requests initiated by the Card, including reads and writes. Slot mapping: part0 = x16 card plugged in to Lane 0/1/2/3, or x8 card plugged in to Lane 0/1, or x4 card plugged in to slot 0; part1 = x4 card plugged in to slot 1; part2 = x8 card plugged in to Lane 2/3, or x4 card plugged in to slot 2; part3 = x4 card plugged in to slot 3.
unc_iio_data_req_of_cpu.mem_write.part0..part3 (uncore io): PCI Express bandwidth writing at IIO, parts 0-3. event=0x83,fc_mask=7,umask=1, same ch_mask values and slot mapping; the description reads "Card writing to DRAM".
unc_m2p_clockticks (uncore io): Clockticks of the mesh to PCI (M2P). event=101
llc_misses.mem_read (uncore memory): Read requests to the memory controller. Derived from unc_m_cas_count.rd. event=4,umask=0xf01, unit 64Bytes. Counts the total number of DRAM Read CAS commands, w/ and w/o auto-pre, issued on this channel. This includes underfills.
llc_misses.mem_write (uncore memory): Write requests to the memory controller. Derived from unc_m_cas_count.wr. event=4,umask=0x3001, unit 64Bytes. Counts the total number of DRAM Write CAS commands issued, w/ and w/o auto-pre, on this channel.
unc_m_clockticks (uncore memory): Memory controller clock ticks. event=001. Clockticks of the integrated memory controller (IMC).
unc_m_power_channel_ppd (uncore memory): Cycles where DRAM ranks are in power down (CKE) mode. event=0x8501. Channel PPD Cycles : Number of cycles when all the ranks in the channel are in PPD mode. If IBT=off is enabled, then this can be used to count those cycles. If it is not enabled, then this can count the number of cycles when that could have been taken advantage of.
unc_m_power_self_refresh (uncore memory): Cycles Memory is in self refresh power mode. event=0x4301. Clock-Enabled Self-Refresh : Counts the number of cycles when the iMC is in self-refresh and the iMC still has a clock. This happens in some package C-states. For example, the PCU may ask the iMC to enter self-refresh even though some of the cores are still processing. One use of this is for Monroe technology. Self-refresh is required during package C3 and C6, but there is no clock in the iMC at this time, so it is not possible to count these cases.
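Since every CAS command above moves one 64-byte burst, DRAM bandwidth falls out of the read and write CAS counts divided by wall time. A minimal sketch, assuming the llc_misses.mem_read / llc_misses.mem_write aliases listed above resolve on the machine (uncore events need system-wide mode, hence -a); some perf versions pre-apply the 64Bytes scale, which the unit check accounts for:

#!/usr/bin/env python3
"""Approximate DRAM read/write bandwidth over a fixed window (sketch)."""
import subprocess
import time

WINDOW = 5  # seconds to sample; arbitrary choice for the sketch

t0 = time.time()
res = subprocess.run(
    ["perf", "stat", "-a", "-x", ",",
     "-e", "llc_misses.mem_read,llc_misses.mem_write",
     "sleep", str(WINDOW)],
    capture_output=True, text=True,
)
elapsed = time.time() - t0

for line in res.stderr.splitlines():
    f = line.split(",")
    if len(f) > 2 and f[2].startswith("llc_misses.mem_"):
        raw, unit = f[0], f[1]
        try:
            val = float(raw)
        except ValueError:
            continue  # <not supported> / <not counted>
        nbytes = val if unit == "Bytes" else val * 64  # 64B per CAS if unscaled
        print(f"{f[2]}: {nbytes / elapsed / 1e9:.2f} GB/s")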
unc_m_pre_count.page_miss (uncore memory): Pre-charges due to page misses. event=2,umask=0xc01. DRAM Precharge commands : Precharge due to page miss : Counts the number of DRAM Precharge commands sent on this channel : Page misses are due to precharges from the bank scheduler (rd/wr requests).
unc_m_pre_count.rd (uncore memory): Pre-charge for reads. event=2,umask=401. DRAM Precharge commands : Precharge due to read : Counts the number of DRAM Precharge commands sent on this channel : Precharge from the read bank scheduler.
unc_m_pre_count.wr (uncore memory): Pre-charge for writes. event=2,umask=801. DRAM Precharge commands : Precharge due to write : Counts the number of DRAM Precharge commands sent on this channel : Precharge from the write bank scheduler.
unc_p_clockticks (uncore power): Clockticks of the power control unit (PCU). event=001
l2_rqsts.miss (cache): Read requests with true-miss in L2 cache. event=0x24,period=200003,umask=0x3f00. Counts read requests of any type with true-miss in the L2 cache. True-miss excludes L2 misses that were merged with ongoing L2 misses.
l2_rqsts.references (cache): All accesses to L2 cache. event=0x24,period=200003,umask=0xff00. Counts all requests that were hit or true misses in L2 cache. True-miss excludes misses that were merged with ongoing L2 misses.
mem_load_l3_hit_retired.xsnp_fwd (cache): Snoop hit a modified (HITM) or clean line (HIT_W_FWD) in another on-pkg core which forwarded the data back due to a retired load instruction. Supports address when precise (Precise event). event=0xd2,period=20011,umask=400. Counts retired load instructions where a cross-core snoop hit in another core's caches on this socket, and the data was forwarded back to the requesting core because the data was modified (SNOOP_HITM) or the L3 did not have the data (SNOOP_HIT_WITH_FWD). Supports address when precise (Precise event).
mem_load_l3_hit_retired.xsnp_no_fwd (cache): Snoop hit without forwarding in another on-pkg core due to a retired load instruction; data was supplied by the L3. Supports address when precise (Precise event). event=0xd2,period=20011,umask=200. Counts retired load instructions in which the L3 supplied the data and a cross-core snoop hit in another core's caches on this socket but that other core did not forward the data back (SNOOP_HIT_NO_FWD).
mem_load_misc_retired.uc (cache): Retired instructions with at least 1 uncacheable load or lock. Supports address when precise (Precise event). event=0xd4,period=100007,umask=400. Retired instructions with at least one load to uncacheable memory-type, or at least one cache-line-split locked access.
ocr.demand_data_rd.l3_hit.snoop_hit_with_fwd (cache): OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HIT_WITH_FWD. event=0xb7,period=100003,umask=1,offcore_rsp=0x8003C000100
offcore_requests_outstanding.all_data_rd (cache): Offcore outstanding cacheable Core Data Read transactions in the SuperQueue (SQ), queue to uncore. event=0x60,period=1000003,umask=800. Counts the number of offcore outstanding cacheable Core Data Read transactions in the super queue every cycle. A transaction is considered to be in the offcore outstanding state between L2 miss and transaction completion sent to requestor (SQ de-allocation). See the corresponding umask under OFFCORE_REQUESTS.
offcore_requests_outstanding.cycles_with_data_rd (cache): Cycles when offcore outstanding cacheable Core Data Read transactions are present in the SuperQueue (SQ), queue to uncore. event=0x60,cmask=1,period=1000003,umask=800. Counts cycles when offcore outstanding cacheable Core Data Read transactions are present in the super queue. A transaction is considered to be in the offcore outstanding state between L2 miss and transaction completion sent to requestor (SQ de-allocation). See the corresponding umask under OFFCORE_REQUESTS.
offcore_requests_outstanding.cycles_with_demand_rfo (cache): Cycles with offcore outstanding demand RFO read transactions in the SuperQueue (SQ), queue to uncore. event=0x60,cmask=1,period=1000003,umask=400. Counts the number of offcore outstanding demand RFO read transactions in the super queue every cycle. The 'offcore outstanding' state of the transaction lasts from the L2 miss until the sending of transaction completion to the requestor (SQ deallocation). See the corresponding umask under OFFCORE_REQUESTS.
offcore_requests_outstanding.demand_data_rd (cache): Demand Data Read transactions pending for off-core. Highly correlated. event=0x60,period=1000003,umask=100. Counts the number of off-core outstanding Demand Data Read transactions every cycle. A transaction is considered to be in the off-core outstanding state between L2 cache miss and data-return to the core.
sq_misc.sq_full (cache): Cycles the SuperQueue cannot take any more entries. event=0xf4,period=100003,umask=400. Counts the cycles for which the thread is active and the SuperQueue cannot take any more entries.
exe_activity.bound_on_loads (pipeline): Cycles when the memory subsystem has an outstanding load. Increments by 4 for every such cycle. event=0xa6,cmask=5,period=2000003,umask=0x2100. Counts cycles when the memory subsystem has an outstanding load. Increments by 4 for every such cycle.
ld_blocks_partial.address_alias (pipeline): False dependencies in the MOB due to partial compare on address. event=7,period=100003,umask=100. Counts the number of times a load got blocked due to false dependencies in the MOB due to partial compare on address.
uops_issued.vector_width_mismatch (pipeline): Uops inserted at issue-stage in order to preserve upper bits of vector registers. event=0xe,period=100003,umask=200. Counts the number of Blend uops issued by the Resource Allocation Table (RAT) to the reservation station (RS) in order to preserve upper bits of vector registers. Starting with the Skylake microarchitecture, these Blend uops are needed since every Intel SSE instruction executed in Dirty Upper State needs to preserve bits 128-255 of the destination register. For more information, refer to the "Mixing Intel AVX and Intel SSE Code" section of the Optimization Guide.
unc_arb_coh_trk_requests.all (uncore interconnect): UNC_ARB_COH_TRK_REQUESTS.ALL. event=0x84,umask=101
unc_arb_trk_occupancy.all (uncore interconnect): Each cycle, counts the number of all outgoing valid entries in the ReqTrk. Such an entry is defined as valid from its allocation in the ReqTrk until deallocation. Accounts for coherent and non-coherent traffic. event=0x80,umask=101
unc_mc0_rdcas_count_freerun (uncore memory): Counts every read (RdCAS) issued by the Memory Controller to DRAM (sum of all channels). All requests result in 64-byte data transfers from DRAM. event=0xff,umask=0x2001
unc_mc0_total_reqcount_freerun (uncore memory): Counts every 64B read and write request entering the Memory Controller to DRAM (sum of all channels). Each write request counts as a new request incrementing this counter. However, same-cache-line write requests (both full and partial) are combined into a single 64-byte data transfer to DRAM. event=0xff,umask=0x1001
unc_mc0_wrcas_count_freerun (uncore memory): Counts every write (WrCAS) issued by the Memory Controller to DRAM (sum of all channels). All requests result in 64-byte data transfers from DRAM. event=0xff,umask=0x3001
unc_mc1_rdcas_count_freerun / unc_mc1_total_reqcount_freerun / unc_mc1_wrcas_count_freerun (uncore memory): the same three events for the second memory controller, with identical descriptions and encodings (event=0xff, umask=0x2001 / 0x1001 / 0x3001).
dtlb_store_misses.walk_completed_2m_4m (virtual memory): Page walks completed due to a demand data store to a 2M/4M page. event=0x49,period=100003,umask=400. Counts page walks completed due to demand data stores whose address translations missed in the TLB and were mapped to 2M/4M pages. The page walks can end with or without a page fault.
dtlb_store_misses.walk_completed_4k (virtual memory): Page walks completed due to a demand data store to a 4K page. event=0x49,period=100003,umask=200. Counts page walks completed due to demand data stores whose address translations missed in the TLB and were mapped to 4K pages. The page walks can end with or without a page fault.
offcore_requests.any (cache): All offcore requests. event=0xb0,period=100000,umask=0x8000
offcore_requests.any.read (cache): Offcore read requests. event=0xb0,period=100000,umask=800
offcore_requests.any.rfo (cache): Offcore RFO requests. event=0xb0,period=100000,umask=0x1000
offcore_requests.demand.read_code (cache): Offcore demand code read requests. event=0xb0,period=100000,umask=200
offcore_requests.demand.read_data (cache): Offcore demand data read requests. event=0xb0,period=100000,umask=100
offcore_requests.demand.rfo (cache): Offcore demand RFO requests. event=0xb0,period=100000,umask=400
offcore_requests_outstanding.any.read (cache): Outstanding offcore reads. event=0x60,period=2000000,umask=800
offcore_requests_outstanding.any.read_not_empty (cache): Cycles offcore reads busy. event=0x60,cmask=1,period=2000000,umask=800
offcore_requests_outstanding.demand.read_code (cache): Outstanding offcore demand code reads. event=0x60,period=2000000,umask=200
offcore_requests_outstanding.demand.read_code_not_empty (cache): Cycles offcore demand code read busy. event=0x60,cmask=1,period=2000000,umask=200
offcore_requests_outstanding.demand.read_data (cache): Outstanding offcore demand data reads. event=0x60,period=2000000,umask=100
offcore_requests_outstanding.demand.read_data_not_empty (cache): Cycles offcore demand data read busy. event=0x60,cmask=1,period=2000000,umask=100
offcore_requests_outstanding.demand.rfo (cache): Outstanding offcore demand RFOs. event=0x60,period=2000000,umask=400
offcore_requests_outstanding.demand.rfo_not_empty (cache): Cycles offcore demand RFOs busy. event=0x60,cmask=1,period=2000000,umask=400
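The outstanding/request pairs above combine, via Little's law, into an average off-core latency: OFFCORE_REQUESTS_OUTSTANDING accumulates the number of in-flight transactions each cycle, so dividing it by the matching request count gives mean cycles outstanding per request. A minimal sketch using the older demand data read pair from the list above; which pair of names resolves depends on the CPU generation:

#!/usr/bin/env python3
"""Average cycles a demand data read stays outstanding off-core (sketch)."""
import subprocess
import sys

OUTSTANDING = "offcore_requests_outstanding.demand.read_data"
REQUESTS = "offcore_requests.demand.read_data"

res = subprocess.run(
    ["perf", "stat", "-x", ",", "-e", f"{OUTSTANDING},{REQUESTS}"] + sys.argv[1:],
    capture_output=True, text=True,
)
counts = {}
for line in res.stderr.splitlines():
    f = line.split(",")
    if len(f) > 2:
        counts[f[2]] = int(f[0]) if f[0].isdigit() else 0

if counts.get(REQUESTS):
    avg = counts.get(OUTSTANDING, 0) / counts[REQUESTS]
    print(f"avg demand read latency ~ {avg:.1f} core cycles")
else:
    print("request count unavailable; cannot derive latency")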
offcore_response.* legacy REQUEST x RESPONSE matrix (topic: cache). Every cell is named offcore_response.<request>.<response>, described as "REQUEST = <REQ> and RESPONSE = <RESP>", and encoded as event=0xb7, period=100000, umask=1 with offcore_rsp = (RESPONSE code << 8) OR REQUEST code; as elsewhere, each value is printed in the dump with a trailing 00 (e.g. demand_data_rd.llc_hit_no_other_core = (0x01 << 8) OR 0x01 = 0x101, printed 0x10100).

REQUEST codes: any_data (ANY_DATA read) 0x11; any_ifetch (ANY IFETCH) 0x44; any_request (ANY_REQUEST) 0xff; any_rfo (ANY RFO) 0x22; corewb (CORE_WB) 0x08; data_ifetch (DATA_IFETCH) 0x77; data_in (DATA_IN) 0x33; demand_data (DEMAND_DATA) 0x03; demand_data_rd (DEMAND_DATA_RD) 0x01; demand_ifetch (DEMAND_IFETCH) 0x04; demand_rfo (DEMAND_RFO) 0x02; other (OTHER) 0x80; pf_data (PF_DATA) 0x50; pf_data_rd (PF_DATA_RD) 0x10.

RESPONSE codes: all_local_dram_and_remote_cache_hit (ALL_LOCAL_DRAM AND REMOTE_CACHE_HIT) 0x50; any_cache_dram (ANY_CACHE_DRAM) 0x7f; any_location (ANY_LOCATION) 0xff; io_csr_mmio (IO_CSR_MMIO) 0x80; llc_hit_no_other_core (LLC_HIT_NO_OTHER_CORE) 0x01; llc_hit_other_core_hit (LLC_HIT_OTHER_CORE_HIT) 0x02; llc_hit_other_core_hitm (LLC_HIT_OTHER_CORE_HITM) 0x04; local_cache (LOCAL_CACHE) 0x07; local_dram_and_remote_cache_hit (LOCAL_DRAM AND REMOTE_CACHE_HIT) 0x10; remote_cache_hitm (REMOTE_CACHE_HITM) 0x08.

The dump enumerates all ten responses for each request, in the orders given above; it breaks off mid-way through the pf_data_rd row, after the name of offcore_response.pf_data_rd.any_cache_dram.
= ANY_CACHE_DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x7f1000offcore_response.pf_data_rd.any_locationcacheREQUEST = PF_DATA_RD and RESPONSE = ANY_LOCATIONevent=0xb7,period=100000,umask=1,offcore_rsp=0xff1000offcore_response.pf_data_rd.io_csr_mmiocacheREQUEST = PF_DATA_RD and RESPONSE = IO_CSR_MMIOevent=0xb7,period=100000,umask=1,offcore_rsp=0x801000offcore_response.pf_data_rd.llc_hit_no_other_corecacheREQUEST = PF_DATA_RD and RESPONSE = LLC_HIT_NO_OTHER_COREevent=0xb7,period=100000,umask=1,offcore_rsp=0x11000offcore_response.pf_data_rd.llc_hit_other_core_hitcacheREQUEST = PF_DATA_RD and RESPONSE = LLC_HIT_OTHER_CORE_HITevent=0xb7,period=100000,umask=1,offcore_rsp=0x21000offcore_response.pf_data_rd.llc_hit_other_core_hitmcacheREQUEST = PF_DATA_RD and RESPONSE = LLC_HIT_OTHER_CORE_HITMevent=0xb7,period=100000,umask=1,offcore_rsp=0x41000offcore_response.pf_data_rd.local_cachecacheREQUEST = PF_DATA_RD and RESPONSE = LOCAL_CACHEevent=0xb7,period=100000,umask=1,offcore_rsp=0x71000offcore_response.pf_data_rd.local_dram_and_remote_cache_hitcacheREQUEST = PF_DATA_RD and RESPONSE = LOCAL_DRAM AND REMOTE_CACHE_HITevent=0xb7,period=100000,umask=1,offcore_rsp=0x101000offcore_response.pf_data_rd.remote_cache_hitmcacheREQUEST = PF_DATA_RD and RESPONSE = REMOTE_CACHE_HITMevent=0xb7,period=100000,umask=1,offcore_rsp=0x81000offcore_response.pf_ifetch.all_local_dram_and_remote_cache_hitcacheREQUEST = PF_RFO and RESPONSE = ALL_LOCAL_DRAM AND REMOTE_CACHE_HITevent=0xb7,period=100000,umask=1,offcore_rsp=0x504000offcore_response.pf_ifetch.any_cache_dramcacheREQUEST = PF_RFO and RESPONSE = ANY_CACHE_DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x7f4000offcore_response.pf_ifetch.any_locationcacheREQUEST = PF_RFO and RESPONSE = ANY_LOCATIONevent=0xb7,period=100000,umask=1,offcore_rsp=0xff4000offcore_response.pf_ifetch.io_csr_mmiocacheREQUEST = PF_RFO and RESPONSE = IO_CSR_MMIOevent=0xb7,period=100000,umask=1,offcore_rsp=0x804000offcore_response.pf_ifetch.llc_hit_no_other_corecacheREQUEST = PF_RFO and RESPONSE = LLC_HIT_NO_OTHER_COREevent=0xb7,period=100000,umask=1,offcore_rsp=0x14000offcore_response.pf_ifetch.llc_hit_other_core_hitcacheREQUEST = PF_RFO and RESPONSE = LLC_HIT_OTHER_CORE_HITevent=0xb7,period=100000,umask=1,offcore_rsp=0x24000offcore_response.pf_ifetch.llc_hit_other_core_hitmcacheREQUEST = PF_RFO and RESPONSE = LLC_HIT_OTHER_CORE_HITMevent=0xb7,period=100000,umask=1,offcore_rsp=0x44000offcore_response.pf_ifetch.local_cachecacheREQUEST = PF_RFO and RESPONSE = LOCAL_CACHEevent=0xb7,period=100000,umask=1,offcore_rsp=0x74000offcore_response.pf_ifetch.local_dram_and_remote_cache_hitcacheREQUEST = PF_RFO and RESPONSE = LOCAL_DRAM AND REMOTE_CACHE_HITevent=0xb7,period=100000,umask=1,offcore_rsp=0x104000offcore_response.pf_ifetch.remote_cache_hitmcacheREQUEST = PF_RFO and RESPONSE = REMOTE_CACHE_HITMevent=0xb7,period=100000,umask=1,offcore_rsp=0x84000offcore_response.pf_rfo.all_local_dram_and_remote_cache_hitcacheREQUEST = PF_IFETCH and RESPONSE = ALL_LOCAL_DRAM AND REMOTE_CACHE_HITevent=0xb7,period=100000,umask=1,offcore_rsp=0x502000offcore_response.pf_rfo.any_cache_dramcacheREQUEST = PF_IFETCH and RESPONSE = ANY_CACHE_DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x7f2000offcore_response.pf_rfo.any_locationcacheREQUEST = PF_IFETCH and RESPONSE = ANY_LOCATIONevent=0xb7,period=100000,umask=1,offcore_rsp=0xff2000offcore_response.pf_rfo.io_csr_mmiocacheREQUEST = PF_IFETCH and RESPONSE = 
IO_CSR_MMIOevent=0xb7,period=100000,umask=1,offcore_rsp=0x802000offcore_response.pf_rfo.llc_hit_no_other_corecacheREQUEST = PF_IFETCH and RESPONSE = LLC_HIT_NO_OTHER_COREevent=0xb7,period=100000,umask=1,offcore_rsp=0x12000offcore_response.pf_rfo.llc_hit_other_core_hitcacheREQUEST = PF_IFETCH and RESPONSE = LLC_HIT_OTHER_CORE_HITevent=0xb7,period=100000,umask=1,offcore_rsp=0x22000offcore_response.pf_rfo.llc_hit_other_core_hitmcacheREQUEST = PF_IFETCH and RESPONSE = LLC_HIT_OTHER_CORE_HITMevent=0xb7,period=100000,umask=1,offcore_rsp=0x42000offcore_response.pf_rfo.local_cachecacheREQUEST = PF_IFETCH and RESPONSE = LOCAL_CACHEevent=0xb7,period=100000,umask=1,offcore_rsp=0x72000offcore_response.pf_rfo.local_dram_and_remote_cache_hitcacheREQUEST = PF_IFETCH and RESPONSE = LOCAL_DRAM AND REMOTE_CACHE_HITevent=0xb7,period=100000,umask=1,offcore_rsp=0x102000offcore_response.pf_rfo.remote_cache_hitmcacheREQUEST = PF_IFETCH and RESPONSE = REMOTE_CACHE_HITMevent=0xb7,period=100000,umask=1,offcore_rsp=0x82000offcore_response.prefetch.all_local_dram_and_remote_cache_hitcacheREQUEST = PREFETCH and RESPONSE = ALL_LOCAL_DRAM AND REMOTE_CACHE_HITevent=0xb7,period=100000,umask=1,offcore_rsp=0x507000offcore_response.prefetch.any_cache_dramcacheREQUEST = PREFETCH and RESPONSE = ANY_CACHE_DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x7f7000offcore_response.prefetch.any_locationcacheREQUEST = PREFETCH and RESPONSE = ANY_LOCATIONevent=0xb7,period=100000,umask=1,offcore_rsp=0xff7000offcore_response.prefetch.io_csr_mmiocacheREQUEST = PREFETCH and RESPONSE = IO_CSR_MMIOevent=0xb7,period=100000,umask=1,offcore_rsp=0x807000offcore_response.prefetch.llc_hit_no_other_corecacheREQUEST = PREFETCH and RESPONSE = LLC_HIT_NO_OTHER_COREevent=0xb7,period=100000,umask=1,offcore_rsp=0x17000offcore_response.prefetch.llc_hit_other_core_hitcacheREQUEST = PREFETCH and RESPONSE = LLC_HIT_OTHER_CORE_HITevent=0xb7,period=100000,umask=1,offcore_rsp=0x27000offcore_response.prefetch.llc_hit_other_core_hitmcacheREQUEST = PREFETCH and RESPONSE = LLC_HIT_OTHER_CORE_HITMevent=0xb7,period=100000,umask=1,offcore_rsp=0x47000offcore_response.prefetch.local_cachecacheREQUEST = PREFETCH and RESPONSE = LOCAL_CACHEevent=0xb7,period=100000,umask=1,offcore_rsp=0x77000offcore_response.prefetch.local_dram_and_remote_cache_hitcacheREQUEST = PREFETCH and RESPONSE = LOCAL_DRAM AND REMOTE_CACHE_HITevent=0xb7,period=100000,umask=1,offcore_rsp=0x107000offcore_response.prefetch.remote_cache_hitmcacheREQUEST = PREFETCH and RESPONSE = REMOTE_CACHE_HITMevent=0xb7,period=100000,umask=1,offcore_rsp=0x87000sq_misc.lru_hintscacheSuper Queue LRU hints sent to LLCevent=0xf4,period=2000000,umask=400misalign_mem_ref.storememoryMisaligned store referencesevent=5,period=200000,umask=200offcore_response.any_data.any_dram_and_remote_fwdmemoryREQUEST = ANY_DATA read and RESPONSE = ANY_DRAM AND REMOTE_FWDevent=0xb7,period=100000,umask=1,offcore_rsp=0x301100offcore_response.any_data.any_llc_missmemoryREQUEST = ANY_DATA read and RESPONSE = ANY_LLC_MISSevent=0xb7,period=100000,umask=1,offcore_rsp=0xf81100offcore_response.any_data.other_local_drammemoryREQUEST = ANY_DATA read and RESPONSE = OTHER_LOCAL_DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x401100offcore_response.any_data.remote_drammemoryREQUEST = ANY_DATA read and RESPONSE = REMOTE_DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x201100offcore_response.any_ifetch.any_dram_and_remote_fwdmemoryREQUEST = ANY IFETCH and RESPONSE = ANY_DRAM AND 
REMOTE_FWDevent=0xb7,period=100000,umask=1,offcore_rsp=0x304400offcore_response.any_ifetch.any_llc_missmemoryREQUEST = ANY IFETCH and RESPONSE = ANY_LLC_MISSevent=0xb7,period=100000,umask=1,offcore_rsp=0xf84400offcore_response.any_ifetch.other_local_drammemoryREQUEST = ANY IFETCH and RESPONSE = OTHER_LOCAL_DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x404400offcore_response.any_ifetch.remote_drammemoryREQUEST = ANY IFETCH and RESPONSE = REMOTE_DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x204400offcore_response.any_request.any_dram_and_remote_fwdmemoryREQUEST = ANY_REQUEST and RESPONSE = ANY_DRAM AND REMOTE_FWDevent=0xb7,period=100000,umask=1,offcore_rsp=0x30ff00offcore_response.any_request.any_llc_missmemoryREQUEST = ANY_REQUEST and RESPONSE = ANY_LLC_MISSevent=0xb7,period=100000,umask=1,offcore_rsp=0xf8ff00offcore_response.any_request.other_local_drammemoryREQUEST = ANY_REQUEST and RESPONSE = OTHER_LOCAL_DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x40ff00offcore_response.any_request.remote_drammemoryREQUEST = ANY_REQUEST and RESPONSE = REMOTE_DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x20ff00offcore_response.any_rfo.any_dram_and_remote_fwdmemoryREQUEST = ANY RFO and RESPONSE = ANY_DRAM AND REMOTE_FWDevent=0xb7,period=100000,umask=1,offcore_rsp=0x302200offcore_response.any_rfo.any_llc_missmemoryREQUEST = ANY RFO and RESPONSE = ANY_LLC_MISSevent=0xb7,period=100000,umask=1,offcore_rsp=0xf82200offcore_response.any_rfo.other_local_drammemoryREQUEST = ANY RFO and RESPONSE = OTHER_LOCAL_DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x402200offcore_response.any_rfo.remote_drammemoryREQUEST = ANY RFO and RESPONSE = REMOTE_DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x202200offcore_response.corewb.any_dram_and_remote_fwdmemoryREQUEST = CORE_WB and RESPONSE = ANY_DRAM AND REMOTE_FWDevent=0xb7,period=100000,umask=1,offcore_rsp=0x300800offcore_response.corewb.any_llc_missmemoryREQUEST = CORE_WB and RESPONSE = ANY_LLC_MISSevent=0xb7,period=100000,umask=1,offcore_rsp=0xf80800offcore_response.corewb.other_local_drammemoryREQUEST = CORE_WB and RESPONSE = OTHER_LOCAL_DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x400800offcore_response.corewb.remote_drammemoryREQUEST = CORE_WB and RESPONSE = REMOTE_DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x200800offcore_response.data_ifetch.any_dram_and_remote_fwdmemoryREQUEST = DATA_IFETCH and RESPONSE = ANY_DRAM AND REMOTE_FWDevent=0xb7,period=100000,umask=1,offcore_rsp=0x307700offcore_response.data_ifetch.any_llc_missmemoryREQUEST = DATA_IFETCH and RESPONSE = ANY_LLC_MISSevent=0xb7,period=100000,umask=1,offcore_rsp=0xf87700offcore_response.data_ifetch.other_local_drammemoryREQUEST = DATA_IFETCH and RESPONSE = OTHER_LOCAL_DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x407700offcore_response.data_ifetch.remote_drammemoryREQUEST = DATA_IFETCH and RESPONSE = REMOTE_DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x207700offcore_response.data_in.any_dram_and_remote_fwdmemoryREQUEST = DATA_IN and RESPONSE = ANY_DRAM AND REMOTE_FWDevent=0xb7,period=100000,umask=1,offcore_rsp=0x303300offcore_response.data_in.any_llc_missmemoryREQUEST = DATA_IN and RESPONSE = ANY_LLC_MISSevent=0xb7,period=100000,umask=1,offcore_rsp=0xf83300offcore_response.data_in.other_local_drammemoryREQUEST = DATA_IN and RESPONSE = OTHER_LOCAL_DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x403300offcore_response.data_in.remote_drammemoryREQUEST = DATA_IN and RESPONSE = 
REMOTE_DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x203300offcore_response.demand_data.any_dram_and_remote_fwdmemoryREQUEST = DEMAND_DATA and RESPONSE = ANY_DRAM AND REMOTE_FWDevent=0xb7,period=100000,umask=1,offcore_rsp=0x300300offcore_response.demand_data.any_llc_missmemoryREQUEST = DEMAND_DATA and RESPONSE = ANY_LLC_MISSevent=0xb7,period=100000,umask=1,offcore_rsp=0xf80300offcore_response.demand_data.other_local_drammemoryREQUEST = DEMAND_DATA and RESPONSE = OTHER_LOCAL_DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x400300offcore_response.demand_data.remote_drammemoryREQUEST = DEMAND_DATA and RESPONSE = REMOTE_DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x200300offcore_response.demand_data_rd.any_dram_and_remote_fwdmemoryREQUEST = DEMAND_DATA_RD and RESPONSE = ANY_DRAM AND REMOTE_FWDevent=0xb7,period=100000,umask=1,offcore_rsp=0x300100offcore_response.demand_data_rd.any_llc_missmemoryREQUEST = DEMAND_DATA_RD and RESPONSE = ANY_LLC_MISSevent=0xb7,period=100000,umask=1,offcore_rsp=0xf80100offcore_response.demand_data_rd.other_local_drammemoryREQUEST = DEMAND_DATA_RD and RESPONSE = OTHER_LOCAL_DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x400100offcore_response.demand_data_rd.remote_drammemoryREQUEST = DEMAND_DATA_RD and RESPONSE = REMOTE_DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x200100offcore_response.demand_ifetch.any_dram_and_remote_fwdmemoryREQUEST = DEMAND_IFETCH and RESPONSE = ANY_DRAM AND REMOTE_FWDevent=0xb7,period=100000,umask=1,offcore_rsp=0x300400offcore_response.demand_ifetch.any_llc_missmemoryREQUEST = DEMAND_IFETCH and RESPONSE = ANY_LLC_MISSevent=0xb7,period=100000,umask=1,offcore_rsp=0xf80400offcore_response.demand_ifetch.other_local_drammemoryREQUEST = DEMAND_IFETCH and RESPONSE = OTHER_LOCAL_DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x400400offcore_response.demand_ifetch.remote_drammemoryREQUEST = DEMAND_IFETCH and RESPONSE = REMOTE_DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x200400offcore_response.demand_rfo.any_dram_and_remote_fwdmemoryREQUEST = DEMAND_RFO and RESPONSE = ANY_DRAM AND REMOTE_FWDevent=0xb7,period=100000,umask=1,offcore_rsp=0x300200offcore_response.demand_rfo.any_llc_missmemoryREQUEST = DEMAND_RFO and RESPONSE = ANY_LLC_MISSevent=0xb7,period=100000,umask=1,offcore_rsp=0xf80200offcore_response.demand_rfo.other_local_drammemoryREQUEST = DEMAND_RFO and RESPONSE = OTHER_LOCAL_DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x400200offcore_response.demand_rfo.remote_drammemoryREQUEST = DEMAND_RFO and RESPONSE = REMOTE_DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x200200offcore_response.other.any_dram_and_remote_fwdmemoryREQUEST = OTHER and RESPONSE = ANY_DRAM AND REMOTE_FWDevent=0xb7,period=100000,umask=1,offcore_rsp=0x308000offcore_response.other.any_llc_missmemoryREQUEST = OTHER and RESPONSE = ANY_LLC_MISSevent=0xb7,period=100000,umask=1,offcore_rsp=0xf88000offcore_response.other.other_local_drammemoryREQUEST = OTHER and RESPONSE = OTHER_LOCAL_DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x408000offcore_response.other.remote_drammemoryREQUEST = OTHER and RESPONSE = REMOTE_DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x208000offcore_response.pf_data.any_dram_and_remote_fwdmemoryREQUEST = PF_DATA and RESPONSE = ANY_DRAM AND REMOTE_FWDevent=0xb7,period=100000,umask=1,offcore_rsp=0x305000offcore_response.pf_data.any_llc_missmemoryREQUEST = PF_DATA and RESPONSE = ANY_LLC_MISSevent=0xb7,period=100000,umask=1,offcore_rsp=0xf85000offcore_response.pf_data.other_local_drammemoryREQUEST = PF_DATA and RESPONSE = 
OTHER_LOCAL_DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x405000offcore_response.pf_data.remote_drammemoryREQUEST = PF_DATA and RESPONSE = REMOTE_DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x205000offcore_response.pf_data_rd.any_dram_and_remote_fwdmemoryREQUEST = PF_DATA_RD and RESPONSE = ANY_DRAM AND REMOTE_FWDevent=0xb7,period=100000,umask=1,offcore_rsp=0x301000offcore_response.pf_data_rd.any_llc_missmemoryREQUEST = PF_DATA_RD and RESPONSE = ANY_LLC_MISSevent=0xb7,period=100000,umask=1,offcore_rsp=0xf81000offcore_response.pf_data_rd.other_local_drammemoryREQUEST = PF_DATA_RD and RESPONSE = OTHER_LOCAL_DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x401000offcore_response.pf_data_rd.remote_drammemoryREQUEST = PF_DATA_RD and RESPONSE = REMOTE_DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x201000offcore_response.pf_ifetch.any_dram_and_remote_fwdmemoryREQUEST = PF_RFO and RESPONSE = ANY_DRAM AND REMOTE_FWDevent=0xb7,period=100000,umask=1,offcore_rsp=0x304000offcore_response.pf_ifetch.any_llc_missmemoryREQUEST = PF_RFO and RESPONSE = ANY_LLC_MISSevent=0xb7,period=100000,umask=1,offcore_rsp=0xf84000offcore_response.pf_ifetch.other_local_drammemoryREQUEST = PF_RFO and RESPONSE = OTHER_LOCAL_DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x404000offcore_response.pf_ifetch.remote_drammemoryREQUEST = PF_RFO and RESPONSE = REMOTE_DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x204000offcore_response.pf_rfo.any_dram_and_remote_fwdmemoryREQUEST = PF_IFETCH and RESPONSE = ANY_DRAM AND REMOTE_FWDevent=0xb7,period=100000,umask=1,offcore_rsp=0x302000offcore_response.pf_rfo.any_llc_missmemoryREQUEST = PF_IFETCH and RESPONSE = ANY_LLC_MISSevent=0xb7,period=100000,umask=1,offcore_rsp=0xf82000offcore_response.pf_rfo.other_local_drammemoryREQUEST = PF_IFETCH and RESPONSE = OTHER_LOCAL_DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x402000offcore_response.pf_rfo.remote_drammemoryREQUEST = PF_IFETCH and RESPONSE = REMOTE_DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x202000offcore_response.prefetch.any_dram_and_remote_fwdmemoryREQUEST = PREFETCH and RESPONSE = ANY_DRAM AND REMOTE_FWDevent=0xb7,period=100000,umask=1,offcore_rsp=0x307000offcore_response.prefetch.any_llc_missmemoryREQUEST = PREFETCH and RESPONSE = ANY_LLC_MISSevent=0xb7,period=100000,umask=1,offcore_rsp=0xf87000offcore_response.prefetch.other_local_drammemoryREQUEST = PREFETCH and RESPONSE = OTHER_LOCAL_DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x407000offcore_response.prefetch.remote_drammemoryREQUEST = PREFETCH and RESPONSE = REMOTE_DRAMevent=0xb7,period=100000,umask=1,offcore_rsp=0x207000load_block.overlap_storeotherLoads that partially overlap an earlier storeevent=3,period=200000,umask=200snoopq_requests.codeotherSnoop code requestsevent=0xb4,period=100000,umask=400snoopq_requests.dataotherSnoop data requestsevent=0xb4,period=100000,umask=100snoopq_requests.invalidateotherSnoop invalidate requestsevent=0xb4,period=100000,umask=200snoopq_requests_outstanding.codeotherOutstanding snoop code requestsevent=0xb3,period=2000000,umask=400snoopq_requests_outstanding.code_not_emptyotherCycles snoop code requests queuedevent=0xb3,cmask=1,period=2000000,umask=400snoopq_requests_outstanding.dataotherOutstanding snoop data requestsevent=0xb3,period=2000000,umask=100snoopq_requests_outstanding.data_not_emptyotherCycles snoop data requests queuedevent=0xb3,cmask=1,period=2000000,umask=100snoopq_requests_outstanding.invalidateotherOutstanding snoop invalidate 
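The offcore_rsp values in the matrices above are regular enough to generate rather than look up. A minimal sketch in Python, assuming only the composition rule inferred from the data (response mask shifted left 16 bits, OR'd with the request mask shifted left 8); the resulting cpu/event=...,offcore_rsp=.../ string form is standard perf raw-event syntax:

    # Compose offcore_rsp values for the offcore_response.* events above.
    # Masks are transcribed from the embedded event table; the composition
    # rule offcore_rsp = (response << 16) | (request << 8) is inferred
    # from the data, not documented in the file itself.

    REQUEST = {
        "demand_data_rd": 0x01, "demand_rfo": 0x02, "demand_data": 0x03,
        "demand_ifetch": 0x04, "corewb": 0x08, "pf_data_rd": 0x10,
        "pf_rfo": 0x20, "pf_ifetch": 0x40, "pf_data": 0x50,
        "prefetch": 0x70, "data_in": 0x33, "data_ifetch": 0x77,
        "other": 0x80, "any_request": 0xff,
    }

    CACHE_RESPONSE = {
        "llc_hit_no_other_core": 0x01, "llc_hit_other_core_hit": 0x02,
        "llc_hit_other_core_hitm": 0x04, "local_cache": 0x07,
        "remote_cache_hitm": 0x08, "local_dram_and_remote_cache_hit": 0x10,
        "all_local_dram_and_remote_cache_hit": 0x50, "any_cache_dram": 0x7f,
        "io_csr_mmio": 0x80, "any_location": 0xff,
    }

    def offcore_event(request: str, response: str) -> str:
        """Return a perf raw-event string for offcore_response.<req>.<resp>."""
        rsp = (CACHE_RESPONSE[response] << 16) | (REQUEST[request] << 8)
        return f"cpu/event=0xb7,umask=0x1,offcore_rsp={rsp:#x}/"

    # Reproduces offcore_rsp=0x10800 from the corewb row above:
    print(offcore_event("corewb", "llc_hit_no_other_core"))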
Branch, TLB, and miscellaneous events:
    br_misp_retired.all_branches (pipeline): Mispredicted retired branch instructions (Precise Event). event=0xc5,period=20000,umask=4
    br_misp_retired.conditional (pipeline): Mispredicted conditional retired branches (Precise Event). event=0xc5,period=20000,umask=1
    dtlb_load_misses.large_walk_completed (virtual memory): DTLB load miss large page walks. event=8,period=200000,umask=0x80
    dtlb_load_misses.walk_cycles (virtual memory): DTLB load miss page walk cycles. event=8,period=200000,umask=4
    dtlb_misses.large_walk_completed (virtual memory): DTLB miss large page walks. event=0x49,period=200000,umask=0x80
    dtlb_misses.pde_miss (virtual memory): DTLB misses caused by low part of address. event=0x49,period=200000,umask=0x20
    dtlb_misses.walk_cycles (virtual memory): DTLB miss page walk cycles. event=0x49,period=2000000,umask=4
    ept.walk_cycles (virtual memory): Extended Page Table walk cycles. event=0x4f,period=2000000,umask=0x10
    itlb_misses.large_walk_completed (virtual memory): ITLB miss large page walks. event=0x85,period=200000,umask=0x80
    itlb_misses.walk_cycles (virtual memory): ITLB miss page walk cycles. event=0x85,period=2000000,umask=4
    mem_uncore_retired.local_dram (cache): Load instructions retired with a data source of local DRAM or locally homed remote hitm (Precise Event). event=0xf,period=10000,umask=0x10
    mem_uncore_retired.remote_dram (cache): Load instructions retired remote DRAM and remote home-remote cache HITM (Precise Event). event=0xf,period=10000,umask=0x20
    offcore_requests.uncached_mem (cache): Offcore uncached memory accesses. event=0xb0,period=100000,umask=0x20

A second offcore_response.* block follows, with prose descriptions instead of the REQUEST/RESPONSE template. It uses the same event=0xb7,period=100000,umask=1 encoding and the same request masks; the request names map to these description subjects:
    any_data: Offcore data reads; any_ifetch: Offcore code reads; any_request: Offcore requests; any_rfo: Offcore RFO requests; corewb: Offcore writebacks; data_ifetch: Offcore code or data read requests; demand_data: Offcore demand data requests; demand_data_rd: Offcore demand data reads; demand_ifetch: Offcore demand code reads; demand_rfo: Offcore demand RFO requests; other: Offcore other requests; pf_data: Offcore prefetch data requests; pf_data_rd: Offcore prefetch data reads; pf_ifetch: Offcore prefetch code reads; pf_rfo: Offcore prefetch RFO requests; prefetch: Offcore prefetch requests.

Cache-topic responses in this block, present for every request above:
    local_cache_dram (0x27): "... satisfied by the LLC or local DRAM" (for corewb: "Offcore writebacks to the LLC or local DRAM")
    remote_cache_dram (0x58): "... satisfied by a remote cache or remote DRAM" (for corewb: "Offcore writebacks to a remote cache or remote DRAM")
    data_in is described specially here: "Offcore request = all data, response = local cache or dram" (0x273300) and "Offcore request = all data, response = remote cache or dram" (0x583300).

pf_data additionally carries a full cache-topic response set in this block:
    any_cache_dram (0x7f): Offcore prefetch data requests satisfied by any cache or DRAM
    any_location (0xff): All offcore prefetch data requests
    io_csr_mmio (0x80): Offcore prefetch data requests satisfied by the IO, CSR, MMIO unit
    llc_hit_no_other_core (0x01): Offcore prefetch data requests satisfied by the LLC and not found in a sibling core
    llc_hit_other_core_hit (0x02): Offcore prefetch data requests satisfied by the LLC and HIT in a sibling core
    llc_hit_other_core_hitm (0x04): Offcore prefetch data requests satisfied by the LLC and HITM in a sibling core
    local_cache (0x07): Offcore prefetch data requests satisfied by the LLC
    local_cache_dram (0x27): Offcore prefetch data requests satisfied by the LLC or local DRAM
    remote_cache (0x18): Offcore prefetch data requests satisfied by a remote cache
    remote_cache_dram (0x58): Offcore prefetch data requests satisfied by a remote cache or remote DRAM
    remote_cache_hit (0x10): Offcore prefetch data requests that HIT in a remote cache
    remote_cache_hitm (0x08): Offcore prefetch data requests that HITM in a remote cache

Memory-topic responses in this block. Note that the masks differ from the first block's memory matrix: here local_dram is 0x20 and remote_dram is 0x40, where the first block used 0x20 for remote_dram and 0x40 for other_local_dram.
    local_dram (0x20): "... satisfied by the local DRAM" (data_in: "Offcore data reads, RFOs, and prefetches satisfied by the local DRAM"); present for every request above except other
    remote_dram (0x40): "... satisfied by a remote DRAM" (data_in: "Offcore data reads, RFOs, and prefetches satisfied by the remote DRAM"); present for every request above
    pf_data additionally: any_dram (0x60): Offcore prefetch data requests satisfied by any DRAM; any_llc_miss (0xf8): Offcore prefetch data requests that missed the LLC
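Any row of these tables can be counted directly from its raw encoding, without a symbolic event name. A hedged sketch, assuming a Linux host with a perf binary on PATH, a CPU that actually implements this encoding, and permission to open the event; perf stat's -x option selects CSV output, which it writes to stderr:

    # Count one event from the table above via its raw encoding:
    # offcore_response.any_data.local_cache_dram, i.e.
    # event=0xb7,umask=0x1,offcore_rsp=0x271100. The event string and
    # the CSV field layout (count,unit,event,...) are assumptions about
    # the host perf build; verify against `perf stat --help` locally.
    import subprocess

    EVENT = "cpu/event=0xb7,umask=0x1,offcore_rsp=0x271100/"

    proc = subprocess.run(
        ["perf", "stat", "-x", ",", "-e", EVENT, "--", "sleep", "1"],
        capture_output=True,
        text=True,
    )
    # perf stat emits its counts on stderr; the first CSV field is the value.
    for line in proc.stderr.splitlines():
        fields = line.split(",")
        if len(fields) > 2 and fields[2].startswith("cpu/"):
            print("count:", fields[0], "event:", fields[2])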
Remaining events in this table:
    mem_uncore_retired.local_dram_and_remote_cache_hit (cache): Load instructions retired local dram and remote cache HIT data sources (Precise Event). event=0xf,period=20000,umask=8
    mem_uncore_retired.local_hitm (cache): Load instructions retired that HIT modified data in sibling core (Precise Event). event=0xf,period=40000,umask=2
    mem_uncore_retired.remote_hitm (cache): Retired loads that hit remote socket in modified state (Precise Event). event=0xf,period=40000,umask=4
    bpu_clears.early (pipeline): Early Branch Prediction Unit clears. event=0xe8,period=2000000,umask=1
    uops_executed.core_stall_count (pipeline): Uops executed on any port (core count). event=0xb1,cmask=1,edge=1,inv=1,period=2000000,umask=0x3f
    uops_executed.core_stall_count_no_port5 (pipeline): Uops executed on ports 0-4 (core count). event=0xb1,cmask=1,edge=1,inv=1,period=2000000,umask=0x1f
    dtlb_misses.pde_miss (virtual memory): DTLB misses caused by low part of address. Count also includes 2M page references because 2M pages do not use the PDE. event=0x49,period=200000,umask=0x20

Metric table (name: expression; description and unit where given):
    CPI: 1 / IPC
    IPC (group1): inst_retired.any / cpu_clk_unhalted.thread
    Frontend_Bound_SMT: idq_uops_not_delivered.core / (4 * (cpu_clk_unhalted.thread / 2 * (1 + cpu_clk_unhalted.one_thread_active / cpu_clk_unhalted.ref_xclk)))
    dcache_miss_cpi: l1d-loads-misses / inst_retired.any
    icache_miss_cycles: l1i-loads-misses / inst_retired.any
    cache_miss_cycles (group1): dcache_miss_cpi + icache_miss_cycles
    DCache_L2_All_Hits: l2_rqsts.demand_data_rd_hit + l2_rqsts.pf_hit + l2_rqsts.rfo_hit
    DCache_L2_All_Miss: max(l2_rqsts.all_demand_data_rd - l2_rqsts.demand_data_rd_hit, 0) + l2_rqsts.pf_miss + l2_rqsts.rfo_miss
    DCache_L2_All: DCache_L2_All_Hits + DCache_L2_All_Miss
    DCache_L2_Hits: d_ratio(DCache_L2_All_Hits, DCache_L2_All)
    DCache_L2_Misses: d_ratio(DCache_L2_All_Miss, DCache_L2_All)
    M1: ipc + M2
    M2: ipc + M1
    M3: 1 / M3
    (The M1/M2/M3 expressions are circular as given in the data.)
    L1D_Cache_Fill_BW: 64 * l1d.replacement / 1e9 / duration_time
    C1_Core_Residency, C6_Core_Residency, C7_Core_Residency (Power): cstate_core@cN-residency@ / TSC; CN residency percent per core; unit: 100%
    C2_Pkg_Residency, C3_Pkg_Residency, C6_Pkg_Residency, C7_Pkg_Residency, C8_Pkg_Residency, C9_Pkg_Residency, C10_Pkg_Residency (Power): cstate_pkg@cN-residency@ / TSC; CN residency percent per package; unit: 100%
    smi_cycles (smi): (msr@aperf@ - cycles) / msr@aperf@ if msr@smi@ > 0 else 0; threshold smi_cycles > 0.1; Percentage of cycles spent in System Management Interrupts; unit: 100%
    smi_num (smi): msr@smi@; Number of SMI interrupts; unit: SMI#
    tsx_aborted_cycles (transaction): max(cycles-t - cycles-ct, 0) / cycles if has_event(cycles-t) else 0; Percentage of cycles in aborted transactions; unit: 100%
    tsx_cycles_per_elision (transaction): cycles-t / el-start if has_event(el-start) else 0; Number of cycles within a transaction divided by the number of elisions; unit: cycles / elision
    tsx_cycles_per_transaction (transaction): cycles-t / tx-start if has_event(cycles-t) else 0; Number of cycles within a transaction divided by the number of transactions; unit: cycles / transaction
    tsx_transactional_cycles (transaction): cycles-t / cycles if has_event(cycles-t) else 0; Percentage of cycles within a transaction region; unit: 100%

The remainder of the file is a hybrid-CPU TMA (Top-down Microarchitecture Analysis) metric table, split into cpu_atom and cpu_core halves; a small worked sketch of the level-1 arithmetic used throughout the cpu_atom half comes first, and the table itself after it.
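Every cpu_atom top-down bucket below divides a TOPDOWN_* slot count by a five-wide slot budget, 5 * CPU_CLK_UNHALTED.CORE, and bad speculation is defined as whatever the other three level-1 buckets leave over. A minimal sketch of that level-1 breakdown; the counter values in the example are made-up placeholders, not measurements:

    # Level-1 TMA breakdown as defined by the cpu_atom metrics below:
    # each bucket is a TOPDOWN_* count over a 5-wide slot budget, and
    # bad speculation is the slots not claimed by the other three.
    def tma_level1(fe_bound_all, be_bound_all, retiring_all, core_clks):
        slots = 5 * core_clks
        return {
            "tma_frontend_bound": fe_bound_all / slots,    # flag if > 0.2
            "tma_backend_bound": be_bound_all / slots,     # flag if > 0.1
            "tma_retiring": retiring_all / slots,          # flag if > 0.75
            # Remainder term, per the tma_bad_speculation expression below.
            "tma_bad_speculation":
                (slots - (fe_bound_all + be_bound_all + retiring_all)) / slots,
        }

    # Placeholder counter values standing in for perf reads:
    print(tma_level1(fe_bound_all=2.0e9, be_bound_all=1.5e9,
                     retiring_all=3.0e9, core_clks=2.0e9))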
cpu_atom metric table. In the expressions below, slots = 5 * cpu_atom@CPU_CLK_UNHALTED.CORE@; every tma_* ratio carries scale 100%. Metric-group tags are in brackets.
    tma_allocation_restriction [TopdownL3;tma_L3_group;tma_core_bound_group]: tma_core_bound; threshold tma_allocation_restriction > 0.1 & (tma_core_bound > 0.1 & tma_backend_bound > 0.1). Counts the number of issue slots that were not consumed by the backend due to certain allocation restrictions.
    tma_backend_bound [Default;TopdownL1;tma_L1_group; default group TopdownL1]: cpu_atom@TOPDOWN_BE_BOUND.ALL@ / slots; threshold tma_backend_bound > 0.1. Counts the total number of issue slots that were not consumed by the backend due to backend stalls. Note that uops must be available for consumption in order for this event to count; if a uop is not available (IQ is empty), this event will not count.
    tma_bad_speculation [Default;TopdownL1;tma_L1_group; default group TopdownL1]: (slots - (cpu_atom@TOPDOWN_FE_BOUND.ALL@ + cpu_atom@TOPDOWN_BE_BOUND.ALL@ + cpu_atom@TOPDOWN_RETIRING.ALL@)) / slots; threshold tma_bad_speculation > 0.15. Counts the total number of issue slots that were not consumed by the backend because allocation is stalled due to a mispredicted jump or a machine clear. Only issue slots wasted due to fast nukes, such as memory ordering nukes, are counted; other nukes are not accounted for. Counts all issue slots blocked during this recovery window, including relevant microcode flows and while uops are not yet available in the instruction queue (IQ), and also the issue slots that were consumed by the backend but thrown away because they were younger than the mispredict or machine clear.
    tma_branch_detect [TopdownL3;tma_L3_group;tma_ifetch_latency_group]: cpu_atom@TOPDOWN_FE_BOUND.BRANCH_DETECT@ / slots; threshold tma_branch_detect > 0.05 & (tma_ifetch_latency > 0.15 & tma_frontend_bound > 0.2). Counts the number of issue slots that were not delivered by the frontend due to BACLEARS, which occur when the Branch Target Buffer (BTB) prediction, or lack thereof, is corrected by a later branch predictor in the frontend. Includes BACLEARS due to all branch types, including conditional and unconditional jumps, returns, and indirect branches.
    tma_branch_mispredicts [TopdownL2;tma_L2_group;tma_bad_speculation_group; default group TopdownL2]: cpu_atom@TOPDOWN_BAD_SPECULATION.MISPREDICT@ / slots; threshold tma_branch_mispredicts > 0.05 & tma_bad_speculation > 0.15. Counts the number of issue slots that were not consumed by the backend due to branch mispredicts.
    tma_branch_resteer [TopdownL3;tma_L3_group;tma_ifetch_latency_group]: cpu_atom@TOPDOWN_FE_BOUND.BRANCH_RESTEER@ / slots; threshold tma_branch_resteer > 0.05 & (tma_ifetch_latency > 0.15 & tma_frontend_bound > 0.2). Counts the number of issue slots that were not delivered by the frontend due to BTCLEARS, which occur when the Branch Target Buffer (BTB) predicts a taken branch.
    tma_cisc [TopdownL3;tma_L3_group;tma_ifetch_bandwidth_group]: cpu_atom@TOPDOWN_FE_BOUND.CISC@ / slots; threshold tma_cisc > 0.05 & (tma_ifetch_bandwidth > 0.1 & tma_frontend_bound > 0.2). Counts the number of issue slots that were not delivered by the frontend due to the microcode sequencer (MS).
    tma_core_bound [TopdownL2;tma_L2_group;tma_backend_bound_group; default group TopdownL2]: cpu_atom@TOPDOWN_BE_BOUND.ALLOC_RESTRICTIONS@ / slots; threshold tma_core_bound > 0.1 & tma_backend_bound > 0.1. Counts the number of cycles due to backend-bound stalls that are bounded by core restrictions and not attributed to an outstanding load or store, or to a resource limitation.
    tma_decode [TopdownL3;tma_L3_group;tma_ifetch_bandwidth_group]: cpu_atom@TOPDOWN_FE_BOUND.DECODE@ / slots; threshold tma_decode > 0.05 & (tma_ifetch_bandwidth > 0.1 & tma_frontend_bound > 0.2). Counts the number of issue slots that were not delivered by the frontend due to decode stalls.
    tma_fast_nuke [TopdownL3;tma_L3_group;tma_machine_clears_group]: cpu_atom@TOPDOWN_BAD_SPECULATION.FASTNUKE@ / slots; threshold tma_fast_nuke > 0.05 & (tma_machine_clears > 0.05 & tma_bad_speculation > 0.15). Counts the number of issue slots that were not consumed by the backend due to a machine clear that does not require the use of microcode (classified as a fast nuke), due to memory ordering, memory disambiguation, or memory renaming.
    tma_frontend_bound [Default;TopdownL1;tma_L1_group; default group TopdownL1]: cpu_atom@TOPDOWN_FE_BOUND.ALL@ / slots; threshold tma_frontend_bound > 0.2. Counts the number of issue slots that were not consumed by the backend due to frontend stalls.
    tma_icache_misses [TopdownL3;tma_L3_group;tma_ifetch_latency_group]: cpu_atom@TOPDOWN_FE_BOUND.ICACHE@ / slots; threshold tma_icache_misses > 0.05 & (tma_ifetch_latency > 0.15 & tma_frontend_bound > 0.2). Counts the number of issue slots that were not delivered by the frontend due to instruction cache misses.
    tma_ifetch_bandwidth [TopdownL2;tma_L2_group;tma_frontend_bound_group; default group TopdownL2]: cpu_atom@TOPDOWN_FE_BOUND.FRONTEND_BANDWIDTH@ / slots; threshold tma_ifetch_bandwidth > 0.1 & tma_frontend_bound > 0.2. Counts the number of issue slots that were not delivered by the frontend due to frontend bandwidth restrictions from decode, predecode, CISC, and other limitations.
    tma_ifetch_latency [TopdownL2;tma_L2_group;tma_frontend_bound_group; default group TopdownL2]: cpu_atom@TOPDOWN_FE_BOUND.FRONTEND_LATENCY@ / slots; threshold tma_ifetch_latency > 0.15 & tma_frontend_bound > 0.2. Counts the number of issue slots that were not delivered by the frontend due to frontend latency restrictions from icache misses, ITLB misses, branch detection, and resteer limitations.
    tma_info_bottleneck_%_dtlb_miss_bound_cycles: 100 * (cpu_atom@LD_HEAD.DTLB_MISS_AT_RET@ + cpu_atom@LD_HEAD.PGWALK_AT_RET@) / cpu_atom@CPU_CLK_UNHALTED.CORE@. Percentage of time that retirement is stalled due to a first-level data TLB miss.
    tma_info_bottleneck_%_ifetch_miss_bound_cycles [Ifetch]: 100 * cpu_atom@MEM_BOUND_STALLS.IFETCH@ / cpu_atom@CPU_CLK_UNHALTED.CORE@. Percentage of time that allocation and retirement are stalled by the Frontend Cluster due to an ifetch miss, either an icache or an ITLB miss. See Info.Ifetch_Bound.
    tma_info_bottleneck_%_load_miss_bound_cycles [Load_Store_Miss]: 100 * cpu_atom@MEM_BOUND_STALLS.LOAD@ / cpu_atom@CPU_CLK_UNHALTED.CORE@. Percentage of time that retirement is stalled due to an L1 miss. See Info.Load_Miss_Bound.
    tma_info_bottleneck_%_mem_exec_bound_cycles [Mem_Exec]: 100 * cpu_atom@LD_HEAD.ANY_AT_RET@ / cpu_atom@CPU_CLK_UNHALTED.CORE@. Percentage of time that retirement is stalled by the Memory Cluster due to a pipeline stall. See Info.Mem_Exec_Bound.
    tma_info_br_inst_mix_ipbranch: cpu_atom@INST_RETIRED.ANY@ / cpu_atom@BR_INST_RETIRED.ALL_BRANCHES@. Instructions per branch (lower number means higher occurrence rate).
    tma_info_br_inst_mix_ipcall: cpu_atom@INST_RETIRED.ANY@ / cpu_atom@BR_INST_RETIRED.CALL@. Instructions per (near) call (lower number means higher occurrence rate).
    tma_info_br_inst_mix_ipfarbranch: cpu_atom@INST_RETIRED.ANY@ / cpu_atom@BR_INST_RETIRED.FAR_BRANCH@u. Instructions per far branch; far branches apply upon transitions from application to operating system, handling interrupts, and exceptions (lower number means higher occurrence rate).
    tma_info_br_inst_mix_ipmisp_cond_ntaken: cpu_atom@INST_RETIRED.ANY@ / (cpu_atom@BR_MISP_RETIRED.COND@ - cpu_atom@BR_MISP_RETIRED.COND_TAKEN@). Instructions per retired conditional branch misprediction where the branch was not taken.
    tma_info_br_inst_mix_ipmisp_cond_taken: cpu_atom@INST_RETIRED.ANY@ / cpu_atom@BR_MISP_RETIRED.COND_TAKEN@. Instructions per retired conditional branch misprediction where the branch was taken.
    tma_info_br_inst_mix_ipmisp_indirect: cpu_atom@INST_RETIRED.ANY@ / cpu_atom@BR_MISP_RETIRED.INDIRECT@. Instructions per retired indirect call or jump branch misprediction.
    tma_info_br_inst_mix_ipmisp_ret: cpu_atom@INST_RETIRED.ANY@ / cpu_atom@BR_MISP_RETIRED.RETURN@. Instructions per retired return branch misprediction.
    tma_info_br_inst_mix_ipmispredict: cpu_atom@INST_RETIRED.ANY@ / cpu_atom@BR_MISP_RETIRED.ALL_BRANCHES@. Instructions per retired branch misprediction.
    tma_info_br_mispredict_bound_branch_mispredict_ratio: cpu_atom@BR_MISP_RETIRED.ALL_BRANCHES@ / cpu_atom@BR_INST_RETIRED.ALL_BRANCHES@. Ratio of all branches which mispredict.
    tma_info_br_mispredict_bound_branch_mispredict_to_unknown_branch_ratio: cpu_atom@BR_MISP_RETIRED.ALL_BRANCHES@ / cpu_atom@BACLEARS.ANY@. Ratio between mispredicted branches and unknown branches.
    tma_info_buffer_stalls_%_load_buffer_stall_cycles: 100 * cpu_atom@MEM_SCHEDULER_BLOCK.LD_BUF@ / cpu_atom@CPU_CLK_UNHALTED.CORE@. Percentage of time that allocation is stalled due to load buffer full.
    tma_info_buffer_stalls_%_mem_rsv_stall_cycles: 100 * cpu_atom@MEM_SCHEDULER_BLOCK.RSV@ / cpu_atom@CPU_CLK_UNHALTED.CORE@. Percentage of time that allocation is stalled due to memory reservation stations full.
    tma_info_buffer_stalls_%_store_buffer_stall_cycles: 100 * cpu_atom@MEM_SCHEDULER_BLOCK.ST_BUF@ / cpu_atom@CPU_CLK_UNHALTED.CORE@. Percentage of time that allocation is stalled due to store buffer full.
    tma_info_core_cpi: cpu_atom@CPU_CLK_UNHALTED.CORE@ / cpu_atom@INST_RETIRED.ANY@. Cycles per instruction.
    tma_info_core_ipc: cpu_atom@INST_RETIRED.ANY@ / cpu_atom@CPU_CLK_UNHALTED.CORE@. Instructions per cycle.
    tma_info_core_upi: cpu_atom@UOPS_RETIRED.ALL@ / cpu_atom@INST_RETIRED.ANY@. Uops per instruction.
    tma_info_ifetch_miss_bound_%_ifetchmissbound_with_l2hit / _l3hit / _l3miss: 100 * cpu_atom@MEM_BOUND_STALLS.IFETCH_L2_HIT@ (respectively IFETCH_LLC_HIT, IFETCH_DRAM_HIT) / cpu_atom@MEM_BOUND_STALLS.IFETCH@. Percentage of ifetch-miss-bound stalls where the ifetch miss hits in the L2 (respectively hits in the L3, subsequently misses in the L3).
    tma_info_load_miss_bound_%_loadmissbound_with_l2hit / _l3hit / _l3miss [load_store_bound]: 100 * cpu_atom@MEM_BOUND_STALLS.LOAD_L2_HIT@ (respectively LOAD_LLC_HIT, LOAD_DRAM_HIT) / cpu_atom@MEM_BOUND_STALLS.LOAD@. Percentage of memory-bound stalls where retirement is stalled due to an L1 miss that hit the L2 (respectively hit the L3, subsequently missed the L3).
    tma_info_load_store_bound_l1_bound [load_store_bound]: 100 * cpu_atom@LD_HEAD.L1_BOUND_AT_RET@ / cpu_atom@CPU_CLK_UNHALTED.CORE@. Counts the number of cycles that the oldest load of the load buffer is stalled at retirement due to a pipeline block.
    tma_info_load_store_bound_load_bound [load_store_bound]: 100 * (cpu_atom@LD_HEAD.L1_BOUND_AT_RET@ + cpu_atom@MEM_BOUND_STALLS.LOAD@) / cpu_atom@CPU_CLK_UNHALTED.CORE@. Counts the number of cycles that the oldest load of the load buffer is stalled at retirement.
    tma_info_load_store_bound_store_bound [load_store_bound]: 100 * (cpu_atom@MEM_SCHEDULER_BLOCK.ST_BUF@ / cpu_atom@MEM_SCHEDULER_BLOCK.ALL@) * tma_mem_scheduler. Counts the number of cycles the core is stalled due to store buffer full.
    tma_info_machine_clear_bound_machine_clears_<cause>_pki: 1e3 * cpu_atom@MACHINE_CLEARS.<CAUSE>@ / cpu_atom@INST_RETIRED.ANY@. Counts machine clears per thousand instructions retired, for each cause: disamb (DISAMBIGUATION, memory disambiguation), fp_assist (FP_ASSIST, floating point assists), monuke (MEMORY_ORDERING, memory ordering), mrn (MRN_NUKE, memory renaming), page_fault (PAGE_FAULT, page faults), smc (SMC, self-modifying code).
    tma_info_mem_exec_blocks_%_loads_with_adressaliasing: 100 * cpu_atom@LD_BLOCKS.4K_ALIAS@ / cpu_atom@MEM_UOPS_RETIRED.ALL_LOADS@. Percentage of total non-speculative loads with an address aliasing block.
    tma_info_mem_exec_blocks_%_loads_with_storefwdblk: 100 * cpu_atom@LD_BLOCKS.DATA_UNKNOWN@ / cpu_atom@MEM_UOPS_RETIRED.ALL_LOADS@. Percentage of total non-speculative loads with a store forward or unknown store address block.
    tma_info_mem_exec_bound_%_loadhead_with_<case>: 100 * cpu_atom@LD_HEAD.<X>_AT_RET@ / cpu_atom@LD_HEAD.ANY_AT_RET@. Percentage of Memory Execution Bound due to, per case: l1miss (L1_MISS, a first-level data cache miss), otherpipelineblks (OTHER, other block cases such as pipeline conflicts, fences, etc.), pagewalk (PGWALK, a pagewalk), stlbhit (DTLB_MISS, a second-level TLB miss), storefwding (ST_ADDR, a store forward address match).
    tma_info_mem_mix_ipload: cpu_atom@INST_RETIRED.ANY@ / cpu_atom@MEM_UOPS_RETIRED.ALL_LOADS@. Instructions per load.
    tma_info_mem_mix_ipstore: cpu_atom@INST_RETIRED.ANY@ / cpu_atom@MEM_UOPS_RETIRED.ALL_STORES@. Instructions per store.
    tma_info_mem_mix_load_locks_ratio: 100 * cpu_atom@MEM_UOPS_RETIRED.LOCK_LOADS@ / cpu_atom@MEM_UOPS_RETIRED.ALL_LOADS@. Percentage of total non-speculative loads that perform one or more locks.
    tma_info_mem_mix_load_splits_ratio: 100 * cpu_atom@MEM_UOPS_RETIRED.SPLIT_LOADS@ / cpu_atom@MEM_UOPS_RETIRED.ALL_LOADS@. Percentage of total non-speculative loads that are splits.
    tma_info_mem_mix_memload_ratio: 1e3 * cpu_atom@MEM_UOPS_RETIRED.ALL_LOADS@ / cpu_atom@UOPS_RETIRED.ALL@. Ratio of mem load uops to all uops.
    tma_info_serialization _%_tpause_cycles (the stray space in the name is present in the data): 100 * cpu_atom@SERIALIZATION.C01_MS_SCB@ / slots. Percentage of time that the core is stalled due to a TPAUSE or UMWAIT instruction.
    tma_info_system_cpu_utilization: cpu_atom@CPU_CLK_UNHALTED.REF_TSC@ / TSC. Average CPU utilization.
    tma_info_system_kernel_utilization [Summary]: cpu_atom@CPU_CLK_UNHALTED.CORE_P@k / cpu_atom@CPU_CLK_UNHALTED.CORE@. Fraction of cycles spent in kernel mode.
    tma_info_system_turbo_utilization [Power]: cpu_atom@CPU_CLK_UNHALTED.CORE@ / cpu_atom@CPU_CLK_UNHALTED.REF_TSC@. Average frequency utilization relative to nominal frequency.
    tma_info_uop_mix_fpdiv_uop_ratio / _idiv_uop_ratio / _microcode_uop_ratio / _x87_uop_ratio: 100 * cpu_atom@UOPS_RETIRED.FPDIV@ (respectively IDIV, MS, X87) / cpu_atom@UOPS_RETIRED.ALL@. Percentage of all uops which are FPDiv, IDiv, microcode, or x87 uops, respectively.
    tma_itlb_misses [TopdownL3;tma_L3_group;tma_ifetch_latency_group]: cpu_atom@TOPDOWN_FE_BOUND.ITLB@ / slots; threshold tma_itlb_misses > 0.05 & (tma_ifetch_latency > 0.15 & tma_frontend_bound > 0.2). Counts the number of issue slots that were not delivered by the frontend due to Instruction Translation Lookaside Buffer (ITLB) misses.
    tma_machine_clears [TopdownL2;tma_L2_group;tma_bad_speculation_group; default group TopdownL2]: cpu_atom@TOPDOWN_BAD_SPECULATION.MACHINE_CLEARS@ / slots; threshold tma_machine_clears > 0.05 & tma_bad_speculation > 0.15. Counts the total number of issue slots that were not consumed by the backend because allocation is stalled due to a machine clear (nuke) of any kind, including memory ordering and memory disambiguation.
    tma_mem_scheduler [TopdownL3;tma_L3_group;tma_resource_bound_group]: cpu_atom@TOPDOWN_BE_BOUND.MEM_SCHEDULER@ / slots; threshold tma_mem_scheduler > 0.1 & (tma_resource_bound > 0.2 & tma_backend_bound > 0.1). Counts the number of issue slots that were not consumed by the backend due to memory reservation stalls in which a scheduler is not able to accept uops.
    tma_non_mem_scheduler [TopdownL3;tma_L3_group;tma_resource_bound_group]: cpu_atom@TOPDOWN_BE_BOUND.NON_MEM_SCHEDULER@ / slots; threshold tma_non_mem_scheduler > 0.1 & (tma_resource_bound > 0.2 & tma_backend_bound > 0.1). Counts the number of issue slots that were not consumed by the backend due to IEC or FPC RAT stalls, which can be due to FIQ or IEC reservation stalls in which the integer, floating point, or SIMD scheduler is not able to accept uops.
    tma_nuke [TopdownL3;tma_L3_group;tma_machine_clears_group]: cpu_atom@TOPDOWN_BAD_SPECULATION.NUKE@ / slots; threshold tma_nuke > 0.05 & (tma_machine_clears > 0.05 & tma_bad_speculation > 0.15). Counts the number of issue slots that were not consumed by the backend due to a machine clear that requires the use of microcode (slow nuke).
    tma_other_fb [TopdownL3;tma_L3_group;tma_ifetch_bandwidth_group]: cpu_atom@TOPDOWN_FE_BOUND.OTHER@ / slots; threshold tma_other_fb > 0.05 & (tma_ifetch_bandwidth > 0.1 & tma_frontend_bound > 0.2). Counts the number of issue slots that were not delivered by the frontend due to other common frontend stalls not otherwise categorized.
    tma_predecode [TopdownL3;tma_L3_group;tma_ifetch_bandwidth_group]: cpu_atom@TOPDOWN_FE_BOUND.PREDECODE@ / slots; threshold tma_predecode > 0.05 & (tma_ifetch_bandwidth > 0.1 & tma_frontend_bound > 0.2). Counts the number of issue slots that were not delivered by the frontend due to wrong predecodes.
    tma_register [TopdownL3;tma_L3_group;tma_resource_bound_group]: cpu_atom@TOPDOWN_BE_BOUND.REGISTER@ / slots; threshold tma_register > 0.1 & (tma_resource_bound > 0.2 & tma_backend_bound > 0.1). Counts the number of issue slots that were not consumed by the backend due to the physical register file being unable to accept an entry (marble stalls).
    tma_reorder_buffer [TopdownL3;tma_L3_group;tma_resource_bound_group]: cpu_atom@TOPDOWN_BE_BOUND.REORDER_BUFFER@ / slots; threshold tma_reorder_buffer > 0.1 & (tma_resource_bound > 0.2 & tma_backend_bound > 0.1). Counts the number of issue slots that were not consumed by the backend due to the reorder buffer being full (ROB stalls).
    tma_resource_bound [TopdownL2;tma_L2_group;tma_backend_bound_group; default group TopdownL2]: tma_backend_bound - tma_core_bound; threshold tma_resource_bound > 0.2 & tma_backend_bound > 0.1. Counts the number of cycles the core is stalled due to a resource limitation.
    tma_retiring [Default;TopdownL1;tma_L1_group; default group TopdownL1]: cpu_atom@TOPDOWN_RETIRING.ALL@ / slots; threshold tma_retiring > 0.75. Counts the number of issue slots that result in retirement slots.
    tma_serialization [TopdownL3;tma_L3_group;tma_resource_bound_group]: cpu_atom@TOPDOWN_BE_BOUND.SERIALIZATION@ / slots; threshold tma_serialization > 0.1 & (tma_resource_bound > 0.2 & tma_backend_bound > 0.1). Counts the number of issue slots that were not consumed by the backend due to scoreboards from the instruction queue (IQ), jump execution unit (JEU), or microcode sequencer (MS).

cpu_core metric table (expressions in terms of cpu_core@EVENT@ counts and topdown pseudo-events; tma_* ratios carry scale 100%):
    UNCORE_FREQ [SoC]: tma_info_system_socket_clks / #num_dies / duration_time / 1e9. Uncore frequency per die [GHz].
    tma_alu_op_utilization [TopdownL5;tma_L5_group;tma_ports_utilized_3m_group]: (cpu_core@UOPS_DISPATCHED.PORT_0@ + cpu_core@UOPS_DISPATCHED.PORT_1@ + cpu_core@UOPS_DISPATCHED.PORT_5_11@ + cpu_core@UOPS_DISPATCHED.PORT_6@) / (5 * tma_info_core_core_clks); threshold tma_alu_op_utilization > 0.4. This metric represents the core fraction of cycles the CPU dispatched uops on execution ports for ALU operations.
    tma_assists [BvIO;TopdownL4;tma_L4_group;tma_microcode_sequencer_group]: 78 * cpu_core@ASSISTS.ANY@ / tma_info_thread_slots; threshold tma_assists > 0.1 & (tma_microcode_sequencer > 0.05 & tma_heavy_operations > 0.1). This metric estimates the fraction of slots the CPU retired uops delivered by the Microcode_Sequencer as a result of Assists. Assists are long sequences of uops required in certain corner cases for operations that cannot be handled natively by the execution pipeline. For example, when working with very small floating point values (so-called denormals), the FP units are not set up to perform these operations natively; instead, a sequence of instructions to perform the computation on the denormals is injected into the pipeline. Since these microcode sequences might be dozens of uops long, assists can be extremely deleterious to performance, and they can be avoided in many cases. Sample with: ASSISTS.ANY.
    tma_avx_assists [HPC;TopdownL5;tma_L5_group;tma_assists_group]: 63 * cpu_core@ASSISTS.SSE_AVX_MIX@ / tma_info_thread_slots; threshold tma_avx_assists > 0.1. This metric estimates the fraction of slots the CPU retired uops as a result of handling SSE-to-AVX* or AVX*-to-SSE transition assists.
    tma_backend_bound [BvOB;Default;TmaL1;TopdownL1;tma_L1_group]: cpu_core@topdown-be-bound@ / (cpu_core@topdown-fe-bound@ + cpu_core@topdown-bad-spec@ + cpu_core@topdown-retiring@ + cpu_core@topdown-be-bound@) + 0 * tma_info_thread_slots; threshold tma_backend_bound > 0.2. This category represents the fraction of slots where no uops are being delivered due to a lack of required resources for accepting new uops in the Backend.
Backend is the portion of the processor core where the out-of-order scheduler dispatches ready uops into their respective execution units; and once completed these uops get retired according to program order. For example; stalls due to data-cache misses or stalls due to the divider unit being overloaded are both categorized under Backend Bound. Backend Bound is further divided into two main categories: Memory Bound and Core Bound. Sample with: TOPDOWN.BACKEND_BOUND_SLOTS100%TopdownL1;DefaultTopdownL100tma_bad_speculationDefault;TmaL1;TopdownL1;tma_L1_groupmax(1 - (tma_frontend_bound + tma_backend_bound + tma_retiring), 0)tma_bad_speculation > 0.15This category represents fraction of slots wasted due to incorrect speculationsThis category represents fraction of slots wasted due to incorrect speculations. This include slots used to issue uops that do not eventually get retired and slots for which the issue-pipeline was blocked due to recovery from earlier incorrect speculation. For example; wasted work due to miss-predicted branches are categorized under Bad Speculation category. Incorrect data speculation followed by Memory Ordering Nukes is another example100%TopdownL1;DefaultTopdownL100tma_branch_mispredictsBadSpec;BrMispredicts;BvMP;TmaL2;TopdownL2;tma_L2_group;tma_bad_speculation_group;tma_issueBMcpu_core@topdown\-br\-mispredict@ / (cpu_core@topdown\-fe\-bound@ + cpu_core@topdown\-bad\-spec@ + cpu_core@topdown\-retiring@ + cpu_core@topdown\-be\-bound@) + 0 * tma_info_thread_slotstma_branch_mispredicts > 0.1 & tma_bad_speculation > 0.15This metric represents fraction of slots the CPU has wasted due to Branch MispredictionThis metric represents fraction of slots the CPU has wasted due to Branch Misprediction.  These slots are either wasted by uops fetched from an incorrectly speculated program path; or stalls when the out-of-order part of the machine needs to recover its state from a speculative path. Sample with: TOPDOWN.BR_MISPREDICT_SLOTS. Related metrics: tma_info_bad_spec_branch_misprediction_cost, tma_info_bottleneck_mispredictions, tma_mispredicts_resteers100%TopdownL200tma_branch_resteersFetchLat;TopdownL3;tma_L3_group;tma_fetch_latency_groupcpu_core@INT_MISC.CLEAR_RESTEER_CYCLES@ / tma_info_thread_clks + tma_unknown_branchestma_branch_resteers > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15)This metric represents fraction of cycles the CPU was stalled due to Branch ResteersThis metric represents fraction of cycles the CPU was stalled due to Branch Resteers. Branch Resteers estimates the Frontend delay in fetching operations from corrected path; following all sorts of miss-predicted branches. For example; branchy code with lots of miss-predictions might get categorized under Branch Resteers. Note the value of this node may overlap with its siblings. 
Sample with: BR_MISP_RETIRED.ALL_BRANCHES100%00tma_c01_waitC0Wait;TopdownL4;tma_L4_group;tma_serializing_operation_groupcpu_core@CPU_CLK_UNHALTED.C01@ / tma_info_thread_clkstma_c01_wait > 0.05 & (tma_serializing_operation > 0.1 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))This metric represents fraction of cycles the CPU was stalled due to staying in C0.1 power-performance optimized state (Faster wakeup time; Smaller power savings)100%00tma_c02_waitC0Wait;TopdownL4;tma_L4_group;tma_serializing_operation_groupcpu_core@CPU_CLK_UNHALTED.C02@ / tma_info_thread_clkstma_c02_wait > 0.05 & (tma_serializing_operation > 0.1 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))This metric represents fraction of cycles the CPU was stalled due to staying in C0.2 power-performance optimized state (Slower wakeup time; Larger power savings)100%00tma_ciscTopdownL4;tma_L4_group;tma_microcode_sequencer_groupmax(0, tma_microcode_sequencer - tma_assists)tma_cisc > 0.1 & (tma_microcode_sequencer > 0.05 & tma_heavy_operations > 0.1)This metric estimates fraction of cycles the CPU retired uops originated from CISC (complex instruction set computer) instructionThis metric estimates fraction of cycles the CPU retired uops originated from CISC (complex instruction set computer) instruction. A CISC instruction has multiple uops that are required to perform the instruction's functionality as in the case of read-modify-write as an example. Since these instructions require multiple uops they may or may not imply sub-optimal use of machine resources. Sample with: FRONTEND_RETIRED.MS_FLOWS100%00tma_clears_resteersBadSpec;MachineClears;TopdownL4;tma_L4_group;tma_branch_resteers_group;tma_issueMC(1 - tma_branch_mispredicts / tma_bad_speculation) * cpu_core@INT_MISC.CLEAR_RESTEER_CYCLES@ / tma_info_thread_clkstma_clears_resteers > 0.05 & (tma_branch_resteers > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15))This metric represents fraction of cycles the CPU was stalled due to Branch Resteers as a result of Machine ClearsThis metric represents fraction of cycles the CPU was stalled due to Branch Resteers as a result of Machine Clears. Sample with: INT_MISC.CLEAR_RESTEER_CYCLES. Related metrics: tma_l1_bound, tma_machine_clears, tma_microcode_sequencer, tma_ms_switches100%00tma_contested_accessesBvMS;DataSharing;Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_l3_bound_group(25 * tma_info_system_core_frequency * (cpu_core@MEM_LOAD_L3_HIT_RETIRED.XSNP_FWD@ * (cpu_core@OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HITM@ / (cpu_core@OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HITM@ + cpu_core@OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HIT_WITH_FWD@))) + 24 * tma_info_system_core_frequency * cpu_core@MEM_LOAD_L3_HIT_RETIRED.XSNP_MISS@) * (1 + cpu_core@MEM_LOAD_RETIRED.FB_HIT@ / cpu_core@MEM_LOAD_RETIRED.L1_MISS@ / 2) / tma_info_thread_clkstma_contested_accesses > 0.05 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))This metric estimates fraction of cycles while the memory subsystem was handling synchronizations due to contested accessesThis metric estimates fraction of cycles while the memory subsystem was handling synchronizations due to contested accesses. Contested accesses occur when data written by one Logical Processor are read by another Logical Processor on a different Physical Core. Examples of contested accesses include synchronizations such as locks; true data sharing such as modified locked variables; and false sharing. Sample with: MEM_LOAD_L3_HIT_RETIRED.XSNP_FWD;MEM_LOAD_L3_HIT_RETIRED.XSNP_MISS. 
Related metrics: tma_data_sharing, tma_false_sharing, tma_machine_clears, tma_remote_cache100%00tma_core_boundBackend;Compute;TmaL2;TopdownL2;tma_L2_group;tma_backend_bound_groupmax(0, tma_backend_bound - tma_memory_bound)tma_core_bound > 0.1 & tma_backend_bound > 0.2This metric represents fraction of slots where Core non-memory issues were a bottleneckThis metric represents fraction of slots where Core non-memory issues were a bottleneck.  Shortage in hardware compute resources; or dependencies in software's instructions are both categorized under Core Bound. Hence it may indicate the machine ran out of an out-of-order resource; certain execution units are overloaded or dependencies in program's data- or instruction-flow are limiting the performance (e.g. FP-chained long-latency arithmetic operations)100%TopdownL200tma_data_sharingBvMS;Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_l3_bound_group24 * tma_info_system_core_frequency * (cpu_core@MEM_LOAD_L3_HIT_RETIRED.XSNP_NO_FWD@ + cpu_core@MEM_LOAD_L3_HIT_RETIRED.XSNP_FWD@ * (1 - cpu_core@OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HITM@ / (cpu_core@OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HITM@ + cpu_core@OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HIT_WITH_FWD@))) * (1 + cpu_core@MEM_LOAD_RETIRED.FB_HIT@ / cpu_core@MEM_LOAD_RETIRED.L1_MISS@ / 2) / tma_info_thread_clkstma_data_sharing > 0.05 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))This metric estimates fraction of cycles while the memory subsystem was handling synchronizations due to data-sharing accessesThis metric estimates fraction of cycles while the memory subsystem was handling synchronizations due to data-sharing accesses. Data shared by multiple Logical Processors (even just read shared) may cause increased access latency due to cache coherency. Excessive data sharing can drastically harm multithreaded performance. Sample with: MEM_LOAD_L3_HIT_RETIRED.XSNP_NO_FWD. Related metrics: tma_contested_accesses, tma_false_sharing, tma_machine_clears, tma_remote_cache100%00tma_decoder0_aloneDSBmiss;FetchBW;TopdownL4;tma_L4_group;tma_issueD0;tma_mite_group(cpu_core@INST_DECODED.DECODERS\,cmask\=1@ - cpu_core@INST_DECODED.DECODERS\,cmask\=2@) / tma_info_core_core_clks / 2tma_decoder0_alone > 0.1 & (tma_mite > 0.1 & tma_fetch_bandwidth > 0.2)This metric represents fraction of cycles where decoder-0 was the only active decoderThis metric represents fraction of cycles where decoder-0 was the only active decoder. Related metrics: tma_few_uops_instructions100%00tma_dividerBvCB;TopdownL3;tma_L3_group;tma_core_bound_groupcpu_core@ARITH.DIV_ACTIVE@ / tma_info_thread_clkstma_divider > 0.2 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2)This metric represents fraction of cycles where the Divider unit was activeThis metric represents fraction of cycles where the Divider unit was active. Divide and square root instructions are performed by the Divider unit and can take considerably longer latency than integer or Floating Point addition; subtraction; or multiplication. Sample with: ARITH.DIVIDER_ACTIVE100%00tma_dram_boundMemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_memory_bound_groupcpu_core@MEMORY_ACTIVITY.STALLS_L3_MISS@ / tma_info_thread_clkstma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)This metric estimates how often the CPU was stalled on accesses to external memory (DRAM) by loadsThis metric estimates how often the CPU was stalled on accesses to external memory (DRAM) by loads. Better caching can improve the latency and increase performance. 
Sample with: MEM_LOAD_RETIRED.L3_MISS_PS100%00tma_dsbDSB;FetchBW;TopdownL3;tma_L3_group;tma_fetch_bandwidth_group(cpu_core@IDQ.DSB_CYCLES_ANY@ - cpu_core@IDQ.DSB_CYCLES_OK@) / tma_info_core_core_clks / 2tma_dsb > 0.15 & tma_fetch_bandwidth > 0.2This metric represents Core fraction of cycles in which CPU was likely limited due to DSB (decoded uop cache) fetch pipelineThis metric represents Core fraction of cycles in which CPU was likely limited due to DSB (decoded uop cache) fetch pipeline.  For example; inefficient utilization of the DSB cache structure or bank conflict when reading from it; are categorized here100%00tma_dsb_switchesDSBmiss;FetchLat;TopdownL3;tma_L3_group;tma_fetch_latency_group;tma_issueFBcpu_core@DSB2MITE_SWITCHES.PENALTY_CYCLES@ / tma_info_thread_clkstma_dsb_switches > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15)This metric represents fraction of cycles the CPU was stalled due to switches from DSB to MITE pipelinesThis metric represents fraction of cycles the CPU was stalled due to switches from DSB to MITE pipelines. The DSB (decoded i-cache) is a Uop Cache where the front-end directly delivers Uops (micro operations) avoiding heavy x86 decoding. The DSB pipeline has shorter latency and delivered higher bandwidth than the MITE (legacy instruction decode pipeline). Switching between the two pipelines can cause penalties hence this metric measures the exposed penalty. Sample with: FRONTEND_RETIRED.DSB_MISS_PS. Related metrics: tma_fetch_bandwidth, tma_info_botlnk_l2_dsb_bandwidth, tma_info_botlnk_l2_dsb_misses, tma_info_frontend_dsb_coverage, tma_info_inst_mix_iptb, tma_lcp100%00tma_dtlb_loadBvMT;MemoryTLB;TopdownL4;tma_L4_group;tma_issueTLB;tma_l1_bound_groupmin(7 * cpu_core@DTLB_LOAD_MISSES.STLB_HIT\,cmask\=1@ + cpu_core@DTLB_LOAD_MISSES.WALK_ACTIVE@, max(cpu_core@CYCLE_ACTIVITY.CYCLES_MEM_ANY@ - cpu_core@MEMORY_ACTIVITY.CYCLES_L1D_MISS@, 0)) / tma_info_thread_clkstma_dtlb_load > 0.1 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))This metric roughly estimates the fraction of cycles where the Data TLB (DTLB) was missed by load accessesThis metric roughly estimates the fraction of cycles where the Data TLB (DTLB) was missed by load accesses. TLBs (Translation Look-aside Buffers) are processor caches for recently used entries out of the Page Tables that are used to map virtual- to physical-addresses by the operating system. This metric approximates the potential delay of demand loads missing the first-level data TLB (assuming worst case scenario with back to back misses to different pages). This includes hitting in the second-level TLB (STLB) as well as performing a hardware page walk on an STLB miss. Sample with: MEM_INST_RETIRED.STLB_MISS_LOADS_PS. Related metrics: tma_dtlb_store, tma_info_bottleneck_memory_data_tlbs, tma_info_bottleneck_memory_synchronization100%00tma_dtlb_storeBvMT;MemoryTLB;TopdownL4;tma_L4_group;tma_issueTLB;tma_store_bound_group(7 * cpu_core@DTLB_STORE_MISSES.STLB_HIT\,cmask\=1@ + cpu_core@DTLB_STORE_MISSES.WALK_ACTIVE@) / tma_info_core_core_clkstma_dtlb_store > 0.05 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))This metric roughly estimates the fraction of cycles spent handling first-level data TLB store missesThis metric roughly estimates the fraction of cycles spent handling first-level data TLB store misses.  As with ordinary data caching; focus on improving data locality and reducing working-set size to reduce DTLB overhead.  
Additionally; consider using profile-guided optimization (PGO) to collocate frequently-used data on the same page.  Try using larger page sizes for large amounts of frequently-used data. Sample with: MEM_INST_RETIRED.STLB_MISS_STORES_PS. Related metrics: tma_dtlb_load, tma_info_bottleneck_memory_data_tlbs, tma_info_bottleneck_memory_synchronization100%00tma_false_sharingBvMS;DataSharing;Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_store_bound_group28 * tma_info_system_core_frequency * cpu_core@OCR.DEMAND_RFO.L3_HIT.SNOOP_HITM@ / tma_info_thread_clkstma_false_sharing > 0.05 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))This metric roughly estimates how often CPU was handling synchronizations due to False SharingThis metric roughly estimates how often CPU was handling synchronizations due to False Sharing. False Sharing is a multithreading hiccup; where multiple Logical Processors contend on different data-elements mapped into the same cache line. Sample with: OCR.DEMAND_RFO.L3_HIT.SNOOP_HITM. Related metrics: tma_contested_accesses, tma_data_sharing, tma_machine_clears, tma_remote_cache100%00tma_fb_fullBvMS;MemoryBW;TopdownL4;tma_L4_group;tma_issueBW;tma_issueSL;tma_issueSmSt;tma_l1_bound_groupcpu_core@L1D_PEND_MISS.FB_FULL@ / tma_info_thread_clkstma_fb_full > 0.3This metric does a *rough estimation* of how often L1D Fill Buffer unavailability limited additional L1D miss memory access requests to proceedThis metric does a *rough estimation* of how often L1D Fill Buffer unavailability limited additional L1D miss memory access requests to proceed. The higher the metric value; the deeper the memory hierarchy level the misses are satisfied from (metric values >1 are valid). Often it hints on approaching bandwidth limits (to L2 cache; L3 cache or external memory). Related metrics: tma_info_bottleneck_cache_memory_bandwidth, tma_info_system_dram_bw_use, tma_mem_bandwidth, tma_sq_full, tma_store_latency, tma_streaming_stores100%00tma_fetch_bandwidthFetchBW;Frontend;TmaL2;TopdownL2;tma_L2_group;tma_frontend_bound_group;tma_issueFBmax(0, tma_frontend_bound - tma_fetch_latency)tma_fetch_bandwidth > 0.2This metric represents fraction of slots the CPU was stalled due to Frontend bandwidth issuesThis metric represents fraction of slots the CPU was stalled due to Frontend bandwidth issues.  For example; inefficiencies at the instruction decoders; or restrictions for caching in the DSB (decoded uops cache) are categorized under Fetch Bandwidth. In such cases; the Frontend typically delivers suboptimal amount of uops to the Backend. Sample with: FRONTEND_RETIRED.LATENCY_GE_2_BUBBLES_GE_1_PS;FRONTEND_RETIRED.LATENCY_GE_1_PS;FRONTEND_RETIRED.LATENCY_GE_2_PS. Related metrics: tma_dsb_switches, tma_info_botlnk_l2_dsb_bandwidth, tma_info_botlnk_l2_dsb_misses, tma_info_frontend_dsb_coverage, tma_info_inst_mix_iptb, tma_lcp100%TopdownL200tma_fetch_latencyFrontend;TmaL2;TopdownL2;tma_L2_group;tma_frontend_bound_groupcpu_core@topdown\-fetch\-lat@ / (cpu_core@topdown\-fe\-bound@ + cpu_core@topdown\-bad\-spec@ + cpu_core@topdown\-retiring@ + cpu_core@topdown\-be\-bound@) - cpu_core@INT_MISC.UOP_DROPPING@ / tma_info_thread_slotstma_fetch_latency > 0.1 & tma_frontend_bound > 0.15This metric represents fraction of slots the CPU was stalled due to Frontend latency issuesThis metric represents fraction of slots the CPU was stalled due to Frontend latency issues.  
For example; instruction-cache misses; iTLB misses or fetch stalls after a branch misprediction are categorized under Frontend Latency. In such cases; the Frontend eventually delivers no uops for some period. Sample with: FRONTEND_RETIRED.LATENCY_GE_16_PS;FRONTEND_RETIRED.LATENCY_GE_8_PS100%TopdownL200tma_few_uops_instructionsTopdownL3;tma_L3_group;tma_heavy_operations_group;tma_issueD0max(0, tma_heavy_operations - tma_microcode_sequencer)tma_few_uops_instructions > 0.05 & tma_heavy_operations > 0.1This metric represents fraction of slots where the CPU was retiring instructions that are decoded into two or up to ([SNB+] four; [ADL+] five) uopsThis metric represents fraction of slots where the CPU was retiring instructions that are decoded into two or up to ([SNB+] four; [ADL+] five) uops. This highly-correlates with the number of uops in such instructions. Related metrics: tma_decoder0_alone100%00tma_fp_arithHPC;TopdownL3;tma_L3_group;tma_light_operations_grouptma_x87_use + tma_fp_scalar + tma_fp_vectortma_fp_arith > 0.2 & tma_light_operations > 0.6This metric represents overall arithmetic floating-point (FP) operations fraction the CPU has executed (retired)This metric represents overall arithmetic floating-point (FP) operations fraction the CPU has executed (retired). Note this metric's value may exceed its parent due to use of "Uops" CountDomain and FMA double-counting100%00tma_fp_assistsHPC;TopdownL5;tma_L5_group;tma_assists_group30 * cpu_core@ASSISTS.FP@ / tma_info_thread_slotstma_fp_assists > 0.1This metric roughly estimates fraction of slots the CPU retired uops as a result of handling Floating Point (FP) AssistsThis metric roughly estimates fraction of slots the CPU retired uops as a result of handling Floating Point (FP) Assists. FP Assist may apply when working with very small floating point values (so-called Denormals)100%00tma_fp_scalarCompute;Flops;TopdownL4;tma_L4_group;tma_fp_arith_group;tma_issue2Pcpu_core@FP_ARITH_INST_RETIRED.SCALAR@ / (tma_retiring * tma_info_thread_slots)tma_fp_scalar > 0.1 & (tma_fp_arith > 0.2 & tma_light_operations > 0.6)This metric approximates arithmetic floating-point (FP) scalar uops fraction the CPU has retiredThis metric approximates arithmetic floating-point (FP) scalar uops fraction the CPU has retired. May overcount due to FMA double counting. Related metrics: tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_int_vector_128b, tma_int_vector_256b, tma_port_0, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2100%00tma_fp_vectorCompute;Flops;TopdownL4;tma_L4_group;tma_fp_arith_group;tma_issue2Pcpu_core@FP_ARITH_INST_RETIRED.VECTOR@ / (tma_retiring * tma_info_thread_slots)tma_fp_vector > 0.1 & (tma_fp_arith > 0.2 & tma_light_operations > 0.6)This metric approximates arithmetic floating-point (FP) vector uops fraction the CPU has retired aggregated across all vector widthsThis metric approximates arithmetic floating-point (FP) vector uops fraction the CPU has retired aggregated across all vector widths. May overcount due to FMA double counting. 
Related metrics: tma_fp_scalar, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_int_vector_128b, tma_int_vector_256b, tma_port_0, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2100%00tma_fp_vector_128bCompute;Flops;TopdownL5;tma_L5_group;tma_fp_vector_group;tma_issue2P(cpu_core@FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE@ + cpu_core@FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE@) / (tma_retiring * tma_info_thread_slots)tma_fp_vector_128b > 0.1 & (tma_fp_vector > 0.1 & (tma_fp_arith > 0.2 & tma_light_operations > 0.6))This metric approximates arithmetic FP vector uops fraction the CPU has retired for 128-bit wide vectorsThis metric approximates arithmetic FP vector uops fraction the CPU has retired for 128-bit wide vectors. May overcount due to FMA double counting. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_256b, tma_fp_vector_512b, tma_int_vector_128b, tma_int_vector_256b, tma_port_0, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2100%00tma_fp_vector_256bCompute;Flops;TopdownL5;tma_L5_group;tma_fp_vector_group;tma_issue2P(cpu_core@FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE@ + cpu_core@FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE@) / (tma_retiring * tma_info_thread_slots)tma_fp_vector_256b > 0.1 & (tma_fp_vector > 0.1 & (tma_fp_arith > 0.2 & tma_light_operations > 0.6))This metric approximates arithmetic FP vector uops fraction the CPU has retired for 256-bit wide vectorsThis metric approximates arithmetic FP vector uops fraction the CPU has retired for 256-bit wide vectors. May overcount due to FMA double counting. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_512b, tma_int_vector_128b, tma_int_vector_256b, tma_port_0, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2100%00tma_frontend_boundBvFB;BvIO;Default;PGO;TmaL1;TopdownL1;tma_L1_groupcpu_core@topdown\-fe\-bound@ / (cpu_core@topdown\-fe\-bound@ + cpu_core@topdown\-bad\-spec@ + cpu_core@topdown\-retiring@ + cpu_core@topdown\-be\-bound@) - cpu_core@INT_MISC.UOP_DROPPING@ / tma_info_thread_slotstma_frontend_bound > 0.15This category represents fraction of slots where the processor's Frontend undersupplies its BackendThis category represents fraction of slots where the processor's Frontend undersupplies its Backend. Frontend denotes the first part of the processor core responsible to fetch operations that are executed later on by the Backend part. Within the Frontend; a branch predictor predicts the next address to fetch; cache-lines are fetched from the memory subsystem; parsed into instructions; and lastly decoded into micro-operations (uops). Ideally the Frontend can issue Pipeline_Width uops every cycle to the Backend. Frontend Bound denotes unutilized issue-slots when there is no Backend stall; i.e. bubbles where Frontend delivered no uops while Backend could have accepted them. For example; stalls due to instruction-cache misses would be categorized under Frontend Bound. 
Sample with: FRONTEND_RETIRED.LATENCY_GE_4_PS100%TopdownL1;DefaultTopdownL100tma_fused_instructionsBranches;BvBO;Pipeline;TopdownL3;tma_L3_group;tma_light_operations_grouptma_light_operations * cpu_core@INST_RETIRED.MACRO_FUSED@ / (tma_retiring * tma_info_thread_slots)tma_fused_instructions > 0.1 & tma_light_operations > 0.6This metric represents fraction of slots where the CPU was retiring fused instructions -- where one uop can represent multiple contiguous instructionsThis metric represents fraction of slots where the CPU was retiring fused instructions -- where one uop can represent multiple contiguous instructions. CMP+JCC or DEC+JCC are common examples of legacy fusions. {([MTL] Note new MOV+OP and Load+OP fusions appear under Other_Light_Ops in MTL!)}100%00tma_heavy_operationsRetire;TmaL2;TopdownL2;tma_L2_group;tma_retiring_groupcpu_core@topdown\-heavy\-ops@ / (cpu_core@topdown\-fe\-bound@ + cpu_core@topdown\-bad\-spec@ + cpu_core@topdown\-retiring@ + cpu_core@topdown\-be\-bound@) + 0 * tma_info_thread_slotstma_heavy_operations > 0.1This metric represents fraction of slots where the CPU was retiring heavy-weight operations -- instructions that require two or more uops or micro-coded sequencesThis metric represents fraction of slots where the CPU was retiring heavy-weight operations -- instructions that require two or more uops or micro-coded sequences. This highly-correlates with the uop length of these instructions/sequences. ([ICL+] Note this may overcount due to approximation using indirect events; [ADL+] .). Sample with: UOPS_RETIRED.HEAVY100%TopdownL200tma_icache_missesBigFootprint;BvBC;FetchLat;IcMiss;TopdownL3;tma_L3_group;tma_fetch_latency_groupcpu_core@ICACHE_DATA.STALLS@ / tma_info_thread_clkstma_icache_misses > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15)This metric represents fraction of cycles the CPU was stalled due to instruction cache missesThis metric represents fraction of cycles the CPU was stalled due to instruction cache misses. Sample with: FRONTEND_RETIRED.L2_MISS_PS;FRONTEND_RETIRED.L1I_MISS_PS100%00tma_info_bad_spec_branch_misprediction_costBad;BrMispredicts;tma_issueBMtma_info_bottleneck_mispredictions * tma_info_thread_slots / cpu_core@BR_MISP_RETIRED.ALL_BRANCHES@ / 100Branch Misprediction Cost: Fraction of TMA slots wasted per non-speculative branch misprediction (retired JEClear)Branch Misprediction Cost: Fraction of TMA slots wasted per non-speculative branch misprediction (retired JEClear). 
Related metrics: tma_branch_mispredicts, tma_info_bottleneck_mispredictions, tma_mispredicts_resteers00tma_info_bad_spec_ipmisp_cond_ntakenBad;BrMispredictscpu_core@INST_RETIRED.ANY@ / cpu_core@BR_MISP_RETIRED.COND_NTAKEN@tma_info_bad_spec_ipmisp_cond_ntaken < 200Instructions per retired mispredicts for conditional non-taken branches (lower number means higher occurrence rate)00tma_info_bad_spec_ipmisp_cond_takenBad;BrMispredictscpu_core@INST_RETIRED.ANY@ / cpu_core@BR_MISP_RETIRED.COND_TAKEN@tma_info_bad_spec_ipmisp_cond_taken < 200Instructions per retired mispredicts for conditional taken branches (lower number means higher occurrence rate)00tma_info_bad_spec_ipmisp_indirectBad;BrMispredictscpu_core@INST_RETIRED.ANY@ / cpu_core@BR_MISP_RETIRED.INDIRECT@tma_info_bad_spec_ipmisp_indirect < 1e3Instructions per retired mispredicts for indirect CALL or JMP branches (lower number means higher occurrence rate)00tma_info_bad_spec_ipmisp_retBad;BrMispredictscpu_core@INST_RETIRED.ANY@ / cpu_core@BR_MISP_RETIRED.RET@tma_info_bad_spec_ipmisp_ret < 500Instructions per retired mispredicts for return branches (lower number means higher occurrence rate)00tma_info_bad_spec_ipmispredictBad;BadSpec;BrMispredictscpu_core@INST_RETIRED.ANY@ / cpu_core@BR_MISP_RETIRED.ALL_BRANCHES@tma_info_bad_spec_ipmispredict < 200Number of Instructions per non-speculative Branch Misprediction (JEClear) (lower number means higher occurrence rate)00tma_info_bad_spec_spec_clears_ratioBrMispredictscpu_core@INT_MISC.CLEARS_COUNT@ / (cpu_core@BR_MISP_RETIRED.ALL_BRANCHES@ + cpu_core@MACHINE_CLEARS.COUNT@)Speculative to Retired ratio of all clears (covering mispredicts and nukes)00tma_info_botlnk_l0_core_bound_likelyCor;SMT(100 * (1 - tma_core_bound / tma_ports_utilization if tma_core_bound < tma_ports_utilization else 1) if tma_info_system_smt_2t_utilization > 0.5 else 0)tma_info_botlnk_l0_core_bound_likely > 0.5Probability of Core Bound bottleneck hidden by SMT-profiling artifacts00tma_info_botlnk_l2_dsb_bandwidthDSB;FetchBW;tma_issueFB100 * (tma_frontend_bound * (tma_fetch_bandwidth / (tma_fetch_bandwidth + tma_fetch_latency)) * (tma_dsb / (tma_dsb + tma_lsd + tma_mite)))tma_info_botlnk_l2_dsb_bandwidth > 10Total pipeline cost of DSB (uop cache) hits - subset of the Instruction_Fetch_BW BottleneckTotal pipeline cost of DSB (uop cache) hits - subset of the Instruction_Fetch_BW Bottleneck. Related metrics: tma_dsb_switches, tma_fetch_bandwidth, tma_info_botlnk_l2_dsb_misses, tma_info_frontend_dsb_coverage, tma_info_inst_mix_iptb, tma_lcp00tma_info_botlnk_l2_dsb_missesDSBmiss;Fed;tma_issueFB100 * (tma_fetch_latency * tma_dsb_switches / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches) + tma_fetch_bandwidth * tma_mite / (tma_dsb + tma_lsd + tma_mite))tma_info_botlnk_l2_dsb_misses > 10Total pipeline cost of DSB (uop cache) misses - subset of the Instruction_Fetch_BW BottleneckTotal pipeline cost of DSB (uop cache) misses - subset of the Instruction_Fetch_BW Bottleneck. 
Related metrics: tma_dsb_switches, tma_fetch_bandwidth, tma_info_botlnk_l2_dsb_bandwidth, tma_info_frontend_dsb_coverage, tma_info_inst_mix_iptb, tma_lcp00tma_info_botlnk_l2_ic_missesFed;FetchLat;IcMiss;tma_issueFL100 * (tma_fetch_latency * tma_icache_misses / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches))tma_info_botlnk_l2_ic_misses > 5Total pipeline cost of Instruction Cache misses - subset of the Big_Code BottleneckTotal pipeline cost of Instruction Cache misses - subset of the Big_Code Bottleneck. Related metrics: 00tma_info_bottleneck_big_codeBigFootprint;BvBC;Fed;Frontend;IcMiss;MemoryTLB100 * tma_fetch_latency * (tma_itlb_misses + tma_icache_misses + tma_unknown_branches) / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches)tma_info_bottleneck_big_code > 20Total pipeline cost of instruction fetch related bottlenecks by large code footprint programs (i-side cache; TLB and BTB misses)00tma_info_bottleneck_branching_overheadBvBO;Ret100 * ((cpu_core@BR_INST_RETIRED.ALL_BRANCHES@ + 2 * cpu_core@BR_INST_RETIRED.NEAR_CALL@ + cpu_core@INST_RETIRED.NOP@) / tma_info_thread_slots)tma_info_bottleneck_branching_overhead > 5Total pipeline cost of instructions used for program control-flow - a subset of the Retiring category in TMATotal pipeline cost of instructions used for program control-flow - a subset of the Retiring category in TMA. Examples include function calls; loops and alignments. (A lower bound)00tma_info_bottleneck_cache_memory_bandwidthBvMB;Mem;MemoryBW;Offcore;tma_issueBW100 * (tma_memory_bound * (tma_dram_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_mem_bandwidth / (tma_mem_bandwidth + tma_mem_latency)) + tma_memory_bound * (tma_l3_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_sq_full / (tma_contested_accesses + tma_data_sharing + tma_l3_hit_latency + tma_sq_full)) + tma_memory_bound * (tma_l1_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_fb_full / (tma_dtlb_load + tma_fb_full + tma_l1_hit_latency + tma_lock_latency + tma_split_loads + tma_store_fwd_blk)))tma_info_bottleneck_cache_memory_bandwidth > 20Total pipeline cost of external Memory- or Cache-Bandwidth related bottlenecksTotal pipeline cost of external Memory- or Cache-Bandwidth related bottlenecks. 
Related metrics: tma_fb_full, tma_info_system_dram_bw_use, tma_mem_bandwidth, tma_sq_full00tma_info_bottleneck_cache_memory_latencyBvML;Mem;MemoryLat;Offcore;tma_issueLat100 * (tma_memory_bound * (tma_dram_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_mem_latency / (tma_mem_bandwidth + tma_mem_latency)) + tma_memory_bound * (tma_l3_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_l3_hit_latency / (tma_contested_accesses + tma_data_sharing + tma_l3_hit_latency + tma_sq_full)) + tma_memory_bound * tma_l2_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound) + tma_memory_bound * (tma_store_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_store_latency / (tma_dtlb_store + tma_false_sharing + tma_split_stores + tma_store_latency + tma_streaming_stores)) + tma_memory_bound * (tma_l1_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_l1_hit_latency / (tma_dtlb_load + tma_fb_full + tma_l1_hit_latency + tma_lock_latency + tma_split_loads + tma_store_fwd_blk)))tma_info_bottleneck_cache_memory_latency > 20Total pipeline cost of external Memory- or Cache-Latency related bottlenecksTotal pipeline cost of external Memory- or Cache-Latency related bottlenecks. Related metrics: tma_l3_hit_latency, tma_mem_latency00tma_info_bottleneck_compute_bound_estBvCB;Cor;tma_issueComp100 * (tma_core_bound * tma_divider / (tma_divider + tma_ports_utilization + tma_serializing_operation) + tma_core_bound * (tma_ports_utilization / (tma_divider + tma_ports_utilization + tma_serializing_operation)) * (tma_ports_utilized_3m / (tma_ports_utilized_0 + tma_ports_utilized_1 + tma_ports_utilized_2 + tma_ports_utilized_3m)))tma_info_bottleneck_compute_bound_est > 20Total pipeline cost when the execution is compute-bound - an estimationTotal pipeline cost when the execution is compute-bound - an estimation. Covers Core Bound when High ILP as well as when long-latency execution units are busy. 
Related metrics: 00tma_info_bottleneck_instruction_fetch_bwBvFB;Fed;FetchBW;Frontend100 * (tma_frontend_bound - (1 - 10 * tma_microcode_sequencer * tma_other_mispredicts / tma_branch_mispredicts) * tma_fetch_latency * tma_mispredicts_resteers / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches) - (1 - cpu_core@INST_RETIRED.REP_ITERATION@ / cpu_core@UOPS_RETIRED.MS\,cmask\=1@) * (tma_fetch_latency * (tma_ms_switches + tma_branch_resteers * (tma_clears_resteers + tma_mispredicts_resteers * tma_other_mispredicts / tma_branch_mispredicts) / (tma_clears_resteers + tma_mispredicts_resteers + tma_unknown_branches)) / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches))) - tma_info_bottleneck_big_codetma_info_bottleneck_instruction_fetch_bw > 20Total pipeline cost of instruction fetch bandwidth related bottlenecks (when the front-end could not sustain operations delivery to the back-end)00tma_info_bottleneck_irregular_overheadBad;BvIO;Cor;Ret;tma_issueMS100 * ((1 - cpu_core@INST_RETIRED.REP_ITERATION@ / cpu_core@UOPS_RETIRED.MS\,cmask\=1@) * (tma_fetch_latency * (tma_ms_switches + tma_branch_resteers * (tma_clears_resteers + tma_mispredicts_resteers * tma_other_mispredicts / tma_branch_mispredicts) / (tma_clears_resteers + tma_mispredicts_resteers + tma_unknown_branches)) / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches)) + 10 * tma_microcode_sequencer * tma_other_mispredicts / tma_branch_mispredicts * tma_branch_mispredicts + tma_machine_clears * tma_other_nukes / tma_other_nukes + tma_core_bound * (tma_serializing_operation + cpu_core@RS.EMPTY\,umask\=1@ / tma_info_thread_clks * tma_ports_utilized_0) / (tma_divider + tma_ports_utilization + tma_serializing_operation) + tma_microcode_sequencer / (tma_few_uops_instructions + tma_microcode_sequencer) * (tma_assists / tma_microcode_sequencer) * tma_heavy_operations)tma_info_bottleneck_irregular_overhead > 10Total pipeline cost of irregular execution (e.gTotal pipeline cost of irregular execution (e.g. FP-assists in HPC, Wait time with work imbalance multithreaded workloads, overhead in system services or virtualized environments). Related metrics: tma_microcode_sequencer, tma_ms_switches00tma_info_bottleneck_memory_data_tlbsBvMT;Mem;MemoryTLB;Offcore;tma_issueTLB100 * (tma_memory_bound * (tma_l1_bound / max(tma_memory_bound, tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_dtlb_load / max(tma_l1_bound, tma_dtlb_load + tma_fb_full + tma_l1_hit_latency + tma_lock_latency + tma_split_loads + tma_store_fwd_blk)) + tma_memory_bound * (tma_store_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_dtlb_store / (tma_dtlb_store + tma_false_sharing + tma_split_stores + tma_store_latency + tma_streaming_stores)))tma_info_bottleneck_memory_data_tlbs > 20Total pipeline cost of Memory Address Translation related bottlenecks (data-side TLBs)Total pipeline cost of Memory Address Translation related bottlenecks (data-side TLBs). 
Related metrics: tma_dtlb_load, tma_dtlb_store, tma_info_bottleneck_memory_synchronization00tma_info_bottleneck_memory_synchronizationBvMS;Mem;Offcore;tma_issueTLB100 * (tma_memory_bound * (tma_l3_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound) * (tma_contested_accesses + tma_data_sharing) / (tma_contested_accesses + tma_data_sharing + tma_l3_hit_latency + tma_sq_full) + tma_store_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound) * tma_false_sharing / (tma_dtlb_store + tma_false_sharing + tma_split_stores + tma_store_latency + tma_streaming_stores - tma_store_latency)) + tma_machine_clears * (1 - tma_other_nukes / tma_other_nukes))tma_info_bottleneck_memory_synchronization > 10Total pipeline cost of Memory Synchronization related bottlenecks (data transfers and coherency updates across processors)Total pipeline cost of Memory Synchronization related bottlenecks (data transfers and coherency updates across processors). Related metrics: tma_dtlb_load, tma_dtlb_store, tma_info_bottleneck_memory_data_tlbs00tma_info_bottleneck_mispredictionsBad;BadSpec;BrMispredicts;BvMP;tma_issueBM100 * (1 - 10 * tma_microcode_sequencer * tma_other_mispredicts / tma_branch_mispredicts) * (tma_branch_mispredicts + tma_fetch_latency * tma_mispredicts_resteers / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches))tma_info_bottleneck_mispredictions > 20Total pipeline cost of Branch Misprediction related bottlenecksTotal pipeline cost of Branch Misprediction related bottlenecks. Related metrics: tma_branch_mispredicts, tma_info_bad_spec_branch_misprediction_cost, tma_mispredicts_resteers00tma_info_bottleneck_other_bottlenecksBvOB;Cor;Offcore100 - (tma_info_bottleneck_big_code + tma_info_bottleneck_instruction_fetch_bw + tma_info_bottleneck_mispredictions + tma_info_bottleneck_cache_memory_bandwidth + tma_info_bottleneck_cache_memory_latency + tma_info_bottleneck_memory_data_tlbs + tma_info_bottleneck_memory_synchronization + tma_info_bottleneck_compute_bound_est + tma_info_bottleneck_irregular_overhead + tma_info_bottleneck_branching_overhead + tma_info_bottleneck_useful_work)tma_info_bottleneck_other_bottlenecks > 20Total pipeline cost of remaining bottlenecks in the back-endTotal pipeline cost of remaining bottlenecks in the back-end. 
Examples include data-dependencies (Core Bound when Low ILP) and other unlisted memory-related stalls00tma_info_bottleneck_useful_workBvUW;Ret100 * (tma_retiring - (cpu_core@BR_INST_RETIRED.ALL_BRANCHES@ + 2 * cpu_core@BR_INST_RETIRED.NEAR_CALL@ + cpu_core@INST_RETIRED.NOP@) / tma_info_thread_slots - tma_microcode_sequencer / (tma_few_uops_instructions + tma_microcode_sequencer) * (tma_assists / tma_microcode_sequencer) * tma_heavy_operations)tma_info_bottleneck_useful_work > 20Total pipeline cost of "useful operations" - the portion of Retiring category not covered by Branching_Overhead nor Irregular_Overhead00tma_info_branches_callretBad;Branches(cpu_core@BR_INST_RETIRED.NEAR_CALL@ + cpu_core@BR_INST_RETIRED.NEAR_RETURN@) / cpu_core@BR_INST_RETIRED.ALL_BRANCHES@Fraction of branches that are CALL or RET00tma_info_branches_cond_ntBad;Branches;CodeGen;PGOcpu_core@BR_INST_RETIRED.COND_NTAKEN@ / cpu_core@BR_INST_RETIRED.ALL_BRANCHES@Fraction of branches that are non-taken conditionals00tma_info_branches_cond_tkBad;Branches;CodeGen;PGOcpu_core@BR_INST_RETIRED.COND_TAKEN@ / cpu_core@BR_INST_RETIRED.ALL_BRANCHES@Fraction of branches that are taken conditionals00tma_info_branches_jumpBad;Branches(cpu_core@BR_INST_RETIRED.NEAR_TAKEN@ - cpu_core@BR_INST_RETIRED.COND_TAKEN@ - 2 * cpu_core@BR_INST_RETIRED.NEAR_CALL@) / cpu_core@BR_INST_RETIRED.ALL_BRANCHES@Fraction of branches that are unconditional (direct or indirect) jumps00tma_info_branches_other_branchesBad;Branches1 - (tma_info_branches_cond_nt + tma_info_branches_cond_tk + tma_info_branches_callret + tma_info_branches_jump)Fraction of branches of other types (not individually covered by other metrics in Info.Branches group)00tma_info_core_core_clksSMT(cpu_core@CPU_CLK_UNHALTED.DISTRIBUTED@ if #SMT_on else tma_info_thread_clks)Core actual clocks when any Logical Processor is active on the Physical Core00tma_info_core_coreipcRet;SMT;TmaL1;tma_L1_groupcpu_core@INST_RETIRED.ANY@ / tma_info_core_core_clksInstructions Per Cycle across hyper-threads (per physical core)00tma_info_core_epcPowercpu_core@UOPS_EXECUTED.THREAD@ / tma_info_thread_clksuops Executed per Cycle00tma_info_core_flopcFlops;Ret(cpu_core@FP_ARITH_INST_RETIRED.SCALAR@ + 2 * cpu_core@FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE@ + 4 * cpu_core@FP_ARITH_INST_RETIRED.4_FLOPS@ + 8 * cpu_core@FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE@) / tma_info_core_core_clksFloating Point Operations Per Cycle00tma_info_core_fp_arith_utilizationCor;Flops;HPC(cpu_core@FP_ARITH_DISPATCHED.PORT_0@ + cpu_core@FP_ARITH_DISPATCHED.PORT_1@ + cpu_core@FP_ARITH_DISPATCHED.PORT_5@) / (2 * tma_info_core_core_clks)Actual per-core usage of the Floating Point non-X87 execution units (regardless of precision or vector-width)Actual per-core usage of the Floating Point non-X87 execution units (regardless of precision or vector-width). 
Values > 1 are possible due to ([BDW+] Fused-Multiply Add (FMA) counting - common; [ADL+] use all of ADD/MUL/FMA in Scalar or 128/256-bit vectors - less common)00tma_info_core_ilpBackend;Cor;Pipeline;PortsUtilcpu_core@UOPS_EXECUTED.THREAD@ / cpu_core@UOPS_EXECUTED.THREAD\,cmask\=1@Instruction-Level-Parallelism (average number of uops executed when there is execution) per thread (logical-processor)00tma_info_frontend_dsb_coverageDSB;Fed;FetchBW;tma_issueFBcpu_core@IDQ.DSB_UOPS@ / cpu_core@UOPS_ISSUED.ANY@tma_info_frontend_dsb_coverage < 0.7 & tma_info_thread_ipc / 6 > 0.35Fraction of Uops delivered by the DSB (aka Decoded ICache; or Uop Cache)Fraction of Uops delivered by the DSB (aka Decoded ICache; or Uop Cache). Related metrics: tma_dsb_switches, tma_fetch_bandwidth, tma_info_botlnk_l2_dsb_bandwidth, tma_info_botlnk_l2_dsb_misses, tma_info_inst_mix_iptb, tma_lcp00tma_info_frontend_dsb_switch_costDSBmisscpu_core@DSB2MITE_SWITCHES.PENALTY_CYCLES@ / cpu_core@DSB2MITE_SWITCHES.PENALTY_CYCLES\,cmask\=1\,edge@Average number of cycles of a switch from the DSB fetch-unit to MITE fetch unit - see DSB_Switches tree node for details00tma_info_frontend_fetch_upcFed;FetchBWcpu_core@UOPS_ISSUED.ANY@ / cpu_core@UOPS_ISSUED.ANY\,cmask\=1@Average number of Uops issued by front-end when it issued something00tma_info_frontend_icache_miss_latencyFed;FetchLat;IcMisscpu_core@ICACHE_DATA.STALLS@ / cpu_core@ICACHE_DATA.STALLS\,cmask\=1\,edge@Average Latency for L1 instruction cache misses00tma_info_frontend_ipdsb_miss_retDSBmiss;Fedcpu_core@INST_RETIRED.ANY@ / cpu_core@FRONTEND_RETIRED.ANY_DSB_MISS@tma_info_frontend_ipdsb_miss_ret < 50Instructions per non-speculative DSB miss (lower number means higher occurrence rate)00tma_info_frontend_ipunknown_branchFedtma_info_inst_mix_instructions / cpu_core@BACLEARS.ANY@Instructions per speculative Unknown Branch Misprediction (BAClear) (lower number means higher occurrence rate)00tma_info_frontend_l2mpki_codeIcMiss1e3 * cpu_core@FRONTEND_RETIRED.L2_MISS@ / cpu_core@INST_RETIRED.ANY@L2 cache true code cacheline misses per kilo instruction00tma_info_frontend_l2mpki_code_allIcMiss1e3 * cpu_core@L2_RQSTS.CODE_RD_MISS@ / cpu_core@INST_RETIRED.ANY@L2 cache speculative code cacheline misses per kilo instruction00tma_info_frontend_lsd_coverageFed;LSDcpu_core@LSD.UOPS@ / cpu_core@UOPS_ISSUED.ANY@Fraction of Uops delivered by the LSD (Loop Stream Detector; aka Loop Cache)00tma_info_frontend_unknown_branch_costFedcpu_core@INT_MISC.UNKNOWN_BRANCH_CYCLES@ / cpu_core@INT_MISC.UNKNOWN_BRANCH_CYCLES\,cmask\=1\,edge@Average number of cycles the front-end was delayed due to an Unknown Branch detectionAverage number of cycles the front-end was delayed due to an Unknown Branch detection. See Unknown_Branches node00tma_info_inst_mix_bptkbranchBranches;Fed;PGOcpu_core@BR_INST_RETIRED.ALL_BRANCHES@ / cpu_core@BR_INST_RETIRED.NEAR_TAKEN@Branch instructions per taken branch00tma_info_inst_mix_instructionsSummary;TmaL1;tma_L1_groupcpu_core@INST_RETIRED.ANY@Total number of retired InstructionsTotal number of retired Instructions. Sample with: INST_RETIRED.PREC_DIST00tma_info_inst_mix_iparithFlops;InsTypecpu_core@INST_RETIRED.ANY@ / (cpu_core@FP_ARITH_INST_RETIRED.SCALAR@ + cpu_core@FP_ARITH_INST_RETIRED.VECTOR@)tma_info_inst_mix_iparith < 10Instructions per FP Arithmetic instruction (lower number means higher occurrence rate)Instructions per FP Arithmetic instruction (lower number means higher occurrence rate). Values < 1 are possible due to intentional FMA double counting. 
Approximated prior to BDW00tma_info_inst_mix_iparith_avx128Flops;FpVector;InsTypecpu_core@INST_RETIRED.ANY@ / (cpu_core@FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE@ + cpu_core@FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE@)tma_info_inst_mix_iparith_avx128 < 10Instructions per FP Arithmetic AVX/SSE 128-bit instruction (lower number means higher occurrence rate)Instructions per FP Arithmetic AVX/SSE 128-bit instruction (lower number means higher occurrence rate). Values < 1 are possible due to intentional FMA double counting00tma_info_inst_mix_iparith_avx256Flops;FpVector;InsTypecpu_core@INST_RETIRED.ANY@ / (cpu_core@FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE@ + cpu_core@FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE@)tma_info_inst_mix_iparith_avx256 < 10Instructions per FP Arithmetic AVX* 256-bit instruction (lower number means higher occurrence rate)Instructions per FP Arithmetic AVX* 256-bit instruction (lower number means higher occurrence rate). Values < 1 are possible due to intentional FMA double counting00tma_info_inst_mix_iparith_scalar_dpFlops;FpScalar;InsTypecpu_core@INST_RETIRED.ANY@ / cpu_core@FP_ARITH_INST_RETIRED.SCALAR_DOUBLE@tma_info_inst_mix_iparith_scalar_dp < 10Instructions per FP Arithmetic Scalar Double-Precision instruction (lower number means higher occurrence rate)Instructions per FP Arithmetic Scalar Double-Precision instruction (lower number means higher occurrence rate). Values < 1 are possible due to intentional FMA double counting00tma_info_inst_mix_iparith_scalar_spFlops;FpScalar;InsTypecpu_core@INST_RETIRED.ANY@ / cpu_core@FP_ARITH_INST_RETIRED.SCALAR_SINGLE@tma_info_inst_mix_iparith_scalar_sp < 10Instructions per FP Arithmetic Scalar Single-Precision instruction (lower number means higher occurrence rate)Instructions per FP Arithmetic Scalar Single-Precision instruction (lower number means higher occurrence rate). 
Values < 1 are possible due to intentional FMA double counting00tma_info_inst_mix_ipbranchBranches;Fed;InsTypecpu_core@INST_RETIRED.ANY@ / cpu_core@BR_INST_RETIRED.ALL_BRANCHES@tma_info_inst_mix_ipbranch < 8Instructions per Branch (lower number means higher occurrence rate)00tma_info_inst_mix_ipcallBranches;Fed;PGOcpu_core@INST_RETIRED.ANY@ / cpu_core@BR_INST_RETIRED.NEAR_CALL@tma_info_inst_mix_ipcall < 200Instructions per (near) call (lower number means higher occurrence rate)00tma_info_inst_mix_ipflopFlops;InsTypecpu_core@INST_RETIRED.ANY@ / (cpu_core@FP_ARITH_INST_RETIRED.SCALAR@ + 2 * cpu_core@FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE@ + 4 * cpu_core@FP_ARITH_INST_RETIRED.4_FLOPS@ + 8 * cpu_core@FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE@)tma_info_inst_mix_ipflop < 10Instructions per Floating Point (FP) Operation (lower number means higher occurrence rate)00tma_info_inst_mix_iploadInsTypecpu_core@INST_RETIRED.ANY@ / cpu_core@MEM_INST_RETIRED.ALL_LOADS@tma_info_inst_mix_ipload < 3Instructions per Load (lower number means higher occurrence rate)00tma_info_inst_mix_ippauseFlops;FpVector;InsTypetma_info_inst_mix_instructions / cpu_core@CPU_CLK_UNHALTED.PAUSE_INST@Instructions per PAUSE (lower number means higher occurrence rate)00tma_info_inst_mix_ipstoreInsTypecpu_core@INST_RETIRED.ANY@ / cpu_core@MEM_INST_RETIRED.ALL_STORES@tma_info_inst_mix_ipstore < 8Instructions per Store (lower number means higher occurrence rate)00tma_info_inst_mix_ipswpfPrefetchescpu_core@INST_RETIRED.ANY@ / cpu_core@SW_PREFETCH_ACCESS.T0\,umask\=0xF@tma_info_inst_mix_ipswpf < 100Instructions per Software prefetch instruction (of any type: NTA/T0/T1/T2/Prefetch) (lower number means higher occurrence rate)00tma_info_inst_mix_iptbBranches;Fed;FetchBW;Frontend;PGO;tma_issueFBcpu_core@INST_RETIRED.ANY@ / cpu_core@BR_INST_RETIRED.NEAR_TAKEN@tma_info_inst_mix_iptb < 13Instructions per taken branchInstructions per taken branch. 
tma_info_memory_core_l1d_cache_fill_bw_2t [Mem;MemoryBW] = tma_info_memory_l1d_cache_fill_bw -- Average per-core data fill bandwidth to the L1 data cache [GB / sec].
tma_info_memory_core_l2_cache_fill_bw_2t [Mem;MemoryBW] = tma_info_memory_l2_cache_fill_bw -- Average per-core data fill bandwidth to the L2 cache [GB / sec].
tma_info_memory_core_l3_cache_access_bw_2t [Mem;MemoryBW;Offcore] = tma_info_memory_l3_cache_access_bw -- Average per-core data access bandwidth to the L3 cache [GB / sec].
tma_info_memory_core_l3_cache_fill_bw_2t [Mem;MemoryBW] = tma_info_memory_l3_cache_fill_bw -- Average per-core data fill bandwidth to the L3 cache [GB / sec].
tma_info_memory_fb_hpki [CacheHits;Mem] = 1e3 * MEM_LOAD_RETIRED.FB_HIT / INST_RETIRED.ANY -- Fill Buffer (FB) hits per kilo instructions for retired demand loads (L1D misses that merge into ongoing miss-handling entries).
tma_info_memory_l1d_cache_fill_bw [Mem;MemoryBW] = 64 * L1D.REPLACEMENT / 1e9 / duration_time -- Average per-thread data fill bandwidth to the L1 data cache [GB / sec].
tma_info_memory_l1mpki [CacheHits;Mem] = 1e3 * MEM_LOAD_RETIRED.L1_MISS / INST_RETIRED.ANY -- L1 cache true misses per kilo instruction for retired demand loads.
tma_info_memory_l1mpki_load [CacheHits;Mem] = 1e3 * L2_RQSTS.ALL_DEMAND_DATA_RD / INST_RETIRED.ANY -- L1 cache true misses per kilo instruction for all demand loads (including speculative).
tma_info_memory_l2_cache_fill_bw [Mem;MemoryBW] = 64 * L2_LINES_IN.ALL / 1e9 / duration_time -- Average per-thread data fill bandwidth to the L2 cache [GB / sec].
tma_info_memory_l2hpki_all [CacheHits;Mem] = 1e3 * (L2_RQSTS.REFERENCES - L2_RQSTS.MISS) / INST_RETIRED.ANY -- L2 cache hits per kilo instruction for all request types (including speculative).
tma_info_memory_l2hpki_load [CacheHits;Mem] = 1e3 * L2_RQSTS.DEMAND_DATA_RD_HIT / INST_RETIRED.ANY -- L2 cache hits per kilo instruction for all demand loads (including speculative).
tma_info_memory_l2mpki [Backend;CacheHits;Mem] = 1e3 * MEM_LOAD_RETIRED.L2_MISS / INST_RETIRED.ANY -- L2 cache true misses per kilo instruction for retired demand loads.
tma_info_memory_l2mpki_all [CacheHits;Mem;Offcore] = 1e3 * L2_RQSTS.MISS / INST_RETIRED.ANY -- L2 cache ([RKL+] true) misses per kilo instruction for all request types (including speculative).
tma_info_memory_l2mpki_load [CacheHits;Mem] = 1e3 * L2_RQSTS.DEMAND_DATA_RD_MISS / INST_RETIRED.ANY -- L2 cache ([RKL+] true) misses per kilo instruction for all demand loads (including speculative).
tma_info_memory_l2mpki_rfo [CacheMisses;Offcore] = 1e3 * L2_RQSTS.RFO_MISS / INST_RETIRED.ANY -- Offcore requests (L2 cache miss) per kilo instruction for demand RFOs.
tma_info_memory_l3_cache_access_bw [Mem;MemoryBW;Offcore] = 64 * OFFCORE_REQUESTS.ALL_REQUESTS / 1e9 / duration_time -- Average per-thread data access bandwidth to the L3 cache [GB / sec].
tma_info_memory_l3_cache_fill_bw [Mem;MemoryBW] = 64 * LONGEST_LAT_CACHE.MISS / 1e9 / duration_time -- Average per-thread data fill bandwidth to the L3 cache [GB / sec].
tma_info_memory_l3mpki [Mem] = 1e3 * MEM_LOAD_RETIRED.L3_MISS / INST_RETIRED.ANY -- L3 cache true misses per kilo instruction for retired demand loads.
tma_info_memory_latency_data_l2_mlp [Memory_BW;Offcore] = OFFCORE_REQUESTS_OUTSTANDING.DATA_RD / OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DATA_RD -- Average Parallel L2 cache miss data reads.
tma_info_memory_latency_load_l2_miss_latency [Memory_Lat;Offcore] = OFFCORE_REQUESTS_OUTSTANDING.DEMAND_DATA_RD / OFFCORE_REQUESTS.DEMAND_DATA_RD -- Average Latency for L2 cache miss demand Loads.
tma_info_memory_latency_load_l2_mlp [Memory_BW;Offcore] = OFFCORE_REQUESTS_OUTSTANDING.DEMAND_DATA_RD / OFFCORE_REQUESTS_OUTSTANDING.DEMAND_DATA_RD,cmask=1 -- Average Parallel L2 cache miss demand Loads.
tma_info_memory_latency_load_l3_miss_latency [Memory_Lat;Offcore] = OFFCORE_REQUESTS_OUTSTANDING.L3_MISS_DEMAND_DATA_RD / OFFCORE_REQUESTS.L3_MISS_DEMAND_DATA_RD -- Average Latency for L3 cache miss demand Loads.
tma_info_memory_load_miss_real_latency [Mem;MemoryBound;MemoryLat] = L1D_PEND_MISS.PENDING / MEM_LOAD_COMPLETED.L1_MISS_ANY -- Actual Average Latency for L1 data-cache miss demand load operations (in core cycles).
tma_info_memory_mix_bus_lock_pki [Mem] = 1e3 * SQ_MISC.BUS_LOCK / INST_RETIRED.ANY -- "Bus lock" per kilo instruction.
tma_info_memory_mix_uc_load_pki [Mem] = 1e3 * MEM_LOAD_MISC_RETIRED.UC / INST_RETIRED.ANY -- Un-cacheable retired load per kilo instruction.
tma_info_memory_mlp [Mem;MemoryBW;MemoryBound] = L1D_PEND_MISS.PENDING / L1D_PEND_MISS.PENDING_CYCLES -- Memory-Level-Parallelism: average number of L1 miss demand loads when there is at least one such miss (per Logical Processor).
tma_info_memory_tlb_code_stlb_mpki [Fed;MemoryTLB] = 1e3 * ITLB_MISSES.WALK_COMPLETED / INST_RETIRED.ANY -- STLB (2nd level TLB) code speculative misses per kilo instruction (misses of any page-size that complete the page walk).
tma_info_memory_tlb_load_stlb_mpki [Mem;MemoryTLB] = 1e3 * DTLB_LOAD_MISSES.WALK_COMPLETED / INST_RETIRED.ANY -- STLB (2nd level TLB) data load speculative misses per kilo instruction (misses of any page-size that complete the page walk).
tma_info_memory_tlb_page_walks_utilization [Mem;MemoryTLB] = (ITLB_MISSES.WALK_PENDING + DTLB_LOAD_MISSES.WALK_PENDING + DTLB_STORE_MISSES.WALK_PENDING) / (4 * tma_info_core_core_clks) {> 0.5} -- Utilization of the core's Page Walker(s) serving STLB misses triggered by instruction/Load/Store accesses.
tma_info_memory_tlb_store_stlb_mpki [Mem;MemoryTLB] = 1e3 * DTLB_STORE_MISSES.WALK_COMPLETED / INST_RETIRED.ANY -- STLB (2nd level TLB) data store speculative misses per kilo instruction (misses of any page-size that complete the page walk).
tma_info_pipeline_execute [Cor;Pipeline;PortsUtil;SMT] = UOPS_EXECUTED.THREAD / (UOPS_EXECUTED.CORE_CYCLES_GE_1 / 2 if #SMT_on else UOPS_EXECUTED.THREAD,cmask=1) -- Instruction-Level-Parallelism (average number of uops executed when there is execution) per core.
tma_info_pipeline_fetch_dsb [Fed;FetchBW] = IDQ.DSB_UOPS / IDQ.DSB_CYCLES_ANY -- Average number of uops fetched from DSB per cycle.
tma_info_pipeline_fetch_lsd [Fed;FetchBW] = LSD.UOPS / LSD.CYCLES_ACTIVE -- Average number of uops fetched from LSD per cycle.
tma_info_pipeline_fetch_mite [Fed;FetchBW] = IDQ.MITE_UOPS / IDQ.MITE_CYCLES_ANY -- Average number of uops fetched from MITE per cycle.
tma_info_pipeline_ipassist [MicroSeq;Pipeline;Ret;Retire] = INST_RETIRED.ANY / ASSISTS.ANY {< 100e3} -- Instructions per microcode Assist invocation. See Assists tree node for details (lower number means higher occurrence rate).
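The bandwidth and MPKI records above follow two fixed shapes: 64 bytes per filled cache line divided by wall-clock seconds, and miss counts scaled per thousand instructions. A small Python sketch with hypothetical inputs:

    # Hypothetical sample values, not real measurements.
    l1d_replacement = 3_000_000_000   # L1D.REPLACEMENT
    duration_time = 2.0               # wall-clock seconds

    # 64 * L1D.REPLACEMENT / 1e9 / duration_time  ->  GB / sec
    l1d_fill_bw = 64 * l1d_replacement / 1e9 / duration_time
    print(f"L1D fill bandwidth: {l1d_fill_bw:.1f} GB/s")

    # MPKI metrics share one shape: 1e3 * <miss event> / INST_RETIRED.ANY
    l1_miss = 40_000_000              # MEM_LOAD_RETIRED.L1_MISS
    inst_retired = 10_000_000_000     # INST_RETIRED.ANY
    print(f"L1 MPKI: {1e3 * l1_miss / inst_retired:.2f}")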
tma_info_pipeline_retire [Pipeline;Ret] = tma_retiring * tma_info_thread_slots / UOPS_RETIRED.SLOTS,cmask=1 -- Average number of Uops retired in cycles where at least one uop has retired.
tma_info_pipeline_strings_cycles [MicroSeq;Pipeline;Ret] = INST_RETIRED.REP_ITERATION / UOPS_RETIRED.SLOTS,cmask=1 {> 0.1} -- Estimated fraction of retirement-cycles dealing with repeat instructions.
tma_info_system_c0_wait [C0Wait] = CPU_CLK_UNHALTED.C0_WAIT / tma_info_thread_clks {> 0.05} -- Fraction of cycles the processor is waiting yet unhalted; covering the legacy PAUSE instruction as well as the C0.1 / C0.2 power-performance optimized states.
tma_info_system_core_frequency [Power;Summary] = tma_info_system_turbo_utilization * TSC / 1e9 / duration_time -- Measured Average Core Frequency for unhalted processors [GHz].
tma_info_system_cpu_utilization [HPC;Summary] = tma_info_system_cpus_utilized / #num_cpus_online -- Average CPU Utilization (percentage).
tma_info_system_cpus_utilized [Summary] = CPU_CLK_UNHALTED.REF_TSC / TSC -- Average number of utilized CPUs.
tma_info_system_dram_bw_use [HPC;MemOffcore;MemoryBW;SoC;tma_issueBW] = 64 * (UNC_ARB_TRK_REQUESTS.ALL + UNC_ARB_COH_TRK_REQUESTS.ALL) / 1e6 / duration_time / 1e3 -- Average external Memory Bandwidth Use for reads and writes [GB / sec]. Related metrics: tma_fb_full, tma_info_bottleneck_cache_memory_bandwidth, tma_mem_bandwidth, tma_sq_full.
tma_info_system_gflops [Cor;Flops;HPC] = (FP_ARITH_INST_RETIRED.SCALAR + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + 4 * FP_ARITH_INST_RETIRED.4_FLOPS + 8 * FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE) / 1e9 / duration_time -- Giga Floating Point Operations Per Second. Aggregate across all supported options of: FP precisions, scalar and vector instructions, vector-width.
tma_info_system_ipfarbranch [Branches;OS] = INST_RETIRED.ANY / BR_INST_RETIRED.FAR_BRANCH:u {< 1e6} -- Instructions per Far Branch (Far Branches apply upon transition from application to operating system, handling interrupts, exceptions) [lower number means higher occurrence rate].
tma_info_system_kernel_cpi [OS] = CPU_CLK_UNHALTED.THREAD_P:k / INST_RETIRED.ANY_P:k -- Cycles Per Instruction for the Operating System (OS) Kernel mode.
tma_info_system_kernel_utilization [OS] = CPU_CLK_UNHALTED.THREAD_P:k / CPU_CLK_UNHALTED.THREAD {> 0.05} -- Fraction of cycles spent in the Operating System (OS) Kernel mode.
tma_info_system_mem_parallel_reads [Mem;MemoryBW;SoC] = UNC_ARB_DAT_OCCUPANCY.RD / UNC_ARB_DAT_OCCUPANCY.RD,cmask=1 -- Average number of parallel data read requests to external memory. Accounts for demand loads and L1/L2 prefetches.
tma_info_system_mem_read_latency [Mem;MemoryLat;SoC] = (UNC_ARB_TRK_OCCUPANCY.RD + UNC_ARB_DAT_OCCUPANCY.RD) / UNC_ARB_TRK_REQUESTS.RD -- Average latency of data read request to external memory (in nanoseconds). Accounts for demand loads and L1/L2 prefetches. ([RKL+] memory-controller only.)
tma_info_system_smt_2t_utilization [SMT] = (1 - CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_DISTRIBUTED if #SMT_on else 0) -- Fraction of cycles where both hardware Logical Processors were active.
tma_info_system_socket_clks [SoC] = UNC_CLOCK.SOCKET -- Socket actual clocks when any core is active on that socket.
tma_info_system_turbo_utilization [Power] = tma_info_thread_clks / CPU_CLK_UNHALTED.REF_TSC -- Average Frequency Utilization relative to nominal frequency.
tma_info_thread_clks [Pipeline] = CPU_CLK_UNHALTED.THREAD -- Per-Logical Processor actual clocks when the Logical Processor is active.
tma_info_thread_cpi [Mem;Pipeline] = 1 / tma_info_thread_ipc -- Cycles Per Instruction (per Logical Processor).
tma_info_thread_execute_per_issue [Cor;Pipeline] = UOPS_EXECUTED.THREAD / UOPS_ISSUED.ANY -- The ratio of Executed- to Issued-Uops. A ratio > 1 suggests a high rate of uop micro-fusions; a ratio < 1 suggests a high rate of "execute" at the rename stage.
tma_info_thread_ipc [Ret;Summary] = INST_RETIRED.ANY / tma_info_thread_clks -- Instructions Per Cycle (per Logical Processor).
tma_info_thread_slots [TmaL1;tma_L1_group] = TOPDOWN.SLOTS -- Total issue-pipeline slots (per-Physical Core till ICL; per-Logical Processor ICL onward).
tma_info_thread_slots_utilization [SMT;TmaL1;tma_L1_group] = (tma_info_thread_slots / (TOPDOWN.SLOTS / 2) if #SMT_on else 1) -- Fraction of Physical Core issue-slots utilized by this Logical Processor.
tma_info_thread_uoppi [Pipeline;Ret;Retire] = tma_retiring * tma_info_thread_slots / INST_RETIRED.ANY {> 1.05} -- Uops Per Instruction.
tma_info_thread_uptb [Branches;Fed;FetchBW] = tma_retiring * tma_info_thread_slots / BR_INST_RETIRED.NEAR_TAKEN {< 9} -- Uops per taken branch.
tma_int_operations [Pipeline;TopdownL3;tma_L3_group;tma_light_operations_group] = tma_int_vector_128b + tma_int_vector_256b {> 0.1 & tma_light_operations > 0.6} -- Overall Integer (Int) select operations fraction the CPU has executed (retired). Vector/Matrix Int operations and shuffles are counted. Note this metric's value may exceed its parent due to use of the "Uops" CountDomain.
tma_int_vector_128b [Compute;IntVector;Pipeline;TopdownL4;tma_L4_group;tma_int_operations_group;tma_issue2P] = (INT_VEC_RETIRED.ADD_128 + INT_VEC_RETIRED.VNNI_128) / (tma_retiring * tma_info_thread_slots) {> 0.1 & (tma_int_operations > 0.1 & tma_light_operations > 0.6)} -- 128-bit vector Integer ADD/SUB/SAD or VNNI (Vector Neural Network Instructions) uops fraction the CPU has retired. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_int_vector_256b, tma_port_0, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2.
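Several records above are defined in terms of other metrics rather than raw events: CPI is the reciprocal of IPC, and turbo utilization divides two clock counts. A Python sketch with hypothetical counts:

    counts = {
        "INST_RETIRED.ANY": 9_000_000_000,        # hypothetical
        "CPU_CLK_UNHALTED.THREAD": 6_000_000_000,
        "CPU_CLK_UNHALTED.REF_TSC": 4_000_000_000,
    }

    tma_info_thread_clks = counts["CPU_CLK_UNHALTED.THREAD"]
    tma_info_thread_ipc = counts["INST_RETIRED.ANY"] / tma_info_thread_clks
    tma_info_thread_cpi = 1 / tma_info_thread_ipc
    # Actual unhalted clocks vs. reference (nominal-frequency) clocks.
    turbo = tma_info_thread_clks / counts["CPU_CLK_UNHALTED.REF_TSC"]

    print(f"IPC {tma_info_thread_ipc:.2f}, CPI {tma_info_thread_cpi:.2f}, "
          f"turbo x{turbo:.2f}")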
tma_int_vector_256b [Compute;IntVector;Pipeline;TopdownL4;tma_L4_group;tma_int_operations_group;tma_issue2P] = (INT_VEC_RETIRED.ADD_256 + INT_VEC_RETIRED.MUL_256 + INT_VEC_RETIRED.VNNI_256) / (tma_retiring * tma_info_thread_slots) {> 0.1 & (tma_int_operations > 0.1 & tma_light_operations > 0.6)} -- 256-bit vector Integer ADD/SUB/SAD/MUL or VNNI (Vector Neural Network Instructions) uops fraction the CPU has retired. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_int_vector_128b, tma_port_0, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2.
tma_itlb_misses [BigFootprint;BvBC;FetchLat;MemoryTLB;TopdownL3;tma_L3_group;tma_fetch_latency_group] = ICACHE_TAG.STALLS / tma_info_thread_clks {> 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15)} -- Fraction of cycles the CPU was stalled due to Instruction TLB (ITLB) misses. Sample with: FRONTEND_RETIRED.STLB_MISS_PS, FRONTEND_RETIRED.ITLB_MISS_PS.
tma_l1_bound [CacheHits;MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_issueL1;tma_issueMC;tma_memory_bound_group] = max((EXE_ACTIVITY.BOUND_ON_LOADS - MEMORY_ACTIVITY.STALLS_L1D_MISS) / tma_info_thread_clks, 0) {> 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)} -- Estimates how often the CPU was stalled without loads missing the L1 data cache. The L1 data cache typically has the shortest latency; however, in certain cases like loads blocked on older stores, a load might suffer high latency even though it is being satisfied by the L1. Another example is loads that miss in the TLB. These cases are characterized by execution unit stalls while some non-completed demand load lives in the machine without that demand load missing the L1 cache. Sample with: MEM_LOAD_RETIRED.L1_HIT_PS, MEM_LOAD_RETIRED.FB_HIT_PS. Related metrics: tma_clears_resteers, tma_machine_clears, tma_microcode_sequencer, tma_ms_switches, tma_ports_utilized_1.
tma_l1_hit_latency [BvML;MemoryLat;TopdownL4;tma_L4_group;tma_l1_bound_group] = min(2 * (MEM_INST_RETIRED.ALL_LOADS - MEM_LOAD_RETIRED.FB_HIT - MEM_LOAD_RETIRED.L1_MISS) * 20 / 100, max(CYCLE_ACTIVITY.CYCLES_MEM_ANY - MEMORY_ACTIVITY.CYCLES_L1D_MISS, 0)) / tma_info_thread_clks {> 0.1 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))} -- Roughly estimates fraction of cycles with demand load accesses that hit the L1 cache. The short latency of the L1 data cache may be exposed in pointer-chasing memory access patterns, for example. Sample with: MEM_LOAD_RETIRED.L1_HIT.
tma_l2_bound [BvML;CacheHits;MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_memory_bound_group] = (MEMORY_ACTIVITY.STALLS_L1D_MISS - MEMORY_ACTIVITY.STALLS_L2_MISS) / tma_info_thread_clks {> 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)} -- Estimates how often the CPU was stalled due to L2 cache accesses by loads. Avoiding cache misses (i.e. L1 misses/L2 hits) can improve the latency and increase performance. Sample with: MEM_LOAD_RETIRED.L2_HIT_PS.
tma_l3_bound [CacheHits;MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_memory_bound_group] = (MEMORY_ACTIVITY.STALLS_L2_MISS - MEMORY_ACTIVITY.STALLS_L3_MISS) / tma_info_thread_clks {> 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)} -- Estimates how often the CPU was stalled due to load accesses to the L3 cache or contention with a sibling Core. Avoiding cache misses (i.e. L2 misses/L3 hits) can improve the latency and increase performance. Sample with: MEM_LOAD_RETIRED.L3_HIT_PS.
tma_l3_hit_latency [BvML;MemoryLat;TopdownL4;tma_L4_group;tma_issueLat;tma_l3_bound_group] = 9 * tma_info_system_core_frequency * (MEM_LOAD_RETIRED.L3_HIT * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS / 2)) / tma_info_thread_clks {> 0.1 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))} -- Estimates fraction of cycles with demand load accesses that hit the L3 cache under unloaded scenarios (possibly L3 latency limited). Avoiding private cache misses (i.e. L2 misses/L3 hits) will improve the latency, reduce contention with sibling physical cores, and increase performance. Note the value of this node may overlap with its siblings. Sample with: MEM_LOAD_RETIRED.L3_HIT_PS. Related metrics: tma_info_bottleneck_cache_memory_latency, tma_mem_latency.
tma_lcp [FetchLat;TopdownL3;tma_L3_group;tma_fetch_latency_group;tma_issueFB] = DECODE.LCP / tma_info_thread_clks {> 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15)} -- Fraction of cycles the CPU was stalled due to Length Changing Prefixes (LCPs). Using proper compiler flags, or the Intel Compiler by default, will certainly avoid this. #Link: Optimization Guide about LCP BKMs. Related metrics: tma_dsb_switches, tma_fetch_bandwidth, tma_info_botlnk_l2_dsb_bandwidth, tma_info_botlnk_l2_dsb_misses, tma_info_frontend_dsb_coverage, tma_info_inst_mix_iptb.
tma_light_operations [Retire;TmaL2;TopdownL2;tma_L2_group;tma_retiring_group] = max(0, tma_retiring - tma_heavy_operations) {> 0.6} -- Fraction of slots where the CPU was retiring light-weight operations -- instructions that require no more than one uop (micro-operation). This correlates with the total number of instructions used by the program. A uops-per-instruction (see UopPI metric) ratio of 1 or less should be expected for decently optimized code running on Intel Core/Xeon products. While this often indicates efficient X86 instructions were executed, a high value does not necessarily mean better performance cannot be achieved. ([ICL+] Note this may undercount due to approximation using indirect events; [ADL+].) Sample with: INST_RETIRED.PREC_DIST.
tma_load_op_utilization [TopdownL5;tma_L5_group;tma_ports_utilized_3m_group] = UOPS_DISPATCHED.PORT_2_3_10 / (3 * tma_info_core_core_clks) {> 0.6} -- Core fraction of cycles CPU dispatched uops on execution ports for Load operations. Sample with: UOPS_DISPATCHED.PORT_2_3_10.
tma_load_stlb_hit [MemoryTLB;TopdownL5;tma_L5_group;tma_dtlb_load_group] = tma_dtlb_load - tma_load_stlb_miss {> 0.05 & (tma_dtlb_load > 0.1 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)))} -- Roughly estimates the fraction of cycles where the (first level) DTLB was missed by load accesses that later hit in the second-level TLB (STLB).
tma_load_stlb_miss [MemoryTLB;TopdownL5;tma_L5_group;tma_dtlb_load_group] = DTLB_LOAD_MISSES.WALK_ACTIVE / tma_info_thread_clks {> 0.05 & (tma_dtlb_load > 0.1 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)))} -- Estimates the fraction of cycles where the Second-level TLB (STLB) was missed by load accesses, performing a hardware page walk.
tma_lock_latency [Offcore;TopdownL4;tma_L4_group;tma_issueRFO;tma_l1_bound_group] = (16 * max(0, MEM_INST_RETIRED.LOCK_LOADS - L2_RQSTS.ALL_RFO) + MEM_INST_RETIRED.LOCK_LOADS / MEM_INST_RETIRED.ALL_STORES * (10 * L2_RQSTS.RFO_HIT + min(CPU_CLK_UNHALTED.THREAD, OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_RFO))) / tma_info_thread_clks {> 0.2 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))} -- Fraction of cycles the CPU spent handling cache misses due to lock operations. Due to the microarchitecture's handling of locks, they are classified as L1_Bound regardless of what memory source satisfied them. Sample with: MEM_INST_RETIRED.LOCK_LOADS. Related metrics: tma_store_latency.
tma_lsd [FetchBW;LSD;TopdownL3;tma_L3_group;tma_fetch_bandwidth_group] = (LSD.CYCLES_ACTIVE - LSD.CYCLES_OK) / tma_info_core_core_clks / 2 {> 0.15 & tma_fetch_bandwidth > 0.2} -- Core fraction of cycles in which the CPU was likely limited by the LSD (Loop Stream Detector) unit. The LSD typically does well sustaining uop supply; however, in some rare cases optimal uop delivery cannot be reached for small loops whose size (in number of uops) does not suit the LSD structure well.
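The {...} conditions in these records encode a hierarchy: a child node such as tma_l2_bound is only worth flagging when its parents (tma_memory_bound, tma_backend_bound) are also above their thresholds. A Python sketch of that gating logic with hypothetical values:

    metrics = {
        "tma_backend_bound": 0.35,  # hypothetical results
        "tma_memory_bound": 0.25,
        "tma_l2_bound": 0.08,
    }

    # tma_l2_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)
    flag_l2_bound = (
        metrics["tma_l2_bound"] > 0.05
        and metrics["tma_memory_bound"] > 0.2
        and metrics["tma_backend_bound"] > 0.2
    )
    print("investigate L2 Bound" if flag_l2_bound else "L2 Bound not significant")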
tma_machine_clears [BadSpec;BvMS;MachineClears;TmaL2;TopdownL2;tma_L2_group;tma_bad_speculation_group;tma_issueMC;tma_issueSyncxn] = max(0, tma_bad_speculation - tma_branch_mispredicts) {> 0.1 & tma_bad_speculation > 0.15} -- Fraction of slots the CPU has wasted due to Machine Clears. These slots are either wasted by uops fetched prior to the clear, or stalls the out-of-order portion of the machine needs to recover its state after the clear. For example, this can happen due to memory ordering Nukes (e.g. Memory Disambiguation) or Self-Modifying-Code (SMC) nukes. Sample with: MACHINE_CLEARS.COUNT. Related metrics: tma_clears_resteers, tma_contested_accesses, tma_data_sharing, tma_false_sharing, tma_l1_bound, tma_microcode_sequencer, tma_ms_switches, tma_remote_cache.
tma_mem_bandwidth [BvMS;MemoryBW;Offcore;TopdownL4;tma_L4_group;tma_dram_bound_group;tma_issueBW] = min(CPU_CLK_UNHALTED.THREAD, OFFCORE_REQUESTS_OUTSTANDING.ALL_DATA_RD,cmask=4) / tma_info_thread_clks {> 0.2 & (tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))} -- Estimates fraction of cycles where the core's performance was likely hurt due to approaching bandwidth limits of external memory - DRAM ([SPR-HBM] and/or HBM). The underlying heuristic assumes that similar off-core traffic is generated by all IA cores. This metric does not aggregate non-data-read requests by this logical processor, requests from other IA Logical Processors/Physical Cores/sockets, or other non-IA devices like GPUs; hence the maximum external memory bandwidth limits may or may not be approached when this metric is flagged (see Uncore counters for that). Related metrics: tma_fb_full, tma_info_bottleneck_cache_memory_bandwidth, tma_info_system_dram_bw_use, tma_sq_full.
tma_mem_latency [BvML;MemoryLat;Offcore;TopdownL4;tma_L4_group;tma_dram_bound_group;tma_issueLat] = min(CPU_CLK_UNHALTED.THREAD, OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DATA_RD) / tma_info_thread_clks - tma_mem_bandwidth {> 0.1 & (tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))} -- Estimates fraction of cycles where performance was likely hurt due to latency from external memory - DRAM ([SPR-HBM] and/or HBM). This metric does not aggregate requests from other Logical Processors/Physical Cores/sockets (see Uncore counters for that). Related metrics: tma_info_bottleneck_cache_memory_latency, tma_l3_hit_latency.
tma_memory_bound [Backend;TmaL2;TopdownL2;tma_L2_group;tma_backend_bound_group] = topdown-mem-bound / (topdown-fe-bound + topdown-bad-spec + topdown-retiring + topdown-be-bound) + 0 * tma_info_thread_slots {> 0.2 & tma_backend_bound > 0.2} -- Fraction of slots where the Memory subsystem within the Backend was a bottleneck. Memory Bound estimates the fraction of slots where the pipeline is likely stalled due to demand load or store instructions. This accounts mainly for (1) non-completed in-flight memory demand loads which coincide with execution unit starvation, in addition to (2) cases where stores could impose backpressure on the pipeline when many of them get buffered at the same time (less common of the two).
tma_memory_fence [TopdownL4;tma_L4_group;tma_serializing_operation_group] = 13 * MISC2_RETIRED.LFENCE / tma_info_thread_clks {> 0.05 & (tma_serializing_operation > 0.1 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))} -- Fraction of cycles the CPU was stalled due to LFENCE instructions.
tma_memory_operations [Pipeline;TopdownL3;tma_L3_group;tma_light_operations_group] = tma_light_operations * MEM_UOP_RETIRED.ANY / (tma_retiring * tma_info_thread_slots) {> 0.1 & tma_light_operations > 0.6} -- Fraction of slots where the CPU was retiring memory operations -- uops for memory load or store accesses.
tma_microcode_sequencer [MicroSeq;TopdownL3;tma_L3_group;tma_heavy_operations_group;tma_issueMC;tma_issueMS] = UOPS_RETIRED.MS / tma_info_thread_slots {> 0.05 & tma_heavy_operations > 0.1} -- Fraction of slots the CPU was retiring uops fetched by the Microcode Sequencer (MS) unit. The MS is used for CISC instructions not supported by the default decoders (like repeat move strings, or CPUID), or by microcode assists used to address some operation modes (like in Floating Point assists). These cases can often be avoided. Sample with: UOPS_RETIRED.MS. Related metrics: tma_clears_resteers, tma_info_bottleneck_irregular_overhead, tma_l1_bound, tma_machine_clears, tma_ms_switches.
tma_mispredicts_resteers [BadSpec;BrMispredicts;BvMP;TopdownL4;tma_L4_group;tma_branch_resteers_group;tma_issueBM] = tma_branch_mispredicts / tma_bad_speculation * INT_MISC.CLEAR_RESTEER_CYCLES / tma_info_thread_clks {> 0.05 & (tma_branch_resteers > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15))} -- Fraction of cycles the CPU was stalled due to Branch Resteers as a result of Branch Misprediction at the execution stage. Sample with: INT_MISC.CLEAR_RESTEER_CYCLES. Related metrics: tma_branch_mispredicts, tma_info_bad_spec_branch_misprediction_cost, tma_info_bottleneck_mispredictions.
tma_mite [DSBmiss;FetchBW;TopdownL3;tma_L3_group;tma_fetch_bandwidth_group] = (IDQ.MITE_CYCLES_ANY - IDQ.MITE_CYCLES_OK) / tma_info_core_core_clks / 2 {> 0.1 & tma_fetch_bandwidth > 0.2} -- Core fraction of cycles in which the CPU was likely limited by the MITE pipeline (the legacy decode pipeline). This pipeline is used for code that was not pre-cached in the DSB or LSD. For example, inefficiencies due to asymmetric decoders, use of long immediates, or LCPs can manifest as a MITE fetch bandwidth bottleneck. Sample with: FRONTEND_RETIRED.ANY_DSB_MISS.
tma_mixing_vectors [TopdownL5;tma_L5_group;tma_issueMV;tma_ports_utilized_0_group] = 160 * ASSISTS.SSE_AVX_MIX / tma_info_thread_clks {> 0.05} -- Estimates penalty in terms of percentage of ([SKL+] injected blend uops out of all Uops Issued -- the Count Domain; [ADL+] cycles). Usually a Mixing_Vectors over 5% is worth investigating. Read more in Appendix B1 of the Optimizations Guide for this topic. Related metrics: tma_ms_switches.
tma_ms_switches [FetchLat;MicroSeq;TopdownL3;tma_L3_group;tma_fetch_latency_group;tma_issueMC;tma_issueMS;tma_issueMV;tma_issueSO] = 3 * UOPS_RETIRED.MS,cmask=1,edge / (UOPS_RETIRED.SLOTS / UOPS_ISSUED.ANY) / tma_info_thread_clks {> 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15)} -- Estimates the fraction of cycles when the CPU was stalled due to switches of uop delivery to the Microcode Sequencer (MS). Commonly used instructions are optimized for delivery by the DSB (decoded i-cache) or MITE (legacy instruction decode) pipelines. Certain operations cannot be handled natively by the execution pipeline and must be performed by microcode (small programs injected into the execution stream). Switching to the MS too often can negatively impact performance. The MS is designated to deliver long uop flows required by CISC instructions like CPUID, or uncommon conditions like Floating Point Assists when dealing with Denormals. Sample with: FRONTEND_RETIRED.MS_FLOWS. Related metrics: tma_clears_resteers, tma_info_bottleneck_irregular_overhead, tma_l1_bound, tma_machine_clears, tma_microcode_sequencer, tma_mixing_vectors, tma_serializing_operation.
tma_non_fused_branches [Branches;BvBO;Pipeline;TopdownL3;tma_L3_group;tma_light_operations_group] = tma_light_operations * (BR_INST_RETIRED.ALL_BRANCHES - INST_RETIRED.MACRO_FUSED) / (tma_retiring * tma_info_thread_slots) {> 0.1 & tma_light_operations > 0.6} -- Fraction of slots where the CPU was retiring branch instructions that were not fused. Non-conditional branches like direct JMP or CALL would count here. Can be used to examine fusible conditional jumps that were not fused.
tma_nop_instructions [BvBO;Pipeline;TopdownL4;tma_L4_group;tma_other_light_ops_group] = tma_light_operations * INST_RETIRED.NOP / (tma_retiring * tma_info_thread_slots) {> 0.1 & (tma_other_light_ops > 0.3 & tma_light_operations > 0.6)} -- Fraction of slots where the CPU was retiring NOP (no op) instructions. Compilers often use NOPs for certain address alignments, e.g. the start address of a function or loop body. Sample with: INST_RETIRED.NOP.
tma_other_light_ops [Pipeline;TopdownL3;tma_L3_group;tma_light_operations_group] = max(0, tma_light_operations - (tma_fp_arith + tma_int_operations + tma_memory_operations + tma_fused_instructions + tma_non_fused_branches)) {> 0.3 & tma_light_operations > 0.6} -- The remaining light uops fraction the CPU has executed, i.e. not covered by other sibling nodes. May undercount due to FMA double counting.
tma_other_mispredicts [BrMispredicts;BvIO;TopdownL3;tma_L3_group;tma_branch_mispredicts_group] = max(tma_branch_mispredicts * (1 - BR_MISP_RETIRED.ALL_BRANCHES / (INT_MISC.CLEARS_COUNT - MACHINE_CLEARS.COUNT)), 0.0001) {> 0.05 & (tma_branch_mispredicts > 0.1 & tma_bad_speculation > 0.15)} -- Estimates fraction of slots the CPU was stalled due to other cases of misprediction (non-retired x86 branches or other types).
tma_other_nukes [BvIO;Machine_Clears;TopdownL3;tma_L3_group;tma_machine_clears_group] = max(tma_machine_clears * (1 - MACHINE_CLEARS.MEMORY_ORDERING / MACHINE_CLEARS.COUNT), 0.0001) {> 0.05 & (tma_machine_clears > 0.1 & tma_bad_speculation > 0.15)} -- Fraction of slots the CPU has wasted due to Nukes (Machine Clears) not related to memory ordering.
tma_page_faults [TopdownL5;tma_L5_group;tma_assists_group] = 99 * ASSISTS.PAGE_FAULT / tma_info_thread_slots {> 0.05} -- Roughly estimates fraction of slots the CPU retired uops as a result of handling Page Faults. A Page Fault may apply on first application access to a memory page. Note that operating system handling of page faults accounts for the majority of their cost.
tma_port_0 [Compute;TopdownL6;tma_L6_group;tma_alu_op_utilization_group;tma_issue2P] = UOPS_DISPATCHED.PORT_0 / tma_info_core_core_clks {> 0.6} -- Core fraction of cycles CPU dispatched uops on execution port 0 ([SNB+] ALU; [HSW+] ALU and 2nd branch). Sample with: UOPS_DISPATCHED.PORT_0. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_int_vector_128b, tma_int_vector_256b, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2.
tma_port_1 [TopdownL6;tma_L6_group;tma_alu_op_utilization_group;tma_issue2P] = UOPS_DISPATCHED.PORT_1 / tma_info_core_core_clks {> 0.6} -- Core fraction of cycles CPU dispatched uops on execution port 1 (ALU). Sample with: UOPS_DISPATCHED.PORT_1. Related metrics: as for tma_port_0, plus tma_port_0 itself and excluding tma_port_1.
tma_port_6 [TopdownL6;tma_L6_group;tma_alu_op_utilization_group;tma_issue2P] = UOPS_DISPATCHED.PORT_6 / tma_info_core_core_clks {> 0.6} -- Core fraction of cycles CPU dispatched uops on execution port 6 ([HSW+] Primary Branch and simple ALU). Sample with: UOPS_DISPATCHED.PORT_6. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_int_vector_128b, tma_int_vector_256b, tma_port_0, tma_port_1, tma_port_5, tma_ports_utilized_2.
tma_ports_utilization [PortsUtil;TopdownL3;tma_L3_group;tma_core_bound_group] = ((tma_ports_utilized_0 * tma_info_thread_clks + (EXE_ACTIVITY.1_PORTS_UTIL + tma_retiring * EXE_ACTIVITY.2_PORTS_UTIL,umask=0xc)) / tma_info_thread_clks if ARITH.DIV_ACTIVE < CYCLE_ACTIVITY.STALLS_TOTAL - EXE_ACTIVITY.BOUND_ON_LOADS else (EXE_ACTIVITY.1_PORTS_UTIL + tma_retiring * EXE_ACTIVITY.2_PORTS_UTIL,umask=0xc) / tma_info_thread_clks) {> 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2)} -- Estimates fraction of cycles the CPU performance was potentially limited due to Core computation issues (non divider-related). Two distinct categories can be attributed to this metric: (1) heavy data-dependency among contiguous instructions, often referred to as low Instruction Level Parallelism (ILP); (2) contention on some hardware execution unit other than the Divider, for example when there are too many multiply operations.
tma_ports_utilized_0 [PortsUtil;TopdownL4;tma_L4_group;tma_ports_utilization_group] = (EXE_ACTIVITY.EXE_BOUND_0_PORTS + max(RS.EMPTY,umask=1 - RESOURCE_STALLS.SCOREBOARD, 0)) / tma_info_thread_clks * (CYCLE_ACTIVITY.STALLS_TOTAL - EXE_ACTIVITY.BOUND_ON_LOADS) / tma_info_thread_clks {> 0.2 & (tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))} -- Fraction of cycles the CPU executed no uops on any execution port (Logical Processor cycles since ICL, Physical Core cycles otherwise). Long-latency instructions like divides may contribute to this metric.
tma_ports_utilized_1 [PortsUtil;TopdownL4;tma_L4_group;tma_issueL1;tma_ports_utilization_group] = EXE_ACTIVITY.1_PORTS_UTIL / tma_info_thread_clks {> 0.2 & (tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))} -- Fraction of cycles where the CPU executed a total of 1 uop per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise). This can be due to heavy data-dependency among software instructions, or oversubscribing a particular hardware resource. In some other cases with high 1_Port_Utilized and L1_Bound, this metric can point to an L1 data-cache latency bottleneck that may not necessarily manifest with complete execution starvation (due to the short L1 latency, e.g. walking a linked list) -- looking at the assembly can be helpful. Sample with: EXE_ACTIVITY.1_PORTS_UTIL. Related metrics: tma_l1_bound.
tma_ports_utilized_2 [PortsUtil;TopdownL4;tma_L4_group;tma_issue2P;tma_ports_utilization_group] = EXE_ACTIVITY.2_PORTS_UTIL / tma_info_thread_clks {> 0.15 & (tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))} -- Fraction of cycles the CPU executed a total of 2 uops per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise). Loop Vectorization (most compilers feature auto-Vectorization options today) reduces pressure on the execution ports as multiple elements are calculated with the same uop. Sample with: EXE_ACTIVITY.2_PORTS_UTIL. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_int_vector_128b, tma_int_vector_256b, tma_port_0, tma_port_1, tma_port_5, tma_port_6.
tma_ports_utilized_3m [BvCB;PortsUtil;TopdownL4;tma_L4_group;tma_ports_utilization_group] = UOPS_EXECUTED.CYCLES_GE_3 / tma_info_thread_clks {> 0.4 & (tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))} -- Fraction of cycles the CPU executed a total of 3 or more uops per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise). Sample with: UOPS_EXECUTED.CYCLES_GE_3.
tma_retiring [BvUW;Default;TmaL1;TopdownL1;tma_L1_group] = topdown-retiring / (topdown-fe-bound + topdown-bad-spec + topdown-retiring + topdown-be-bound) + 0 * tma_info_thread_slots {> 0.7 | tma_heavy_operations > 0.1} -- This category represents the fraction of slots utilized by useful work, i.e. issued uops that eventually get retired. Ideally, all pipeline slots would be attributed to the Retiring category; Retiring of 100% would indicate the maximum Pipeline_Width throughput was achieved. Maximizing Retiring typically increases the Instructions-per-cycle (see IPC metric). Note that a high Retiring value does not necessarily mean there is no room for more performance. For example, Heavy-operations or Microcode Assists are categorized under Retiring; they often indicate suboptimal performance and can often be optimized or avoided. Sample with: UOPS_RETIRED.SLOTS.
tma_serializing_operation [BvIO;PortsUtil;TopdownL3;tma_L3_group;tma_core_bound_group;tma_issueSO] = RESOURCE_STALLS.SCOREBOARD / tma_info_thread_clks + tma_c02_wait {> 0.1 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2)} -- Fraction of cycles the CPU issue-pipeline was stalled due to serializing operations. Instructions like CPUID, WRMSR or LFENCE serialize the out-of-order execution, which may limit performance. Sample with: RESOURCE_STALLS.SCOREBOARD. Related metrics: tma_ms_switches.
tma_shuffles_256b [HPC;Pipeline;TopdownL4;tma_L4_group;tma_other_light_ops_group] = tma_light_operations * INT_VEC_RETIRED.SHUFFLES / (tma_retiring * tma_info_thread_slots) {> 0.1 & (tma_other_light_ops > 0.3 & tma_light_operations > 0.6)} -- Fraction of slots where the CPU was retiring Shuffle operations of 256-bit vector size (FP or Integer). Shuffles may incur slow cross "vector lane" data transfers.
tma_slow_pause [TopdownL4;tma_L4_group;tma_serializing_operation_group] = CPU_CLK_UNHALTED.PAUSE / tma_info_thread_clks {> 0.05 & (tma_serializing_operation > 0.1 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))} -- Fraction of cycles the CPU was stalled due to PAUSE instructions. Sample with: CPU_CLK_UNHALTED.PAUSE_INST.
tma_split_loads [TopdownL4;tma_L4_group;tma_l1_bound_group] = tma_info_memory_load_miss_real_latency * LD_BLOCKS.NO_SR / tma_info_thread_clks {> 0.2 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))} -- Estimates fraction of cycles handling memory load split accesses: loads that cross a 64-byte cache line boundary. Sample with: MEM_INST_RETIRED.SPLIT_LOADS_PS.
tma_split_stores [TopdownL4;tma_L4_group;tma_issueSpSt;tma_store_bound_group] = MEM_INST_RETIRED.SPLIT_STORES / tma_info_core_core_clks {> 0.2 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))} -- Rate of split store accesses. Consider aligning your data to the 64-byte cache line granularity. Sample with: MEM_INST_RETIRED.SPLIT_STORES_PS. Related metrics: tma_port_4.
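The tma_retiring record above, like its level-1 siblings, normalizes one topdown-* count by the sum of all four, so the categories partition the issue slots. A Python sketch with hypothetical slot counts:

    # Hypothetical topdown slot counts for one interval.
    td = {
        "topdown-fe-bound": 1_000_000_000,
        "topdown-bad-spec": 300_000_000,
        "topdown-retiring": 2_200_000_000,
        "topdown-be-bound": 1_500_000_000,
    }
    total = sum(td.values())

    # e.g. tma_retiring = topdown-retiring / total, flagged if > 0.7
    for name, slots in td.items():
        print(f"{name}: {slots / total:.1%}")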
tma_sq_full [BvMS;MemoryBW;Offcore;TopdownL4;tma_L4_group;tma_issueBW;tma_l3_bound_group] = (XQ.FULL_CYCLES + L1D_PEND_MISS.L2_STALLS) / tma_info_thread_clks {> 0.3 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))} -- Fraction of cycles where the Super Queue (SQ) was full, taking into account all request types and both hardware SMT threads (Logical Processors). Related metrics: tma_fb_full, tma_info_bottleneck_cache_memory_bandwidth, tma_info_system_dram_bw_use, tma_mem_bandwidth.
tma_store_bound [MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_memory_bound_group] = EXE_ACTIVITY.BOUND_ON_STORES / tma_info_thread_clks {> 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)} -- Estimates how often the CPU was stalled due to RFO store memory accesses; RFO stores issue a read-for-ownership request before the write. Even though store accesses do not typically stall out-of-order CPUs, there are a few cases where stores can lead to actual stalls. This metric will be flagged should RFO stores be a bottleneck. Sample with: MEM_INST_RETIRED.ALL_STORES_PS.
tma_store_fwd_blk [TopdownL4;tma_L4_group;tma_l1_bound_group] = 13 * LD_BLOCKS.STORE_FORWARD / tma_info_thread_clks {> 0.1 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))} -- Roughly estimates fraction of cycles when the memory subsystem had loads blocked because they could not forward data from earlier (in program order) overlapping stores. To streamline memory operations in the pipeline, a load can avoid waiting for memory if a prior in-flight store is writing the data the load wants to read (the store forwarding process). However, in some cases the load may be blocked for a significant time pending the store forward, for example when the prior store is writing a smaller region than the load is reading.
tma_store_latency [BvML;MemoryLat;Offcore;TopdownL4;tma_L4_group;tma_issueRFO;tma_issueSL;tma_store_bound_group] = (MEM_STORE_RETIRED.L2_HIT * 10 * (1 - MEM_INST_RETIRED.LOCK_LOADS / MEM_INST_RETIRED.ALL_STORES) + (1 - MEM_INST_RETIRED.LOCK_LOADS / MEM_INST_RETIRED.ALL_STORES) * min(CPU_CLK_UNHALTED.THREAD, OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_RFO)) / tma_info_thread_clks {> 0.1 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))} -- Estimates fraction of cycles the CPU spent handling L1D store misses. Store accesses usually have less impact on out-of-order core performance; however, holding resources for a longer time can lead to undesired implications (e.g. contention on L1D fill-buffer entries - see FB_Full). Related metrics: tma_fb_full, tma_lock_latency.
tma_store_op_utilization [TopdownL5;tma_L5_group;tma_ports_utilized_3m_group] = (UOPS_DISPATCHED.PORT_4_9 + UOPS_DISPATCHED.PORT_7_8) / (4 * tma_info_core_core_clks) {> 0.6} -- Core fraction of cycles CPU dispatched uops on execution ports for Store operations. Sample with: UOPS_DISPATCHED.PORT_7_8.
tma_store_stlb_hit [MemoryTLB;TopdownL5;tma_L5_group;tma_dtlb_store_group] = tma_dtlb_store - tma_store_stlb_miss {> 0.05 & (tma_dtlb_store > 0.05 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)))} -- Roughly estimates the fraction of cycles where the TLB was missed by store accesses, hitting in the second-level TLB (STLB).
tma_store_stlb_miss [MemoryTLB;TopdownL5;tma_L5_group;tma_dtlb_store_group] = DTLB_STORE_MISSES.WALK_ACTIVE / tma_info_core_core_clks {> 0.05 & (tma_dtlb_store > 0.05 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)))} -- Estimates the fraction of cycles where the STLB was missed by store accesses, performing a hardware page walk.
tma_streaming_stores [MemoryBW;Offcore;TopdownL4;tma_L4_group;tma_issueSmSt;tma_store_bound_group] = 9 * OCR.STREAMING_WR.ANY_RESPONSE / tma_info_thread_clks {> 0.2 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))} -- Estimates how often the CPU was stalled due to Streaming store memory accesses; Streaming stores optimize out a read request required by RFO stores. Even though store accesses do not typically stall out-of-order CPUs, there are a few cases where stores can lead to actual stalls. This metric will be flagged should Streaming stores be a bottleneck. Sample with: OCR.STREAMING_WR.ANY_RESPONSE. Related metrics: tma_fb_full.
tma_unknown_branches [BigFootprint;BvBC;FetchLat;TopdownL4;tma_L4_group;tma_branch_resteers_group] = INT_MISC.UNKNOWN_BRANCH_CYCLES / tma_info_thread_clks {> 0.05 & (tma_branch_resteers > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15))} -- Fraction of cycles the CPU was stalled due to new branch address clears. These are fetched branches the Branch Prediction Unit was unable to recognize (e.g. the first time the branch is fetched, or hitting the BTB capacity limit), hence called Unknown Branches. Sample with: FRONTEND_RETIRED.UNKNOWN_BRANCH.
tma_x87_use [Compute;TopdownL4;tma_L4_group;tma_fp_arith_group] = tma_retiring * UOPS_EXECUTED.X87 / UOPS_EXECUTED.THREAD {> 0.1 & (tma_fp_arith > 0.2 & tma_light_operations > 0.6)} -- Serves as an approximation of legacy x87 usage. It accounts for instructions beyond X87 FP arithmetic operations, hence may be used as a thermometer to avoid high X87 usage and preferably upgrade to a modern ISA. See Tip under Tuning Hint.
Metric group glossary.
Backend -- Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet.
Group tags without separate descriptions in this table: Bad, BadSpec, BigFootprint, BrMispredicts, Branches, BvBC, BvBO, BvCB, BvFB, BvIO, BvMB, BvML, BvMP, BvMS, BvMT, BvOB, BvUW, C0Wait, CacheHits, CacheMisses, CodeGen, Compute, Cor, DSB, DSBmiss, DataSharing, Fed, FetchBW, FetchLat, Flops, FpScalar, FpVector, Frontend, HPC, IcMiss, Ifetch, InsType, IntVector, L2Evicts, Load_Store_Miss, MachineClears, Machine_Clears, Mem, MemOffcore, Mem_Exec, MemoryBW, MemoryBound, MemoryLat, MemoryTLB, Memory_BW, Memory_Lat, MicroSeq, OS, PGO, Pipeline, PortsUtil, Power, Ret, Retire, SMT, Server, Snoop, SoC, Summary, TmaL1, TmaL2, TmaL3mem, load_store_bound, tma_L1_group through tma_L6_group.
TopdownL1 through TopdownL6 -- Metrics for top-down breakdown at levels 1 through 6, respectively.
tma_<x>_group -- Metrics contributing to the tma_<x> category, for x in: alu_op_utilization, assists, backend_bound, bad_speculation, branch_mispredicts, branch_resteers, core_bound, dram_bound, dtlb_load, dtlb_store, fetch_bandwidth, fetch_latency, fp_arith, fp_vector, frontend_bound, heavy_operations, ifetch_bandwidth, ifetch_latency, int_operations, l1_bound, l3_bound, light_operations, load_op_utilization, machine_clears, mem_latency, memory_bound, microcode_sequencer, mite, other_light_ops, ports_utilization, ports_utilized_0, ports_utilized_3m, resource_bound, retiring, serializing_operation, store_bound, store_op_utilization.
tma_issue<XX> -- Metrics related by the issue $issue<XX>, for XX in: 2P, BM, BW, Comp, D0, FB, FL, L1, Lat, MC, MS, MV, RFO, SL, SO, SmSt, SpSt, Syncxn, TLB.
cpu_atom (E-core) metric records; the pipeline is modeled as 5 issue slots per cycle, so slots = 5 * CPU_CLK_UNHALTED.CORE.

tma_backend_bound [Default;TopdownL1;tma_L1_group] = TOPDOWN_BE_BOUND.ALL / (5 * CPU_CLK_UNHALTED.CORE) {> 0.1} -- Counts the total number of issue slots that were not consumed by the backend due to backend stalls. Note that uops must be available for consumption in order for this event to count. If a uop is not available (IQ is empty), this event will not count.
tma_bad_speculation [Default;TopdownL1;tma_L1_group] = (5 * CPU_CLK_UNHALTED.CORE - (TOPDOWN_FE_BOUND.ALL + TOPDOWN_BE_BOUND.ALL + TOPDOWN_RETIRING.ALL)) / (5 * CPU_CLK_UNHALTED.CORE) {> 0.15} -- Counts the total number of issue slots that were not consumed by the backend because allocation is stalled due to a mispredicted jump or a machine clear. Only issue slots wasted due to fast nukes such as memory ordering nukes are counted; other nukes are not accounted for. Counts all issue slots blocked during this recovery window, including relevant microcode flows, and while uops are not yet available in the instruction queue (IQ). Also includes the issue slots that were consumed by the backend but were thrown away because they were younger than the mispredict or machine clear.
tma_branch_detect [TopdownL3;tma_L3_group;tma_ifetch_latency_group] = TOPDOWN_FE_BOUND.BRANCH_DETECT / (5 * CPU_CLK_UNHALTED.CORE) {> 0.05 & (tma_ifetch_latency > 0.15 & tma_frontend_bound > 0.2)} -- Counts the number of issue slots that were not delivered by the frontend due to BACLEARS, which occur when the Branch Target Buffer (BTB) prediction, or lack thereof, was corrected by a later branch predictor in the frontend. Includes BACLEARS due to all branch types, including conditional and unconditional jumps, returns, and indirect branches.
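The cpu_atom records on either side of this point all divide by 5 * CPU_CLK_UNHALTED.CORE, i.e. five issue slots per E-core cycle, with bad speculation derived as the remainder. A Python sketch with hypothetical counts:

    core_clks = 2_000_000_000          # CPU_CLK_UNHALTED.CORE (hypothetical)
    slots = 5 * core_clks              # 5 issue slots per cycle

    fe_bound = 3_000_000_000           # TOPDOWN_FE_BOUND.ALL (hypothetical)
    be_bound = 2_500_000_000           # TOPDOWN_BE_BOUND.ALL
    retiring = 3_500_000_000           # TOPDOWN_RETIRING.ALL

    tma_frontend_bound = fe_bound / slots
    tma_backend_bound = be_bound / slots
    # Bad speculation is whatever remains after the counted categories.
    tma_bad_speculation = (slots - (fe_bound + be_bound + retiring)) / slots

    print(f"FE {tma_frontend_bound:.1%}  BE {tma_backend_bound:.1%}  "
          f"BadSpec {tma_bad_speculation:.1%}")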
tma_branch_mispredicts [TopdownL2;tma_L2_group;tma_bad_speculation_group] = TOPDOWN_BAD_SPECULATION.MISPREDICT / (5 * CPU_CLK_UNHALTED.CORE) {> 0.05 & tma_bad_speculation > 0.15} -- Counts the number of issue slots that were not consumed by the backend due to branch mispredicts.
tma_branch_resteer [TopdownL3;tma_L3_group;tma_ifetch_latency_group] = TOPDOWN_FE_BOUND.BRANCH_RESTEER / (5 * CPU_CLK_UNHALTED.CORE) {> 0.05 & (tma_ifetch_latency > 0.15 & tma_frontend_bound > 0.2)} -- Counts the number of issue slots that were not delivered by the frontend due to BTCLEARS, which occur when the Branch Target Buffer (BTB) predicts a taken branch.
tma_cisc [TopdownL3;tma_L3_group;tma_ifetch_bandwidth_group] = TOPDOWN_FE_BOUND.CISC / (5 * CPU_CLK_UNHALTED.CORE) {> 0.05 & (tma_ifetch_bandwidth > 0.1 & tma_frontend_bound > 0.2)} -- Counts the number of issue slots that were not delivered by the frontend due to the microcode sequencer (MS).
tma_core_bound [TopdownL2;tma_L2_group;tma_backend_bound_group] = TOPDOWN_BE_BOUND.ALLOC_RESTRICTIONS / (5 * CPU_CLK_UNHALTED.CORE) {> 0.1 & tma_backend_bound > 0.1} -- Counts the number of cycles due to backend-bound stalls that are bounded by core restrictions and not attributed to outstanding loads or stores, or resource limitation.
tma_decode [TopdownL3;tma_L3_group;tma_ifetch_bandwidth_group] = TOPDOWN_FE_BOUND.DECODE / (5 * CPU_CLK_UNHALTED.CORE) {> 0.05 & (tma_ifetch_bandwidth > 0.1 & tma_frontend_bound > 0.2)} -- Counts the number of issue slots that were not delivered by the frontend due to decode stalls.
tma_fast_nuke [TopdownL3;tma_L3_group;tma_machine_clears_group] = TOPDOWN_BAD_SPECULATION.FASTNUKE / (5 * CPU_CLK_UNHALTED.CORE) {> 0.05 & (tma_machine_clears > 0.05 & tma_bad_speculation > 0.15)} -- Counts the number of issue slots that were not consumed by the backend due to a machine clear that does not require the use of microcode, classified as a fast nuke, due to memory ordering, memory disambiguation, and memory renaming.
tma_frontend_bound [Default;TopdownL1;tma_L1_group] = TOPDOWN_FE_BOUND.ALL / (5 * CPU_CLK_UNHALTED.CORE) {> 0.2} -- Counts the number of issue slots that were not consumed by the backend due to frontend stalls.
tma_icache_misses [TopdownL3;tma_L3_group;tma_ifetch_latency_group] = TOPDOWN_FE_BOUND.ICACHE / (5 * CPU_CLK_UNHALTED.CORE) {> 0.05 & (tma_ifetch_latency > 0.15 & tma_frontend_bound > 0.2)} -- Counts the number of issue slots that were not delivered by the frontend due to instruction cache misses.
tma_ifetch_bandwidth [TopdownL2;tma_L2_group;tma_frontend_bound_group] = TOPDOWN_FE_BOUND.FRONTEND_BANDWIDTH / (5 * CPU_CLK_UNHALTED.CORE) {> 0.1 & tma_frontend_bound > 0.2} -- Counts the number of issue slots that were not delivered by the frontend due to frontend bandwidth restrictions from decode, predecode, cisc, and other limitations.
tma_ifetch_latency [TopdownL2;tma_L2_group;tma_frontend_bound_group] = TOPDOWN_FE_BOUND.FRONTEND_LATENCY / (5 * CPU_CLK_UNHALTED.CORE) {> 0.15 & tma_frontend_bound > 0.2} -- Counts the number of issue slots that were not delivered by the frontend due to frontend latency restrictions from icache misses, itlb misses, branch detection, and resteer limitations.
tma_info_bottleneck_%_dtlb_miss_bound_cycles = 100 * (LD_HEAD.DTLB_MISS_AT_RET + LD_HEAD.PGWALK_AT_RET) / CPU_CLK_UNHALTED.CORE -- Percentage of time that retirement is stalled due to a first-level data TLB miss.
tma_info_bottleneck_%_ifetch_miss_bound_cycles [Ifetch] = 100 * MEM_BOUND_STALLS.IFETCH / CPU_CLK_UNHALTED.CORE -- Percentage of time that allocation and retirement is stalled by the Frontend Cluster due to an Ifetch Miss, either an Icache or ITLB Miss. See Info.Ifetch_Bound.
tma_info_bottleneck_%_load_miss_bound_cycles [Load_Store_Miss] = 100 * MEM_BOUND_STALLS.LOAD / CPU_CLK_UNHALTED.CORE -- Percentage of time that retirement is stalled due to an L1 miss. See Info.Load_Miss_Bound.
tma_info_bottleneck_%_mem_exec_bound_cycles [Mem_Exec] = 100 * LD_HEAD.ANY_AT_RET / CPU_CLK_UNHALTED.CORE -- Percentage of time that retirement is stalled by the Memory Cluster due to a pipeline stall. See Info.Mem_Exec_Bound.
tma_info_br_inst_mix_ipbranch = INST_RETIRED.ANY / BR_INST_RETIRED.ALL_BRANCHES -- Instructions per Branch (lower number means higher occurrence rate).
tma_info_br_inst_mix_ipcall = INST_RETIRED.ANY / BR_INST_RETIRED.CALL -- Instructions per (near) call (lower number means higher occurrence rate).
tma_info_br_inst_mix_ipfarbranch = INST_RETIRED.ANY / BR_INST_RETIRED.FAR_BRANCH:u -- Instructions per Far Branch (Far Branches apply upon transition from application to operating system, handling interrupts, exceptions) [lower number means higher occurrence rate].
tma_info_br_inst_mix_ipmisp_cond_ntaken = INST_RETIRED.ANY / (BR_MISP_RETIRED.COND - BR_MISP_RETIRED.COND_TAKEN) -- Instructions per retired conditional Branch Misprediction where the branch was not taken.
tma_info_br_inst_mix_ipmisp_cond_taken = INST_RETIRED.ANY / BR_MISP_RETIRED.COND_TAKEN -- Instructions per retired conditional Branch Misprediction where the branch was taken.
tma_info_br_inst_mix_ipmisp_indirect = INST_RETIRED.ANY / BR_MISP_RETIRED.INDIRECT -- Instructions per retired indirect call or jump Branch Misprediction.
tma_info_br_inst_mix_ipmisp_ret = INST_RETIRED.ANY / BR_MISP_RETIRED.RETURN -- Instructions per retired return Branch Misprediction.
tma_info_br_inst_mix_ipmispredict = INST_RETIRED.ANY / BR_MISP_RETIRED.ALL_BRANCHES -- Instructions per retired Branch Misprediction.
tma_info_br_mispredict_bound_branch_mispredict_ratio = BR_MISP_RETIRED.ALL_BRANCHES / BR_INST_RETIRED.ALL_BRANCHES -- Ratio of all branches which mispredict.
tma_info_br_mispredict_bound_branch_mispredict_to_unknown_branch_ratio = BR_MISP_RETIRED.ALL_BRANCHES / BACLEARS.ANY -- Ratio between mispredicted branches and unknown branches.
tma_info_buffer_stalls_%_load_buffer_stall_cycles = 100 * MEM_SCHEDULER_BLOCK.LD_BUF / CPU_CLK_UNHALTED.CORE -- Percentage of time that allocation is stalled due to load buffer full.
tma_info_buffer_stalls_%_mem_rsv_stall_cycles = 100 * MEM_SCHEDULER_BLOCK.RSV / CPU_CLK_UNHALTED.CORE -- Percentage of time that allocation is stalled due to memory reservation stations full.
tma_info_buffer_stalls_%_store_buffer_stall_cycles = 100 * MEM_SCHEDULER_BLOCK.ST_BUF / CPU_CLK_UNHALTED.CORE -- Percentage of time that allocation is stalled due to store buffer full.
tma_info_core_cpi = CPU_CLK_UNHALTED.CORE / INST_RETIRED.ANY -- Cycles Per Instruction.
tma_info_core_ipc = INST_RETIRED.ANY / CPU_CLK_UNHALTED.CORE -- Instructions Per Cycle.
tma_info_core_upi = UOPS_RETIRED.ALL / INST_RETIRED.ANY -- Uops Per Instruction.
tma_info_ifetch_miss_bound_%_ifetchmissbound_with_l2hit = 100 * MEM_BOUND_STALLS.IFETCH_L2_HIT / MEM_BOUND_STALLS.IFETCH -- Percentage of ifetch miss bound stalls where the ifetch miss hits in the L2.
tma_info_ifetch_miss_bound_%_ifetchmissbound_with_l3hit = 100 * MEM_BOUND_STALLS.IFETCH_LLC_HIT / MEM_BOUND_STALLS.IFETCH -- Percentage of ifetch miss bound stalls where the ifetch miss hits in the L3.
tma_info_ifetch_miss_bound_%_ifetchmissbound_with_l3miss = 100 * MEM_BOUND_STALLS.IFETCH_DRAM_HIT / MEM_BOUND_STALLS.IFETCH -- Percentage of ifetch miss bound stalls where the ifetch miss subsequently misses in the L3.
tma_info_load_miss_bound_%_loadmissbound_with_l2hit [load_store_bound] = 100 * MEM_BOUND_STALLS.LOAD_L2_HIT / MEM_BOUND_STALLS.LOAD -- Percentage of memory bound stalls where retirement is stalled due to an L1 miss that hit the L2.
tma_info_load_miss_bound_%_loadmissbound_with_l3hit [load_store_bound] = 100 * MEM_BOUND_STALLS.LOAD_LLC_HIT / MEM_BOUND_STALLS.LOAD -- Percentage of memory bound stalls where retirement is stalled due to an L1 miss that hit the L3.
tma_info_load_miss_bound_%_loadmissbound_with_l3miss [load_store_bound] = 100 * MEM_BOUND_STALLS.LOAD_DRAM_HIT / MEM_BOUND_STALLS.LOAD -- Percentage of memory bound stalls where retirement is stalled due to an L1 miss that subsequently misses the L3.
tma_info_load_store_bound_l1_bound [load_store_bound] = 100 * LD_HEAD.L1_BOUND_AT_RET / CPU_CLK_UNHALTED.CORE -- Counts the number of cycles that the oldest load of the load buffer is stalled at retirement due to a pipeline block.
tma_info_load_store_bound_load_bound [load_store_bound] = 100 * (LD_HEAD.L1_BOUND_AT_RET + MEM_BOUND_STALLS.LOAD) / CPU_CLK_UNHALTED.CORE -- Counts the number of cycles that the oldest load of the load buffer is stalled at retirement.
tma_info_load_store_bound_store_bound [load_store_bound] = 100 * (MEM_SCHEDULER_BLOCK.ST_BUF / MEM_SCHEDULER_BLOCK.ALL) * tma_mem_scheduler -- Counts the number of cycles the core is stalled due to store buffer full.
tma_info_machine_clear_bound_machine_clears_disamb_pki = 1e3 * MACHINE_CLEARS.DISAMBIGUATION / INST_RETIRED.ANY -- Counts the number of machine clears per thousand instructions retired, due to memory disambiguation.
tma_info_machine_clear_bound_machine_clears_fp_assist_pki = 1e3 * MACHINE_CLEARS.FP_ASSIST / INST_RETIRED.ANY -- Counts the number of machine clears per thousand instructions retired, due to floating point assists.
tma_info_machine_clear_bound_machine_clears_monuke_pki = 1e3 * MACHINE_CLEARS.MEMORY_ORDERING / INST_RETIRED.ANY -- Counts the number of machine clears per thousand instructions retired, due to memory ordering.
tma_info_machine_clear_bound_machine_clears_mrn_pki = 1e3 * MACHINE_CLEARS.MRN_NUKE / INST_RETIRED.ANY -- Counts the number of machine clears per thousand instructions retired, due to memory renaming.
tma_info_machine_clear_bound_machine_clears_page_fault_pki = 1e3 * MACHINE_CLEARS.PAGE_FAULT / INST_RETIRED.ANY -- Counts the number of machine clears per thousand instructions retired, due to page faults.
tma_info_machine_clear_bound_machine_clears_smc_pki = 1e3 * MACHINE_CLEARS.SMC / INST_RETIRED.ANY -- Counts the number of machine clears per thousand instructions retired, due to self-modifying code.
tma_info_mem_exec_blocks_%_loads_with_adressaliasing = 100 * LD_BLOCKS.4K_ALIAS / MEM_UOPS_RETIRED.ALL_LOADS -- Percentage of total non-speculative loads with an address aliasing block.
tma_info_mem_exec_blocks_%_loads_with_storefwdblk = 100 * LD_BLOCKS.DATA_UNKNOWN / MEM_UOPS_RETIRED.ALL_LOADS -- Percentage of total non-speculative loads with a store forward block.
non-speculative loads with a store forward or unknown store address block00tma_info_mem_exec_bound_%_loadhead_with_l1miss100 * LD_HEAD.L1_MISS_AT_RET / LD_HEAD.ANY_AT_RETPercentage of Memory Execution Bound due to a first level data cache miss00tma_info_mem_exec_bound_%_loadhead_with_otherpipelineblks100 * LD_HEAD.OTHER_AT_RET / LD_HEAD.ANY_AT_RETPercentage of Memory Execution Bound due to other block cases, such as pipeline conflicts, fences, etc00tma_info_mem_exec_bound_%_loadhead_with_pagewalk100 * LD_HEAD.PGWALK_AT_RET / LD_HEAD.ANY_AT_RETPercentage of Memory Execution Bound due to a pagewalk00tma_info_mem_exec_bound_%_loadhead_with_stlbhit100 * LD_HEAD.DTLB_MISS_AT_RET / LD_HEAD.ANY_AT_RETPercentage of Memory Execution Bound due to a second level TLB miss00tma_info_mem_exec_bound_%_loadhead_with_storefwding100 * LD_HEAD.ST_ADDR_AT_RET / LD_HEAD.ANY_AT_RETPercentage of Memory Execution Bound due to a store forward address match00tma_info_mem_mix_iploadINST_RETIRED.ANY / MEM_UOPS_RETIRED.ALL_LOADSInstructions per Load00tma_info_mem_mix_ipstoreINST_RETIRED.ANY / MEM_UOPS_RETIRED.ALL_STORESInstructions per Store00tma_info_mem_mix_load_locks_ratio100 * MEM_UOPS_RETIRED.LOCK_LOADS / MEM_UOPS_RETIRED.ALL_LOADSPercentage of total non-speculative loads that perform one or more locks00tma_info_mem_mix_load_splits_ratio100 * MEM_UOPS_RETIRED.SPLIT_LOADS / MEM_UOPS_RETIRED.ALL_LOADSPercentage of total non-speculative loads that are splits00tma_info_mem_mix_memload_ratio1e3 * MEM_UOPS_RETIRED.ALL_LOADS / UOPS_RETIRED.ALLRatio of mem load uops to all uops00tma_info_serialization _%_tpause_cycles100 * SERIALIZATION.C01_MS_SCB / (5 * CPU_CLK_UNHALTED.CORE)Percentage of time that the core is stalled due to a TPAUSE or UMWAIT instruction00tma_info_system_cpu_utilizationCPU_CLK_UNHALTED.REF_TSC / TSCAverage CPU Utilization00tma_info_system_kernel_utilizationSummarycpu@CPU_CLK_UNHALTED.CORE_P@k / CPU_CLK_UNHALTED.COREFraction of cycles spent in Kernel mode00tma_info_system_turbo_utilizationPowerCPU_CLK_UNHALTED.CORE / CPU_CLK_UNHALTED.REF_TSCAverage Frequency Utilization relative nominal frequency00tma_info_uop_mix_fpdiv_uop_ratio100 * UOPS_RETIRED.FPDIV / UOPS_RETIRED.ALLPercentage of all uops which are FPDiv uops00tma_info_uop_mix_idiv_uop_ratio100 * UOPS_RETIRED.IDIV / UOPS_RETIRED.ALLPercentage of all uops which are IDiv uops00tma_info_uop_mix_microcode_uop_ratio100 * UOPS_RETIRED.MS / UOPS_RETIRED.ALLPercentage of all uops which are microcode ops00tma_info_uop_mix_x87_uop_ratio100 * UOPS_RETIRED.X87 / UOPS_RETIRED.ALLPercentage of all uops which are x87 uops00tma_itlb_missesTopdownL3;tma_L3_group;tma_ifetch_latency_groupTOPDOWN_FE_BOUND.ITLB / (5 * CPU_CLK_UNHALTED.CORE)tma_itlb_misses > 0.05 & (tma_ifetch_latency > 0.15 & tma_frontend_bound > 0.2)Counts the number of issue slots that were not delivered by the frontend due to Instruction Table Lookaside Buffer (ITLB) misses100%00tma_machine_clearsTopdownL2;tma_L2_group;tma_bad_speculation_groupTOPDOWN_BAD_SPECULATION.MACHINE_CLEARS / (5 * CPU_CLK_UNHALTED.CORE)tma_machine_clears > 0.05 & tma_bad_speculation > 0.15Counts the total number of issue slots that were not consumed by the backend because allocation is stalled due to a machine clear (nuke) of any kind including memory ordering and memory disambiguation100%TopdownL200tma_mem_schedulerTopdownL3;tma_L3_group;tma_resource_bound_groupTOPDOWN_BE_BOUND.MEM_SCHEDULER / (5 * CPU_CLK_UNHALTED.CORE)tma_mem_scheduler > 0.1 & (tma_resource_bound > 0.2 & tma_backend_bound > 0.1)Counts the number of issue 
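Read as perf expressions, each tma_* entry above is just a named fraction of issue slots. A minimal sketch of the arithmetic, using hypothetical counter values rather than anything read from this module:

# Sketch of the E-core topdown slot fractions; counter values are hypothetical.
counters = {
    "CPU_CLK_UNHALTED.CORE": 1_000_000,
    "TOPDOWN_FE_BOUND.ALL": 1_200_000,
    "TOPDOWN_BAD_SPECULATION.MISPREDICT": 250_000,
}

slots = 5 * counters["CPU_CLK_UNHALTED.CORE"]  # 5 issue slots per core cycle

tma_frontend_bound = counters["TOPDOWN_FE_BOUND.ALL"] / slots
tma_branch_mispredicts = counters["TOPDOWN_BAD_SPECULATION.MISPREDICT"] / slots

# The "100%" scale unit means the fraction is reported as a percentage.
print(f"tma_frontend_bound     = {100 * tma_frontend_bound:.1f}%")   # 24.0%
print(f"tma_branch_mispredicts = {100 * tma_branch_mispredicts:.1f}%")  # 5.0%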
tma_non_mem_scheduler (TopdownL3; resource_bound group) = TOPDOWN_BE_BOUND.NON_MEM_SCHEDULER / slots -- issue slots not consumed due to IEC or FPC RAT stalls; these can be FIQ or IEC reservation stalls in which the integer, floating-point, or SIMD scheduler cannot accept uops [100%; flag > 0.1]
tma_nuke (TopdownL3; machine_clears group) = TOPDOWN_BAD_SPECULATION.NUKE / slots -- issue slots not consumed due to a machine clear that requires microcode (a "slow nuke") [100%; flag > 0.05]
tma_other_fb (TopdownL3; ifetch_bandwidth group) = TOPDOWN_FE_BOUND.OTHER / slots -- issue slots not delivered due to other, uncategorized common frontend stalls [100%; flag > 0.05]
tma_predecode (TopdownL3; ifetch_bandwidth group) = TOPDOWN_FE_BOUND.PREDECODE / slots -- issue slots not delivered due to wrong predecodes [100%; flag > 0.05]
tma_register (TopdownL3; resource_bound group) = TOPDOWN_BE_BOUND.REGISTER / slots -- issue slots not consumed because the physical register file cannot accept an entry (marble stalls) [100%; flag > 0.1]
tma_reorder_buffer (TopdownL3; resource_bound group) = TOPDOWN_BE_BOUND.REORDER_BUFFER / slots -- issue slots not consumed because the reorder buffer is full (ROB stalls) [100%; flag > 0.1]
tma_retiring (TopdownL1; Default) = TOPDOWN_RETIRING.ALL / slots -- issue slots that result in retirement slots [100%; flag > 0.75]
tma_serialization (TopdownL3; resource_bound group) = TOPDOWN_BE_BOUND.SERIALIZATION / slots -- issue slots not consumed due to scoreboards from the instruction queue (IQ), jump execution unit (JEU), or microcode sequencer (MS) [100%; flag > 0.1]

[Embedded perf metric table -- AMD core metrics, first model variant]
branch_misprediction_ratio (branch_prediction) = d_ratio(ex_ret_brn_misp, ex_ret_brn) -- execution-time branch misprediction ratio, non-speculative [100%]
all_l2_cache_accesses (l2_cache) = l2_request_g1.all_no_prefetch + l2_pf_hit_l2 + l2_pf_miss_l2_hit_l3 + l2_pf_miss_l2_l3
l2_cache_accesses_from_l2_hwpf (l2_cache) = l2_pf_hit_l2 + l2_pf_miss_l2_hit_l3 + l2_pf_miss_l2_l3
all_l2_cache_misses (l2_cache) = l2_cache_req_stat.ic_dc_miss_in_l2 + l2_pf_miss_l2_hit_l3 + l2_pf_miss_l2_l3
l2_cache_misses_from_l2_hwpf (l2_cache) = l2_pf_miss_l2_hit_l3 + l2_pf_miss_l2_l3
all_l2_cache_hits (l2_cache) = l2_cache_req_stat.ic_dc_hit_in_l2 + l2_pf_hit_l2
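The AMD metrics that follow lean on perf's d_ratio() helper. As I understand perf's metric-expression language, it divides but yields 0 instead of failing when the denominator is 0; a tiny Python stand-in for reading these formulas:

def d_ratio(num: float, den: float) -> float:
    """Division as in perf metric expressions: 0 when the denominator is 0."""
    return num / den if den else 0.0

# e.g. branch_misprediction_ratio = d_ratio(ex_ret_brn_misp, ex_ret_brn)
print(d_ratio(12_500, 1_000_000))  # 0.0125 -> 1.25% under the 100% scale unit
print(d_ratio(0, 0))               # 0.0, no ZeroDivisionError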
l3_read_miss_latency (l3_cache) = xi_sys_fill_latency * 16 / xi_ccx_sdp_req1.all_l3_miss_req_typs -- average L3 read miss latency [core clocks]
ic_fetch_miss_ratio (l2_cache) = d_ratio(l2_cache_req_stat.ic_access_in_l2, bp_l1_tlb_fetch_hit + bp_l1_tlb_miss_l2_hit + bp_l1_tlb_miss_l2_miss) -- L1 instruction cache (32B) fetch miss ratio [100%]
l1_itlb_misses (tlb) = bp_l1_tlb_miss_l2_hit + bp_l1_tlb_miss_l2_miss
all_remote_links_outbound (data_fabric) = sum of remote_outbound_data_controller_{0..3} -- approximate outbound data bytes for all remote links of a node (die) [3e-5 MiB]
nps1_die_to_dram (data_fabric) = sum of dram_channel_data_controller_{0..7} -- approximate combined DRAM bytes of all channels on an NPS1 node (die) [6.1e-5 MiB]

[Second model variant]
ic_fetch_miss_ratio (l2_cache) = d_ratio(l2_cache_req_stat.ic_access_in_l2, bp_l1_tlb_fetch_hit + bp_l1_tlb_miss_l2_hit + bp_l1_tlb_miss_l2_tlb_miss) -- L1 instruction cache (32B) fetch miss ratio [100%]
l1_itlb_misses (tlb) = bp_l1_tlb_miss_l2_hit + bp_l1_tlb_miss_l2_tlb_miss
l2_cache_misses_from_l2_hwpf (l2_cache) = l2_pf_miss_l2_hit_l3 + l2_pf_miss_l2_l3
l3_read_miss_latency (l3_cache) = xi_sys_fill_latency * 16 / xi_ccx_sdp_req1 -- average L3 read miss latency [core clocks]
op_cache_fetch_miss_ratio (l2_cache) = d_ratio(op_cache_hit_miss.op_cache_miss, op_cache_hit_miss.all_op_cache_accesses) -- op cache (64B) fetch miss ratio

[Third model variant]
ic_fetch_miss_ratio (l2_cache) = d_ratio(ic_tag_hit_miss.instruction_cache_miss, ic_tag_hit_miss.all_instruction_cache_accesses) -- instruction cache (32B) fetch miss ratio [100%]
l1_itlb_misses (tlb) = bp_l1_tlb_miss_l2_tlb_hit + bp_l1_tlb_miss_l2_tlb_miss
macro_ops_dispatched (decoder) = de_dis_cops_from_decoder.disp_op_type.any_integer_dispatch + de_dis_cops_from_decoder.disp_op_type.any_fp_dispatch

[AMD pipeline topdown, 6-wide dispatch]
total_dispatch_slots = 6 * ls_not_halted_cyc -- up to 6 instructions can be dispatched each cycle
frontend_bound (PipelineL1) = d_ratio(de_no_dispatch_per_slot.no_ops_from_frontend, total_dispatch_slots) -- dispatch slots unused because the frontend did not supply enough instructions/ops [100%]
bad_speculation (PipelineL1) = d_ratio(de_src_op_disp.all - ex_ret_ops, total_dispatch_slots) -- dispatched ops that did not retire [100%]
backend_bound (PipelineL1) = d_ratio(de_no_dispatch_per_slot.backend_stalls, total_dispatch_slots) -- dispatch slots unused because of backend stalls [100%]
smt_contention (PipelineL1) = d_ratio(de_no_dispatch_per_slot.smt_contention, total_dispatch_slots) -- dispatch slots unused because the other thread was selected [100%]
retiring (PipelineL1) = d_ratio(ex_ret_ops, total_dispatch_slots) -- dispatch slots used by ops that retired [100%]
frontend_bound_latency (PipelineL2; frontend_bound group) = d_ratio(6 * cpu@de_no_dispatch_per_slot.no_ops_from_frontend\,cmask\=0x6@, total_dispatch_slots) -- frontend latency bottleneck, e.g. instruction cache or TLB misses [100%]
frontend_bound_bandwidth (PipelineL2; frontend_bound group) = d_ratio(de_no_dispatch_per_slot.no_ops_from_frontend - 6 * cpu@de_no_dispatch_per_slot.no_ops_from_frontend\,cmask\=0x6@, total_dispatch_slots) -- frontend bandwidth bottleneck, e.g. decode or op cache fetch bandwidth [100%]
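Assuming the dispatch-slot counters are consistent (undispatched-slot counts plus dispatched ops equal total slots), the five PipelineL1 buckets above partition every slot. A sketch with hypothetical event values:

# Sketch of the 6-wide Zen dispatch-slot accounting; values are hypothetical.
ls_not_halted_cyc = 2_000_000
no_ops_from_frontend = 1_500_000
backend_stalls = 2_500_000
smt_contention_slots = 500_000
de_src_op_disp_all = 7_500_000   # dispatched ops
ex_ret_ops = 7_000_000           # retired ops

total_dispatch_slots = 6 * ls_not_halted_cyc

frontend_bound = no_ops_from_frontend / total_dispatch_slots
backend_bound = backend_stalls / total_dispatch_slots
smt_contention = smt_contention_slots / total_dispatch_slots
bad_speculation = (de_src_op_disp_all - ex_ret_ops) / total_dispatch_slots
retiring = ex_ret_ops / total_dispatch_slots

# With consistent counters the five buckets account for every slot.
total = frontend_bound + backend_bound + smt_contention + bad_speculation + retiring
print(f"{total:.3f}")  # 1.000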
bad_speculation_mispredicts (PipelineL2; bad_speculation group) = d_ratio(bad_speculation * ex_ret_brn_misp, ex_ret_brn_misp + resyncs_or_nc_redirects) -- dispatched ops flushed due to branch mispredicts [100%]
bad_speculation_pipeline_restarts (PipelineL2; bad_speculation group) = d_ratio(bad_speculation * resyncs_or_nc_redirects, ex_ret_brn_misp + resyncs_or_nc_redirects) -- dispatched ops flushed due to pipeline restarts (resyncs) [100%]
backend_bound_memory (PipelineL2; backend_bound group) = backend_bound * d_ratio(ex_no_retire.load_not_complete, ex_no_retire.not_complete) -- backend stalls due to the memory subsystem [100%]
backend_bound_cpu (PipelineL2; backend_bound group) = backend_bound * (1 - d_ratio(ex_no_retire.load_not_complete, ex_no_retire.not_complete)) -- backend stalls not related to the memory subsystem [100%]
retiring_fastpath (PipelineL2; retiring group) = retiring * (1 - d_ratio(ex_ret_ucode_ops, ex_ret_ops)) -- dispatch slots used by fastpath ops that retired [100%]
retiring_microcode (PipelineL2; retiring group) = retiring * d_ratio(ex_ret_ucode_ops, ex_ret_ops) -- dispatch slots used by microcode ops that retired [100%]

[AMD core metrics, next model variant, with subevent-qualified L2/L3 counters]
branch_misprediction_ratio (branch_prediction) = d_ratio(ex_ret_brn_misp, ex_ret_brn) -- execution-time branch misprediction ratio, non-speculative [100%]
all_l2_cache_accesses (l2_cache) = l2_request_g1.all_no_prefetch + l2_pf_hit_l2.all + l2_pf_miss_l2_hit_l3.all + l2_pf_miss_l2_l3.all
l2_cache_accesses_from_l1_ic_misses (l2_cache) = l2_request_g1.cacheable_ic_read -- including prefetch
l2_cache_accesses_from_l1_dc_misses (l2_cache) = l2_request_g1.all_dc -- including prefetch
l2_cache_accesses_from_l2_hwpf (l2_cache) = l2_pf_hit_l2.all + l2_pf_miss_l2_hit_l3.all + l2_pf_miss_l2_l3.all -- L2 accesses from the L2 hardware prefetcher
all_l2_cache_misses (l2_cache) = l2_cache_req_stat.ic_dc_miss_in_l2 + l2_pf_miss_l2_hit_l3.all + l2_pf_miss_l2_l3.all
l2_cache_misses_from_l1_ic_miss (l2_cache) = l2_cache_req_stat.ic_fill_miss
l2_cache_misses_from_l1_dc_miss (l2_cache) = l2_cache_req_stat.ls_rd_blk_c
l2_cache_misses_from_l2_hwpf (l2_cache) = l2_pf_miss_l2_hit_l3.all + l2_pf_miss_l2_l3.all
all_l2_cache_hits (l2_cache) = l2_cache_req_stat.ic_dc_hit_in_l2 + l2_pf_hit_l2.all
l2_cache_hits_from_l1_ic_miss (l2_cache) = l2_cache_req_stat.ic_hit_in_l2
l2_cache_hits_from_l1_dc_miss (l2_cache) = l2_cache_req_stat.dc_hit_in_l2
l2_cache_hits_from_l2_hwpf (l2_cache) = l2_pf_hit_l2.all
l3_cache_accesses (l3_cache) = l3_lookup_state.all_coherent_accesses_to_l3
l3_misses (l3_cache) = l3_lookup_state.l3_miss -- including cacheline state change requests
l3_read_miss_latency (l3_cache) = l3_xi_sampled_latency.all * 10 / l3_xi_sampled_latency_requests.all -- average L3 read miss latency [core clocks]
op_cache_fetch_miss_ratio = d_ratio(op_cache_hit_miss.op_cache_miss, op_cache_hit_miss.all_op_cache_accesses) -- op cache miss ratio for all fetches [100%]
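The two bad_speculation_* level-2 entries above split the level-1 bad_speculation fraction proportionally between mispredicted branches and resyncs. A worked sketch with hypothetical counts:

# Proportional attribution of bad speculation, mirroring the formulas above.
bad_speculation = 0.10           # level-1 fraction (hypothetical)
ex_ret_brn_misp = 9_000_000      # retired mispredicted branches
resyncs_or_nc_redirects = 1_000_000

total = ex_ret_brn_misp + resyncs_or_nc_redirects
bad_speculation_mispredicts = bad_speculation * ex_ret_brn_misp / total
bad_speculation_pipeline_restarts = bad_speculation * resyncs_or_nc_redirects / total
print(bad_speculation_mispredicts, bad_speculation_pipeline_restarts)  # 0.09 0.01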
ic_fetch_miss_ratio = d_ratio(ic_tag_hit_miss.instruction_cache_miss, ic_tag_hit_miss.all_instruction_cache_accesses) -- instruction cache miss ratio for all fetches; a miss is not counted here if it is an op-cache hit [100%]
l1_data_cache_fills_from_memory (l1_dcache) = ls_any_fills_from_sys.dram_io_all -- fills from DRAM or MMIO in any NUMA node
l1_data_cache_fills_from_remote_node (l1_dcache) = ls_any_fills_from_sys.far_all -- fills from a different NUMA node
l1_data_cache_fills_from_same_ccx (l1_dcache) = ls_any_fills_from_sys.local_all -- fills from within the same CCX
l1_data_cache_fills_from_different_ccx (l1_dcache) = ls_any_fills_from_sys.remote_cache -- fills from another CCX cache in any NUMA node
all_l1_data_cache_fills (l1_dcache) = ls_any_fills_from_sys.all
l1_demand_data_cache_fills_from_local_l2 (l1_dcache) = ls_dmnd_fills_from_sys.local_l2 -- demand fills from the local L2 cache
l1_demand_data_cache_fills_from_same_ccx (l1_dcache) = ls_dmnd_fills_from_sys.local_ccx -- demand fills from within the same CCX
l1_demand_data_cache_fills_from_near_cache (l1_dcache) = ls_dmnd_fills_from_sys.near_cache -- demand fills from another CCX cache in the same NUMA node
l1_demand_data_cache_fills_from_near_memory (l1_dcache) = ls_dmnd_fills_from_sys.dram_io_near -- demand fills from DRAM or MMIO in the same NUMA node
l1_demand_data_cache_fills_from_far_cache (l1_dcache) = ls_dmnd_fills_from_sys.far_cache -- demand fills from another CCX cache in a different NUMA node
l1_demand_data_cache_fills_from_far_memory (l1_dcache) = ls_dmnd_fills_from_sys.dram_io_far -- demand fills from DRAM or MMIO in a different NUMA node
l1_itlb_misses (tlb) = bp_l1_tlb_miss_l2_tlb_hit + bp_l1_tlb_miss_l2_tlb_miss.all
l2_itlb_misses (tlb) = bp_l1_tlb_miss_l2_tlb_miss.all -- L2 instruction TLB misses and instruction page walks
l1_dtlb_misses (tlb) = ls_l1_d_tlb_miss.all
l2_dtlb_misses (tlb) = ls_l1_d_tlb_miss.all_l2_miss -- L2 data TLB misses and data page walks
all_tlbs_flushed (tlb) = ls_tlb_flush.all
macro_ops_dispatched (decoder) = de_src_op_disp.all
sse_avx_stalls = fp_disp_faults.sse_avx_all -- mixed SSE/AVX stalls
macro_ops_retired = ex_ret_ops

[Data-fabric byte counters -- each metric sums per-controller "data beats"; the 6.103515625e-5 MiB scale unit converts 64-byte beats to MiB]
dram_read_data_for_local_processor (data_fabric) = sum of local_processor_read_data_beats_cs{0..11}
dram_write_data_for_local_processor (data_fabric) = sum of local_processor_write_data_beats_cs{0..11}
dram_read_data_for_remote_processor (data_fabric) = sum of remote_processor_read_data_beats_cs{0..11}
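The scale constants on these data-fabric metrics are exact byte-per-beat conversions: 6.103515625e-5 is 64/2^20 (64-byte beats), and 3.0517578125e-5, used by the inbound-CCM metrics below, is 32/2^20. A quick check:

# The ScaleUnit constants are byte-per-beat conversions into MiB.
MIB = 1 << 20
print(64 / MIB)   # 6.103515625e-05  -> the 64-byte-beat metrics
print(32 / MIB)   # 3.0517578125e-05 -> the inbound 32-byte-beat metrics

def beats_to_mib(total_beats: int, bytes_per_beat: int = 64) -> float:
    """Convert a summed beat count to MiB, as the scale units do."""
    return total_beats * bytes_per_beat / MIB

print(beats_to_mib(1_000_000))  # ~61.04 MiB for a million 64-byte beats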
dram_write_data_for_remote_processor (data_fabric) = sum of remote_processor_write_data_beats_cs{0..11} [6.103515625e-5 MiB]
local_socket_upstream_dma_read_data (data_fabric) = sum of local_socket_upstream_read_beats_iom{0..3} [6.103515625e-5 MiB]
local_socket_upstream_dma_write_data (data_fabric) = sum of local_socket_upstream_write_beats_iom{0..3} [6.103515625e-5 MiB]
remote_socket_upstream_dma_read_data (data_fabric) = sum of remote_socket_upstream_read_beats_iom{0..3} [6.103515625e-5 MiB]
remote_socket_upstream_dma_write_data (data_fabric) = sum of remote_socket_upstream_write_beats_iom{0..3} [6.103515625e-5 MiB]
local_socket_inbound_data_to_cpu (data_fabric) = sum of local_socket_inf{0,1}_inbound_data_beats_ccm{0..7} -- inbound data to the CPU, e.g. read data [3.0517578125e-5 MiB]
local_socket_outbound_data_from_cpu (data_fabric) = sum of local_socket_inf{0,1}_outbound_data_beats_ccm{0..7} -- outbound data from the CPU, e.g. write data [6.103515625e-5 MiB]
remote_socket_inbound_data_to_cpu (data_fabric) = sum of remote_socket_inf{0,1}_inbound_data_beats_ccm{0..7} -- inbound data to the CPU, e.g. read data [3.0517578125e-5 MiB]
remote_socket_outbound_data_from_cpu (data_fabric) = sum of remote_socket_inf{0,1}_outbound_data_beats_ccm{0..7} -- outbound data from the CPU, e.g. write data [6.103515625e-5 MiB]
local_socket_outbound_data_from_all_links (data_fabric) = sum of local_socket_outbound_data_beats_link{0..7} -- outbound data from all links, local socket [6.103515625e-5 MiB]

[UMC (memory controller) metrics]
umc_data_bus_utilization (memory_controller) = d_ratio(umc_data_slot_clks.all / 2, umc_mem_clk) -- data bus utilization [100%]
umc_cas_cmd_rate (memory_controller) = d_ratio(umc_cas_cmd.all * 1e3, umc_mem_clk) -- CAS command rate
umc_cas_cmd_read_ratio (memory_controller) = d_ratio(umc_cas_cmd.rd, umc_cas_cmd.all) -- share of CAS commands that are reads [100%]
umc_cas_cmd_write_ratio (memory_controller) = d_ratio(umc_cas_cmd.wr, umc_cas_cmd.all) -- share of CAS commands that are writes [100%]
umc_mem_read_bandwidth (memory_controller) = umc_cas_cmd.rd * 64 / 1e6 / duration_time -- estimated memory read bandwidth [MB/s]
umc_mem_write_bandwidth (memory_controller) = umc_cas_cmd.wr * 64 / 1e6 / duration_time -- estimated memory write bandwidth [MB/s]
umc_mem_bandwidth (memory_controller) = umc_cas_cmd.all * 64 / 1e6 / duration_time -- estimated combined memory bandwidth [MB/s]
umc_activate_cmd_rate (memory_controller) = d_ratio(umc_act_cmd.all * 1e3, umc_mem_clk) -- ACTIVATE command rate
umc_precharge_cmd_rate (memory_controller) = d_ratio(umc_pchg_cmd.all * 1e3, umc_mem_clk) -- PRECHARGE command rate

[AMD pipeline topdown, newer model variant, 8-wide dispatch; descriptions as in the 6-wide table above, reported as percentages of slots or ops]
total_dispatch_slots = 8 * ls_not_halted_cyc -- up to 8 instructions can be dispatched each cycle [slots]
frontend_bound (PipelineL1) = d_ratio(de_no_dispatch_per_slot.no_ops_from_frontend, total_dispatch_slots) [100%; slots]
bad_speculation (PipelineL1) = d_ratio(de_src_op_disp.all - ex_ret_ops, total_dispatch_slots) [100%; ops]
backend_bound (PipelineL1) = d_ratio(de_no_dispatch_per_slot.backend_stalls, total_dispatch_slots) [100%; slots]
smt_contention (PipelineL1) = d_ratio(de_no_dispatch_per_slot.smt_contention, total_dispatch_slots) [100%; slots]
retiring (PipelineL1) = d_ratio(ex_ret_ops, total_dispatch_slots) [100%; slots]
frontend_bound_by_latency (PipelineL2; frontend_bound group) = d_ratio(8 * cpu@de_no_dispatch_per_slot.no_ops_from_frontend\,cmask\=0x8@, total_dispatch_slots) -- frontend latency bottleneck, e.g. instruction cache or TLB misses [100%; slots]
frontend_bound_by_bandwidth (PipelineL2; frontend_bound group) = d_ratio(de_no_dispatch_per_slot.no_ops_from_frontend - 8 * cpu@de_no_dispatch_per_slot.no_ops_from_frontend\,cmask\=0x8@, total_dispatch_slots) -- frontend bandwidth bottleneck, e.g. decode or op cache fetch bandwidth [100%; slots]
bad_speculation_from_mispredicts (PipelineL2; bad_speculation group) = d_ratio(bad_speculation * ex_ret_brn_misp, ex_ret_brn_misp + bp_redirects.resync) -- ops flushed due to branch mispredicts [100%; ops]
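The umc_mem_*_bandwidth estimates above follow from each CAS command transferring one 64-byte burst, so the formula is commands times 64 bytes over the elapsed time. A sketch with a hypothetical sample window:

# Sketch of the UMC bandwidth estimate: CAS commands * 64 B / elapsed time.
def umc_bandwidth_mbps(cas_cmds: int, duration_s: float) -> float:
    return cas_cmds * 64 / 1e6 / duration_s

# Hypothetical sample: 1.5e9 read CAS commands over a 1-second window.
print(f"{umc_bandwidth_mbps(1_500_000_000, 1.0):.0f} MB/s")  # 96000 MB/s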
bad_speculation_from_pipeline_restarts (PipelineL2; bad_speculation group) = d_ratio(bad_speculation * bp_redirects.resync, ex_ret_brn_misp + bp_redirects.resync) -- ops flushed due to pipeline restarts (resyncs) [100%; ops]
backend_bound_by_memory (PipelineL2; backend_bound group) = backend_bound * d_ratio(ex_no_retire.load_not_complete, ex_no_retire.not_complete) -- backend stalls due to the memory subsystem [100%; slots]
backend_bound_by_cpu (PipelineL2; backend_bound group) = backend_bound * (1 - d_ratio(ex_no_retire.load_not_complete, ex_no_retire.not_complete)) -- backend stalls not related to the memory subsystem [100%; slots]
retiring_from_fastpath (PipelineL2; retiring group) = retiring * (1 - d_ratio(ex_ret_ucode_ops, ex_ret_ops)) -- slots used by fastpath ops that retired [100%; slots]
retiring_from_microcode (PipelineL2; retiring group) = retiring * d_ratio(ex_ret_ucode_ops, ex_ret_ops) -- slots used by microcode ops that retired [100%; slots]
branch_misprediction_rate (branch_prediction) = d_ratio(ex_ret_brn_misp, ex_ret_brn) -- execution-time branch misprediction rate, non-speculative [per_branch]

[Per-thousand-instruction (pti) rates -- each divides an event count by instructions and carries the 1e3 per_1k_instr scale]
all_data_cache_accesses_pti (l1_dcache) = ls_dispatch.all / instructions
all_l2_cache_accesses_pti (l2_cache) = (l2_request_g1.all_no_prefetch + l2_pf_hit_l2.l2_hwpf + l2_pf_miss_l2_hit_l3.l2_hwpf + l2_pf_miss_l2_l3.l2_hwpf) / instructions
l2_cache_accesses_from_l1_ic_misses_pti (l2_cache) = l2_request_g1.cacheable_ic_read / instructions -- including prefetch
l2_cache_accesses_from_l1_dc_misses_pti (l2_cache) = l2_request_g1.all_dc / instructions -- including prefetch
l2_cache_accesses_from_l2_hwpf_pti (l2_cache) = (l2_pf_hit_l2.l1_dc_l2_hwpf + l2_pf_miss_l2_hit_l3.l1_dc_l2_hwpf + l2_pf_miss_l2_l3.l1_dc_l2_hwpf) / instructions
all_l2_cache_misses_pti (l2_cache) = (l2_cache_req_stat.ic_dc_miss_in_l2 + l2_pf_miss_l2_hit_l3.l2_hwpf + l2_pf_miss_l2_l3.l2_hwpf) / instructions
l2_cache_misses_from_l1_ic_miss_pti (l2_cache) = l2_cache_req_stat.ic_fill_miss / instructions
l2_cache_misses_from_l1_dc_miss_pti (l2_cache) = l2_cache_req_stat.ls_rd_blk_c / instructions
l2_cache_misses_from_l2_hwpf_pti (l2_cache) = (l2_pf_miss_l2_hit_l3.l1_dc_l2_hwpf + l2_pf_miss_l2_l3.l1_dc_l2_hwpf) / instructions
all_l2_cache_hits_pti (l2_cache) = (l2_cache_req_stat.ic_dc_hit_in_l2 + l2_pf_hit_l2.l2_hwpf) / instructions
l2_cache_hits_from_l1_ic_miss_pti (l2_cache) = l2_cache_req_stat.ic_hit_in_l2 / instructions
l2_cache_hits_from_l1_dc_miss_pti (l2_cache) = l2_cache_req_stat.dc_hit_in_l2 / instructions
l2_cache_hits_from_l2_hwpf_pti (l2_cache) = l2_pf_hit_l2.l1_dc_l2_hwpf / instructions
l3_read_miss_latency (l3_cache) = l3_xi_sampled_latency.all * 10 / l3_xi_sampled_latency_requests.all -- average L3 read miss latency [ns]
l3_read_miss_latency_for_local_dram (l3_cache) = l3_xi_sampled_latency.dram_near * 10 / l3_xi_sampled_latency_requests.dram_near -- ... for local DRAM [ns]
l3_read_miss_latency_for_remote_dram (l3_cache) = l3_xi_sampled_latency.dram_far * 10 / l3_xi_sampled_latency_requests.dram_far -- ... for remote DRAM [ns]
l1_data_cache_fills_from_memory_pti (l1_dcache) = ls_any_fills_from_sys.dram_io_all / instructions
l1_data_cache_fills_from_remote_node_pti (l1_dcache) = ls_any_fills_from_sys.far_all / instructions
l1_data_cache_fills_from_same_ccx_pti (l1_dcache) = ls_any_fills_from_sys.local_all / instructions
l1_data_cache_fills_from_different_ccx_pti (l1_dcache) = ls_any_fills_from_sys.remote_cache / instructions
all_l1_data_cache_fills_pti (l1_dcache) = ls_any_fills_from_sys.all / instructions
l1_demand_data_cache_fills_from_local_l2_pti (l1_dcache) = ls_dmnd_fills_from_sys.local_l2 / instructions
l1_demand_data_cache_fills_from_same_ccx_pti (l1_dcache) = ls_dmnd_fills_from_sys.local_ccx / instructions
l1_demand_data_cache_fills_from_near_cache_pti (l1_dcache) = ls_dmnd_fills_from_sys.near_cache / instructions
l1_demand_data_cache_fills_from_near_memory_pti (l1_dcache) = ls_dmnd_fills_from_sys.dram_io_near / instructions
l1_demand_data_cache_fills_from_far_cache_pti (l1_dcache) = ls_dmnd_fills_from_sys.far_cache / instructions
l1_demand_data_cache_fills_from_far_memory_pti (l1_dcache) = ls_dmnd_fills_from_sys.dram_io_far / instructions
l1_itlb_misses_pti (tlb) = (bp_l1_tlb_miss_l2_tlb_hit + bp_l1_tlb_miss_l2_tlb_miss.all) / instructions
l2_itlb_misses_pti (tlb) = bp_l1_tlb_miss_l2_tlb_miss.all / instructions -- L2 ITLB misses and instruction page walks
l1_dtlb_misses_pti (tlb) = ls_l1_d_tlb_miss.all / instructions
l2_dtlb_misses_pti (tlb) = ls_l1_d_tlb_miss.all_l2_miss / instructions -- L2 DTLB misses and data page walks
all_tlbs_flushed_pti (tlb) = ls_tlb_flush.all / instructions
umc_cas_cmd_rate (memory_controller) = d_ratio(umc_cas_cmd.all * 1e3, umc_mem_clk) -- CAS command rate [per_memclk]
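The *_pti metrics above all share one shape: an event count per thousand retired instructions (the 1e3 scale). A trivial helper makes that explicit (counts are hypothetical):

# Per-thousand-instruction rate, as in the *_pti metrics.
def pti(event_count: int, instructions: int) -> float:
    return 1e3 * event_count / instructions

# e.g. l1_dtlb_misses_pti with hypothetical counts:
print(f"{pti(4_200_000, 1_000_000_000):.2f} per 1k instructions")  # 4.20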
umc_activate_cmd_rate (memory_controller) = d_ratio(umc_act_cmd.all * 1e3, umc_mem_clk) -- ACTIVATE command rate [per_memclk]
umc_precharge_cmd_rate (memory_controller) = d_ratio(umc_pchg_cmd.all * 1e3, umc_mem_clk) -- PRECHARGE command rate [per_memclk]
C3_Core_Residency (Power) = cstate_core@c3\-residency@ / TSC -- C3 residency percent per core [100%]

[Embedded perf metric table -- Intel big-core topdown (TMA) tree; fractions of tma_info_thread_slots or tma_info_thread_clks; each lower-level threshold also requires the parent nodes to exceed their own thresholds. Let lfb_weight = 1 + MEM_LOAD_UOPS_RETIRED.HIT_LFB / (MEM_LOAD_UOPS_RETIRED.L2_HIT + MEM_LOAD_UOPS_RETIRED.L3_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HITM + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_MISS + MEM_LOAD_UOPS_RETIRED.L3_MISS).]
tma_4k_aliasing (TopdownL4; l1_bound group) = LD_BLOCKS_PARTIAL.ADDRESS_ALIAS / tma_info_thread_clks -- estimates how often loads were aliased by preceding stores (in program order) with a 4K address offset; a false match costs a few cycles of load re-issue, usually hidden by the out-of-order core, so a high value may be ignored unless it propagates up to L1_Bound [100%; flag > 0.2]
tma_alu_op_utilization (TopdownL5; ports_utilized_3m group) = (UOPS_DISPATCHED_PORT.PORT_0 + UOPS_DISPATCHED_PORT.PORT_1 + UOPS_DISPATCHED_PORT.PORT_5 + UOPS_DISPATCHED_PORT.PORT_6) / tma_info_thread_slots -- core fraction of cycles dispatching uops to ALU execution ports [100%; flag > 0.4]
tma_assists (BvIO; TopdownL4; microcode_sequencer group) = 66 * OTHER_ASSISTS.ANY_WB_ASSIST / tma_info_thread_slots -- slots retiring microcode-sequencer uops due to assists: long uop sequences for corner cases the pipeline cannot handle natively (e.g. denormal FP values); assists can run dozens of uops, are extremely deleterious, and are often avoidable; sample: OTHER_ASSISTS.ANY [100%; flag > 0.1]
tma_backend_bound (BvOB; TopdownL1) = 1 - (tma_frontend_bound + tma_bad_speculation + tma_retiring) -- slots where no uops are delivered for lack of backend resources, e.g. data-cache misses or an overloaded divider; subdivided into Memory Bound and Core Bound [100%; flag > 0.2]
tma_bad_speculation (TopdownL1) = (UOPS_ISSUED.ANY - UOPS_RETIRED.RETIRE_SLOTS + 4 * (INT_MISC.RECOVERY_CYCLES_ANY / 2 if #SMT_on else INT_MISC.RECOVERY_CYCLES)) / tma_info_thread_slots -- slots wasted on incorrect speculation: uops that never retire plus issue-pipeline blockage while recovering, e.g. mispredicted branches or memory-ordering nukes [100%; flag > 0.15]
tma_branch_mispredicts (BadSpec; TopdownL2; bad_speculation group) = BR_MISP_RETIRED.ALL_BRANCHES / (BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT) * tma_bad_speculation -- slots wasted on branch misprediction: wrong-path uops and out-of-order recovery stalls; sample: BR_MISP_RETIRED.ALL_BRANCHES; related: tma_info_bad_spec_branch_misprediction_cost, tma_mispredicts_resteers [100%; flag > 0.1]
tma_branch_resteers (FetchLat; TopdownL3; fetch_latency group) = 12 * (BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT + BACLEARS.ANY) / tma_info_thread_clks -- cycles stalled on branch resteers: frontend delay fetching from the corrected path after all sorts of mispredicted branches; may overlap sibling nodes; sample: BR_MISP_RETIRED.ALL_BRANCHES [100%; flag > 0.05]
tma_cisc (TopdownL4; microcode_sequencer group) = max(0, tma_microcode_sequencer - tma_assists) -- cycles retiring uops from CISC instructions that require multiple uops, e.g. read-modify-write; may or may not imply sub-optimal use of machine resources [100%; flag > 0.1]
tma_clears_resteers (TopdownL4; branch_resteers group) = MACHINE_CLEARS.COUNT * tma_branch_resteers / (BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT + BACLEARS.ANY) -- cycles stalled on branch resteers caused by machine clears; related: tma_l1_bound, tma_machine_clears, tma_microcode_sequencer, tma_ms_switches [100%; flag > 0.05]
tma_contested_accesses (TopdownL4; l3_bound group) = (60 * MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HITM * lfb_weight + 43 * MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_MISS * lfb_weight) / tma_info_thread_clks -- cycles the memory subsystem handled synchronizations for contested accesses, where data written by one logical processor is read by another on a different physical core: locks, true sharing of modified variables, false sharing; sample: MEM_LOAD_L3_HIT_RETIRED.XSNP_HITM_PS, XSNP_MISS_PS; related: tma_data_sharing, tma_false_sharing, tma_machine_clears, tma_remote_cache [100%; flag > 0.05]
tma_core_bound (TopdownL2; backend_bound group) = tma_backend_bound - tma_memory_bound -- slots bottlenecked by non-memory core issues: shortage of hardware compute resources or dependencies in the program's data or instruction flow, e.g. exhausted out-of-order resources, overloaded execution units, or FP-chained long-latency arithmetic [100%; flag > 0.1]
tma_data_sharing (TopdownL4; l3_bound group) = 43 * MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HIT * lfb_weight / tma_info_thread_clks -- cycles handling synchronizations for data-sharing accesses; data shared by multiple logical processors (even read-only) can add latency through cache coherency, and excessive sharing can drastically harm multithreaded performance; sample: MEM_LOAD_L3_HIT_RETIRED.XSNP_HIT_PS; related: tma_contested_accesses, tma_false_sharing, tma_machine_clears, tma_remote_cache [100%; flag > 0.05]
tma_divider (BvCB; TopdownL3; core_bound group) = ARITH.FPU_DIV_ACTIVE / tma_info_core_core_clks -- cycles the divider unit was active; divide and square-root take considerably longer than integer or FP add, subtract, or multiply; sample: ARITH.DIVIDER_UOPS [100%; flag > 0.2]
tma_dram_bound (TopdownL3; memory_bound group) = (1 - MEM_LOAD_UOPS_RETIRED.L3_HIT / (MEM_LOAD_UOPS_RETIRED.L3_HIT + 7 * MEM_LOAD_UOPS_RETIRED.L3_MISS)) * CYCLE_ACTIVITY.STALLS_L2_MISS / tma_info_thread_clks -- how often the CPU stalled on loads from external memory (DRAM); better caching improves latency and performance; sample: MEM_LOAD_UOPS_RETIRED.L3_MISS_PS [100%; flag > 0.1]
tma_dsb (DSB; TopdownL3; fetch_bandwidth group) = (IDQ.ALL_DSB_CYCLES_ANY_UOPS - IDQ.ALL_DSB_CYCLES_4_UOPS) / tma_info_core_core_clks / 2 -- core cycles likely limited by the DSB (decoded-uop cache) fetch pipeline, e.g. inefficient use of the DSB structure or bank conflicts when reading it [100%; flag > 0.15]
tma_dsb_switches (DSBmiss; TopdownL3; fetch_latency group) = DSB2MITE_SWITCHES.PENALTY_CYCLES / tma_info_thread_clks -- cycles stalled switching from the DSB (decoded i-cache, which delivers uops without heavy x86 decoding) to the legacy MITE decode pipeline; the DSB has shorter latency and higher bandwidth than MITE, and switching between them exposes penalties; related: tma_fetch_bandwidth, tma_info_frontend_dsb_coverage, tma_info_inst_mix_iptb, tma_lcp [100%; flag > 0.05]
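Unlike the E-core tree, this big-core level-1 tree defines tma_backend_bound as the remainder of the other three categories, so the four level-1 fractions sum to 1 by construction. A sketch with hypothetical values:

# The big-core level-1 tree: backend_bound is defined as the remainder.
tma_frontend_bound = 0.22
tma_bad_speculation = 0.08
tma_retiring = 0.45

tma_backend_bound = 1 - (tma_frontend_bound + tma_bad_speculation + tma_retiring)
print(tma_backend_bound)  # 0.25 -- flagged, since its threshold is > 0.2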
The DSB pipeline has shorter latency and delivered higher bandwidth than the MITE (legacy instruction decode pipeline). Switching between the two pipelines can cause penalties hence this metric measures the exposed penalty. Related metrics: tma_fetch_bandwidth, tma_info_frontend_dsb_coverage, tma_info_inst_mix_iptb, tma_lcp100%00tma_dtlb_loadBvMT;MemoryTLB;TopdownL4;tma_L4_group;tma_issueTLB;tma_l1_bound_group(8 * DTLB_LOAD_MISSES.STLB_HIT + cpu@DTLB_LOAD_MISSES.WALK_DURATION\,cmask\=1@ + 7 * DTLB_LOAD_MISSES.WALK_COMPLETED) / tma_info_thread_clkstma_dtlb_load > 0.1 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))This metric roughly estimates the fraction of cycles where the Data TLB (DTLB) was missed by load accessesThis metric roughly estimates the fraction of cycles where the Data TLB (DTLB) was missed by load accesses. TLBs (Translation Look-aside Buffers) are processor caches for recently used entries out of the Page Tables that are used to map virtual- to physical-addresses by the operating system. This metric approximates the potential delay of demand loads missing the first-level data TLB (assuming worst case scenario with back to back misses to different pages). This includes hitting in the second-level TLB (STLB) as well as performing a hardware page walk on an STLB miss. Sample with: MEM_UOPS_RETIRED.STLB_MISS_LOADS_PS. Related metrics: tma_dtlb_store100%00tma_dtlb_storeBvMT;MemoryTLB;TopdownL4;tma_L4_group;tma_issueTLB;tma_store_bound_group(8 * DTLB_STORE_MISSES.STLB_HIT + cpu@DTLB_STORE_MISSES.WALK_DURATION\,cmask\=1@ + 7 * DTLB_STORE_MISSES.WALK_COMPLETED) / tma_info_thread_clkstma_dtlb_store > 0.05 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))This metric roughly estimates the fraction of cycles spent handling first-level data TLB store missesThis metric roughly estimates the fraction of cycles spent handling first-level data TLB store misses.  As with ordinary data caching; focus on improving data locality and reducing working-set size to reduce DTLB overhead.  Additionally; consider using profile-guided optimization (PGO) to collocate frequently-used data on the same page.  Try using larger page sizes for large amounts of frequently-used data. Sample with: MEM_UOPS_RETIRED.STLB_MISS_STORES_PS. Related metrics: tma_dtlb_load100%00tma_false_sharingBvMS;DataSharing;Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_store_bound_group60 * OFFCORE_RESPONSE.DEMAND_RFO.L3_HIT.SNOOP_HITM / tma_info_thread_clkstma_false_sharing > 0.05 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))This metric roughly estimates how often CPU was handling synchronizations due to False SharingThis metric roughly estimates how often CPU was handling synchronizations due to False Sharing. False Sharing is a multithreading hiccup; where multiple Logical Processors contend on different data-elements mapped into the same cache line. Sample with: MEM_LOAD_L3_HIT_RETIRED.XSNP_HITM_PS;OFFCORE_RESPONSE.DEMAND_RFO.L3_HIT.SNOOP_HITM. 
Related metrics: tma_contested_accesses, tma_data_sharing, tma_machine_clears, tma_remote_cache100%00tma_fb_fullBvMS;MemoryBW;TopdownL4;tma_L4_group;tma_issueBW;tma_issueSL;tma_issueSmSt;tma_l1_bound_grouptma_info_memory_load_miss_real_latency * cpu@L1D_PEND_MISS.FB_FULL\,cmask\=1@ / tma_info_thread_clkstma_fb_full > 0.3This metric does a *rough estimation* of how often L1D Fill Buffer unavailability limited additional L1D miss memory access requests to proceedThis metric does a *rough estimation* of how often L1D Fill Buffer unavailability limited additional L1D miss memory access requests to proceed. The higher the metric value; the deeper the memory hierarchy level the misses are satisfied from (metric values >1 are valid). Often it hints on approaching bandwidth limits (to L2 cache; L3 cache or external memory). Related metrics: tma_info_system_dram_bw_use, tma_mem_bandwidth, tma_sq_full, tma_store_latency, tma_streaming_stores100%01tma_fetch_bandwidthFetchBW;Frontend;TmaL2;TopdownL2;tma_L2_group;tma_frontend_bound_group;tma_issueFBtma_frontend_bound - tma_fetch_latencytma_fetch_bandwidth > 0.2This metric represents fraction of slots the CPU was stalled due to Frontend bandwidth issuesThis metric represents fraction of slots the CPU was stalled due to Frontend bandwidth issues.  For example; inefficiencies at the instruction decoders; or restrictions for caching in the DSB (decoded uops cache) are categorized under Fetch Bandwidth. In such cases; the Frontend typically delivers suboptimal amount of uops to the Backend. Related metrics: tma_dsb_switches, tma_info_frontend_dsb_coverage, tma_info_inst_mix_iptb, tma_lcp100%TopdownL200tma_fetch_latencyFrontend;TmaL2;TopdownL2;tma_L2_group;tma_frontend_bound_group4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / tma_info_thread_slotstma_fetch_latency > 0.1 & tma_frontend_bound > 0.15This metric represents fraction of slots the CPU was stalled due to Frontend latency issuesThis metric represents fraction of slots the CPU was stalled due to Frontend latency issues.  For example; instruction-cache misses; iTLB misses or fetch stalls after a branch misprediction are categorized under Frontend Latency. In such cases; the Frontend eventually delivers no uops for some period. Sample with: RS_EVENTS.EMPTY_END100%TopdownL200tma_fp_scalarCompute;Flops;TopdownL4;tma_L4_group;tma_fp_arith_group;tma_issue2PFP_ARITH_INST_RETIRED.SCALAR / UOPS_RETIRED.RETIRE_SLOTStma_fp_scalar > 0.1 & (tma_fp_arith > 0.2 & tma_light_operations > 0.6)This metric approximates arithmetic floating-point (FP) scalar uops fraction the CPU has retiredThis metric approximates arithmetic floating-point (FP) scalar uops fraction the CPU has retired. May overcount due to FMA double counting. Related metrics: tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_port_0, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2100%00tma_fp_vectorCompute;Flops;TopdownL4;tma_L4_group;tma_fp_arith_group;tma_issue2PFP_ARITH_INST_RETIRED.VECTOR / UOPS_RETIRED.RETIRE_SLOTStma_fp_vector > 0.1 & (tma_fp_arith > 0.2 & tma_light_operations > 0.6)This metric approximates arithmetic floating-point (FP) vector uops fraction the CPU has retired aggregated across all vector widthsThis metric approximates arithmetic floating-point (FP) vector uops fraction the CPU has retired aggregated across all vector widths. May overcount due to FMA double counting. 
tma_fp_vector_128b (scale: 100%) [Compute;Flops;TopdownL5;tma_L5_group;tma_fp_vector_group;tma_issue2P]
  Expr: (FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE) / UOPS_RETIRED.RETIRE_SLOTS
  Threshold: tma_fp_vector_128b > 0.1 & (tma_fp_vector > 0.1 & (tma_fp_arith > 0.2 & tma_light_operations > 0.6))
  Desc: This metric approximates the arithmetic FP vector uops fraction the CPU has retired for 128-bit wide vectors. May overcount due to FMA double counting. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_256b, tma_fp_vector_512b, tma_port_0, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2

tma_fp_vector_256b (scale: 100%) [Compute;Flops;TopdownL5;tma_L5_group;tma_fp_vector_group;tma_issue2P]
  Expr: (FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE) / UOPS_RETIRED.RETIRE_SLOTS
  Threshold: tma_fp_vector_256b > 0.1 & (tma_fp_vector > 0.1 & (tma_fp_arith > 0.2 & tma_light_operations > 0.6))
  Desc: This metric approximates the arithmetic FP vector uops fraction the CPU has retired for 256-bit wide vectors. May overcount due to FMA double counting. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_512b, tma_port_0, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2

tma_frontend_bound (scale: 100%; default group: TopdownL1) [BvFB;BvIO;PGO;TmaL1;TopdownL1;tma_L1_group]
  Expr: IDQ_UOPS_NOT_DELIVERED.CORE / tma_info_thread_slots
  Threshold: tma_frontend_bound > 0.15
  Desc: This category represents the fraction of slots where the processor's Frontend undersupplies its Backend. The Frontend is the first part of the processor core, responsible for fetching operations that are executed later on by the Backend part. Within the Frontend, a branch predictor predicts the next address to fetch, cache lines are fetched from the memory subsystem, parsed into instructions, and lastly decoded into micro-operations (uops). Ideally the Frontend can issue Pipeline_Width uops every cycle to the Backend. Frontend Bound denotes unutilized issue slots when there is no Backend stall, i.e. bubbles where the Frontend delivered no uops while the Backend could have accepted them. For example, stalls due to instruction-cache misses would be categorized under Frontend Bound

tma_heavy_operations (scale: 100%; default group: TopdownL2) [Retire;TmaL2;TopdownL2;tma_L2_group;tma_retiring_group]
  Expr: tma_microcode_sequencer
  Threshold: tma_heavy_operations > 0.1
  Desc: This metric represents the fraction of slots where the CPU was retiring heavy-weight operations: instructions that require two or more uops, or micro-coded sequences. This highly correlates with the uop length of these instructions/sequences. ([ICL+] Note this may overcount due to approximation using indirect events.)
tma_icache_misses (scale: 100%) [BigFootprint;BvBC;FetchLat;IcMiss;TopdownL3;tma_L3_group;tma_fetch_latency_group]
  Expr: ICACHE.IFDATA_STALL / tma_info_thread_clks
  Threshold: tma_icache_misses > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15)
  Desc: This metric represents the fraction of cycles the CPU was stalled due to instruction cache misses

tma_info_bad_spec_ipmisp_indirect [Bad;BrMispredicts]
  Expr: tma_info_inst_mix_instructions / (UOPS_RETIRED.RETIRE_SLOTS / UOPS_ISSUED.ANY * BR_MISP_EXEC.INDIRECT)
  Threshold: tma_info_bad_spec_ipmisp_indirect < 1e3
  Desc: Instructions per retired mispredict for indirect CALL or JMP branches (a lower number means a higher occurrence rate)

tma_info_bad_spec_ipmispredict [Bad;BadSpec;BrMispredicts]
  Expr: INST_RETIRED.ANY / BR_MISP_RETIRED.ALL_BRANCHES
  Threshold: tma_info_bad_spec_ipmispredict < 200
  Desc: Number of Instructions per non-speculative Branch Misprediction (JEClear) (a lower number means a higher occurrence rate)

tma_info_core_core_clks [SMT]
  Expr: (CPU_CLK_UNHALTED.THREAD / 2 * (1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK) if #core_wide < 1 else (CPU_CLK_UNHALTED.THREAD_ANY / 2 if #SMT_on else tma_info_thread_clks))
  Desc: Core actual clocks when any Logical Processor is active on the Physical Core

tma_info_core_coreipc [Ret;SMT;TmaL1;tma_L1_group]
  Expr: INST_RETIRED.ANY / tma_info_core_core_clks
  Desc: Instructions Per Cycle across hyper-threads (per physical core)

tma_info_core_flopc [Flops;Ret]
  Expr: (FP_ARITH_INST_RETIRED.SCALAR + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + 4 * FP_ARITH_INST_RETIRED.4_FLOPS + 8 * FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE) / tma_info_core_core_clks
  Desc: Floating Point Operations Per Cycle

tma_info_core_fp_arith_utilization [Cor;Flops;HPC]
  Expr: (FP_ARITH_INST_RETIRED.SCALAR + FP_ARITH_INST_RETIRED.VECTOR) / (2 * tma_info_core_core_clks)
  Desc: Actual per-core usage of the Floating Point non-X87 execution units (regardless of precision or vector width). Values > 1 are possible due to ([BDW+] Fused Multiply-Add (FMA) counting - common; [ADL+] use of all of ADD/MUL/FMA in scalar or 128/256-bit vectors - less common)

tma_info_core_ilp [Backend;Cor;Pipeline;PortsUtil]
  Expr: UOPS_EXECUTED.THREAD / cpu@UOPS_EXECUTED.THREAD\,cmask\=1@
  Desc: Instruction-Level-Parallelism (average number of uops executed when there is execution) per thread (logical processor)

tma_info_frontend_dsb_coverage [DSB;Fed;FetchBW;tma_issueFB]
  Expr: IDQ.DSB_UOPS / (IDQ.DSB_UOPS + LSD.UOPS + IDQ.MITE_UOPS + IDQ.MS_UOPS)
  Threshold: tma_info_frontend_dsb_coverage < 0.7 & tma_info_thread_ipc / 4 > 0.35
  Desc: Fraction of uops delivered by the DSB (aka Decoded ICache, or Uop Cache). Related metrics: tma_dsb_switches, tma_fetch_bandwidth, tma_info_inst_mix_iptb, tma_lcp

tma_info_frontend_ipunknown_branch [Fed]
  Expr: tma_info_inst_mix_instructions / BACLEARS.ANY
  Desc: Instructions per speculative Unknown Branch Misprediction (BAClear) (a lower number means a higher occurrence rate)

tma_info_inst_mix_bptkbranch [Branches;Fed;PGO]
  Expr: BR_INST_RETIRED.ALL_BRANCHES / BR_INST_RETIRED.NEAR_TAKEN
  Desc: Branch instructions per taken branch

tma_info_inst_mix_instructions [Summary;TmaL1;tma_L1_group]
  Expr: INST_RETIRED.ANY
  Desc: Total number of retired Instructions. Sample with: INST_RETIRED.PREC_DIST
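The tma_info_core_core_clks expression above halves the core-wide clock count when SMT is on, so per-core ratios such as CoreIPC are not double-counted. A small sketch of that selection logic under the #SMT_on branch (the #core_wide < 1 branch of the full expression is omitted here, and all inputs are hypothetical):

```python
def core_clks(thread_clks: float, thread_any_clks: float, smt_on: bool) -> float:
    # With both hyper-threads active, CPU_CLK_UNHALTED.THREAD_ANY counts each
    # core cycle once per thread, so halve it; otherwise the per-thread clock
    # already equals the core clock.
    return thread_any_clks / 2 if smt_on else thread_clks

def core_ipc(instructions: float, thread_clks: float,
             thread_any_clks: float, smt_on: bool) -> float:
    # tma_info_core_coreipc = INST_RETIRED.ANY / tma_info_core_core_clks
    return instructions / core_clks(thread_clks, thread_any_clks, smt_on)

# Hypothetical numbers: 90M retired instructions over 50M core cycles, SMT on.
print(core_ipc(90e6, 48e6, 100e6, smt_on=True))  # -> 1.8 IPC per physical core
```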
tma_info_inst_mix_iparith [Flops;InsType]
  Expr: INST_RETIRED.ANY / (FP_ARITH_INST_RETIRED.SCALAR + FP_ARITH_INST_RETIRED.VECTOR)
  Threshold: tma_info_inst_mix_iparith < 10
  Desc: Instructions per FP Arithmetic instruction (a lower number means a higher occurrence rate). Values < 1 are possible due to intentional FMA double counting. Approximated prior to BDW

tma_info_inst_mix_iparith_avx128 [Flops;FpVector;InsType]
  Expr: INST_RETIRED.ANY / (FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE)
  Threshold: tma_info_inst_mix_iparith_avx128 < 10
  Desc: Instructions per FP Arithmetic AVX/SSE 128-bit instruction (a lower number means a higher occurrence rate). Values < 1 are possible due to intentional FMA double counting

tma_info_inst_mix_iparith_avx256 [Flops;FpVector;InsType]
  Expr: INST_RETIRED.ANY / (FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE)
  Threshold: tma_info_inst_mix_iparith_avx256 < 10
  Desc: Instructions per FP Arithmetic AVX* 256-bit instruction (a lower number means a higher occurrence rate). Values < 1 are possible due to intentional FMA double counting

tma_info_inst_mix_iparith_scalar_dp [Flops;FpScalar;InsType]
  Expr: INST_RETIRED.ANY / FP_ARITH_INST_RETIRED.SCALAR_DOUBLE
  Threshold: tma_info_inst_mix_iparith_scalar_dp < 10
  Desc: Instructions per FP Arithmetic Scalar Double-Precision instruction (a lower number means a higher occurrence rate). Values < 1 are possible due to intentional FMA double counting

tma_info_inst_mix_iparith_scalar_sp [Flops;FpScalar;InsType]
  Expr: INST_RETIRED.ANY / FP_ARITH_INST_RETIRED.SCALAR_SINGLE
  Threshold: tma_info_inst_mix_iparith_scalar_sp < 10
  Desc: Instructions per FP Arithmetic Scalar Single-Precision instruction (a lower number means a higher occurrence rate). Values < 1 are possible due to intentional FMA double counting

tma_info_inst_mix_ipbranch [Branches;Fed;InsType]
  Expr: INST_RETIRED.ANY / BR_INST_RETIRED.ALL_BRANCHES
  Threshold: tma_info_inst_mix_ipbranch < 8
  Desc: Instructions per Branch (a lower number means a higher occurrence rate)

tma_info_inst_mix_ipcall [Branches;Fed;PGO]
  Expr: INST_RETIRED.ANY / BR_INST_RETIRED.NEAR_CALL
  Threshold: tma_info_inst_mix_ipcall < 200
  Desc: Instructions per (near) call (a lower number means a higher occurrence rate)

tma_info_inst_mix_ipflop [Flops;InsType]
  Expr: INST_RETIRED.ANY / (FP_ARITH_INST_RETIRED.SCALAR + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + 4 * FP_ARITH_INST_RETIRED.4_FLOPS + 8 * FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE)
  Threshold: tma_info_inst_mix_ipflop < 10
  Desc: Instructions per Floating Point (FP) Operation (a lower number means a higher occurrence rate)

tma_info_inst_mix_ipload [InsType]
  Expr: INST_RETIRED.ANY / MEM_UOPS_RETIRED.ALL_LOADS
  Threshold: tma_info_inst_mix_ipload < 3
  Desc: Instructions per Load (a lower number means a higher occurrence rate)

tma_info_inst_mix_ipstore [InsType]
  Expr: INST_RETIRED.ANY / MEM_UOPS_RETIRED.ALL_STORES
  Threshold: tma_info_inst_mix_ipstore < 8
  Desc: Instructions per Store (a lower number means a higher occurrence rate)

tma_info_inst_mix_iptb [Branches;Fed;FetchBW;Frontend;PGO;tma_issueFB]
  Expr: INST_RETIRED.ANY / BR_INST_RETIRED.NEAR_TAKEN
  Threshold: tma_info_inst_mix_iptb < 9
  Desc: Instructions per taken branch. Related metrics: tma_dsb_switches, tma_fetch_bandwidth, tma_info_frontend_dsb_coverage, tma_lcp
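All of the instruction-mix entries above share one shape: retired instructions divided by a retired-event count, flagged when the ratio drops below a threshold. A sketch of evaluating a few of them together (the counts are hypothetical placeholders):

```python
# Hypothetical retired-event counts for an instruction-mix summary.
events = {
    "INST_RETIRED.ANY": 1_000_000,
    "BR_INST_RETIRED.ALL_BRANCHES": 180_000,
    "BR_INST_RETIRED.NEAR_TAKEN": 110_000,
    "MEM_UOPS_RETIRED.ALL_LOADS": 300_000,
    "MEM_UOPS_RETIRED.ALL_STORES": 120_000,
}

# "Instructions per X" ratios: lower values mean X occurs more often.
ratios = {
    "IpBranch": ("BR_INST_RETIRED.ALL_BRANCHES", 8),
    "IpTB":     ("BR_INST_RETIRED.NEAR_TAKEN", 9),
    "IpLoad":   ("MEM_UOPS_RETIRED.ALL_LOADS", 3),
    "IpStore":  ("MEM_UOPS_RETIRED.ALL_STORES", 8),
}
for name, (event, threshold) in ratios.items():
    value = events["INST_RETIRED.ANY"] / events[event]
    flag = " (flagged)" if value < threshold else ""
    print(f"{name}: {value:.1f}{flag}")
```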
tma_info_memory_l1d_cache_fill_bw [Mem;MemoryBW]
  Expr: 64 * L1D.REPLACEMENT / 1e9 / duration_time
  Desc: Average per-thread data fill bandwidth to the L1 data cache [GB / sec]

tma_info_memory_l1mpki [CacheHits;Mem]
  Expr: 1e3 * MEM_LOAD_UOPS_RETIRED.L1_MISS / INST_RETIRED.ANY
  Desc: L1 cache true misses per kilo instruction for retired demand loads

tma_info_memory_l2_cache_fill_bw [Mem;MemoryBW]
  Expr: 64 * L2_LINES_IN.ALL / 1e9 / duration_time
  Desc: Average per-thread data fill bandwidth to the L2 cache [GB / sec]

tma_info_memory_l2hpki_all [CacheHits;Mem]
  Expr: 1e3 * (L2_RQSTS.REFERENCES - L2_RQSTS.MISS) / INST_RETIRED.ANY
  Desc: L2 cache hits per kilo instruction for all request types (including speculative)

tma_info_memory_l2hpki_load [CacheHits;Mem]
  Expr: 1e3 * L2_RQSTS.DEMAND_DATA_RD_HIT / INST_RETIRED.ANY
  Desc: L2 cache hits per kilo instruction for all demand loads (including speculative)

tma_info_memory_l2mpki [Backend;CacheHits;Mem]
  Expr: 1e3 * MEM_LOAD_UOPS_RETIRED.L2_MISS / INST_RETIRED.ANY
  Desc: L2 cache true misses per kilo instruction for retired demand loads

tma_info_memory_l2mpki_all [CacheHits;Mem;Offcore]
  Expr: 1e3 * L2_RQSTS.MISS / INST_RETIRED.ANY
  Desc: L2 cache ([RKL+] true) misses per kilo instruction for all request types (including speculative)

tma_info_memory_l2mpki_load [CacheHits;Mem]
  Expr: 1e3 * L2_RQSTS.DEMAND_DATA_RD_MISS / INST_RETIRED.ANY
  Desc: L2 cache ([RKL+] true) misses per kilo instruction for all demand loads (including speculative)

tma_info_memory_l2mpki_rfo [CacheMisses;Offcore]
  Expr: 1e3 * OFFCORE_REQUESTS.DEMAND_RFO / INST_RETIRED.ANY
  Desc: Offcore requests (L2 cache miss) per kilo instruction for demand RFOs

tma_info_memory_l3_cache_fill_bw [Mem;MemoryBW]
  Expr: 64 * LONGEST_LAT_CACHE.MISS / 1e9 / duration_time
  Desc: Average per-thread data fill bandwidth to the L3 cache [GB / sec]

tma_info_memory_l3mpki [Mem]
  Expr: 1e3 * MEM_LOAD_UOPS_RETIRED.L3_MISS / INST_RETIRED.ANY
  Desc: L3 cache true misses per kilo instruction for retired demand loads

tma_info_memory_latency_data_l2_mlp [Memory_BW;Offcore]
  Expr: OFFCORE_REQUESTS_OUTSTANDING.ALL_DATA_RD / OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DATA_RD
  Desc: Average parallel L2 cache miss data reads

tma_info_memory_latency_load_l2_miss_latency [Memory_Lat;Offcore]
  Expr: OFFCORE_REQUESTS_OUTSTANDING.DEMAND_DATA_RD / OFFCORE_REQUESTS.DEMAND_DATA_RD
  Desc: Average latency for L2 cache miss demand loads

tma_info_memory_latency_load_l2_mlp [Memory_BW;Offcore]
  Expr: OFFCORE_REQUESTS_OUTSTANDING.DEMAND_DATA_RD / OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_DATA_RD
  Desc: Average parallel L2 cache miss demand loads

tma_info_memory_load_miss_real_latency [Mem;MemoryBound;MemoryLat]
  Expr: L1D_PEND_MISS.PENDING / (MEM_LOAD_UOPS_RETIRED.L1_MISS + MEM_LOAD_UOPS_RETIRED.HIT_LFB)
  Desc: Actual average latency for L1 data-cache miss demand load operations (in core cycles)

tma_info_memory_mlp [Mem;MemoryBW;MemoryBound]
  Expr: L1D_PEND_MISS.PENDING / L1D_PEND_MISS.PENDING_CYCLES
  Desc: Memory-Level-Parallelism (average number of L1 miss demand loads when there is at least one such miss; per Logical Processor)
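The fill-bandwidth and MPKI entries above are simple scalings: 64-byte lines per second for bandwidth, and misses per thousand retired instructions for MPKI. A minimal worked example with hypothetical counts from a 2-second window:

```python
# Hypothetical counts from a 2-second measurement window.
L1D_REPLACEMENT = 40_000_000                 # L1D lines filled
MEM_LOAD_UOPS_RETIRED_L1_MISS = 9_000_000    # retired demand-load L1 misses
INST_RETIRED_ANY = 4_000_000_000
duration_time = 2.0                          # seconds

# Fill bandwidth: 64-byte lines brought into L1D, scaled to GB/s.
l1d_fill_bw_gbs = 64 * L1D_REPLACEMENT / 1e9 / duration_time
# MPKI: misses normalized per thousand retired instructions.
l1_mpki = 1e3 * MEM_LOAD_UOPS_RETIRED_L1_MISS / INST_RETIRED_ANY

print(f"L1D fill BW: {l1d_fill_bw_gbs:.2f} GB/s, L1 MPKI: {l1_mpki:.2f}")
```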
tma_info_memory_tlb_page_walks_utilization [Mem;MemoryTLB]
  Expr: (cpu@ITLB_MISSES.WALK_DURATION\,cmask\=1@ + cpu@DTLB_LOAD_MISSES.WALK_DURATION\,cmask\=1@ + cpu@DTLB_STORE_MISSES.WALK_DURATION\,cmask\=1@ + 7 * (DTLB_STORE_MISSES.WALK_COMPLETED + DTLB_LOAD_MISSES.WALK_COMPLETED + ITLB_MISSES.WALK_COMPLETED)) / tma_info_core_core_clks
  Threshold: tma_info_memory_tlb_page_walks_utilization > 0.5
  Desc: Utilization of the core's Page Walker(s) serving STLB misses triggered by instruction/Load/Store accesses

tma_info_pipeline_execute [Cor;Pipeline;PortsUtil;SMT]
  Expr: UOPS_EXECUTED.THREAD / (cpu@UOPS_EXECUTED.CORE\,cmask\=1@ / 2 if #SMT_on else UOPS_EXECUTED.CYCLES_GE_1_UOP_EXEC)
  Desc: Instruction-Level-Parallelism (average number of uops executed when there is execution) per core

tma_info_pipeline_retire [Pipeline;Ret]
  Expr: UOPS_RETIRED.RETIRE_SLOTS / cpu@UOPS_RETIRED.RETIRE_SLOTS\,cmask\=1@
  Desc: Average number of uops retired in cycles where at least one uop has retired

tma_info_system_cpus_utilized [Summary]
  Expr: CPU_CLK_UNHALTED.REF_TSC / TSC
  Desc: Average number of utilized CPUs

tma_info_system_dram_bw_use [HPC;MemOffcore;MemoryBW;SoC;tma_issueBW]
  Expr: 64 * (UNC_ARB_TRK_REQUESTS.ALL + UNC_ARB_COH_TRK_REQUESTS.ALL) / 1e6 / duration_time / 1e3
  Desc: Average external Memory Bandwidth Use for reads and writes [GB / sec]. Related metrics: tma_fb_full, tma_mem_bandwidth, tma_sq_full

tma_info_system_gflops [Cor;Flops;HPC]
  Expr: (FP_ARITH_INST_RETIRED.SCALAR + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + 4 * FP_ARITH_INST_RETIRED.4_FLOPS + 8 * FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE) / 1e9 / duration_time
  Desc: Giga Floating Point Operations Per Second. Aggregate across all supported options of: FP precisions, scalar and vector instructions, vector width

tma_info_system_ipfarbranch [Branches;OS]
  Expr: INST_RETIRED.ANY / BR_INST_RETIRED.FAR_BRANCH:u
  Threshold: tma_info_system_ipfarbranch < 1e6
  Desc: Instructions per Far Branch (Far Branches apply upon transition from application to operating system, handling interrupts, exceptions) [a lower number means a higher occurrence rate]

tma_info_system_kernel_cpi [OS]
  Expr: CPU_CLK_UNHALTED.THREAD_P:k / INST_RETIRED.ANY_P:k
  Desc: Cycles Per Instruction for the Operating System (OS) Kernel mode

tma_info_system_kernel_utilization [OS]
  Expr: CPU_CLK_UNHALTED.THREAD_P:k / CPU_CLK_UNHALTED.THREAD
  Threshold: tma_info_system_kernel_utilization > 0.05
  Desc: Fraction of cycles spent in the Operating System (OS) Kernel mode

tma_info_system_smt_2t_utilization [SMT]
  Expr: (1 - CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / (CPU_CLK_UNHALTED.REF_XCLK_ANY / 2) if #SMT_on else 0)
  Desc: Fraction of cycles where both hardware Logical Processors were active

tma_info_system_turbo_utilization [Power]
  Expr: tma_info_thread_clks / CPU_CLK_UNHALTED.REF_TSC
  Desc: Average Frequency Utilization relative to the nominal frequency

tma_info_thread_clks [Pipeline]
  Expr: CPU_CLK_UNHALTED.THREAD
  Desc: Per-Logical-Processor actual clocks when the Logical Processor is active

tma_info_thread_execute_per_issue [Cor;Pipeline]
  Expr: UOPS_EXECUTED.THREAD / UOPS_ISSUED.ANY
  Desc: The ratio of Executed to Issued uops. A ratio > 1 suggests a high rate of uop micro-fusions. A ratio < 1 suggests a high rate of "execute" at the rename stage
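The tma_info_system_gflops expression weights each FP_ARITH_INST_RETIRED event by the number of FLOPs a single such instruction performs (1 for scalar, 2 for 128-bit packed double, and so on). A worked example with hypothetical counts over a 1-second window:

```python
# Hypothetical FP_ARITH_INST_RETIRED counts over a 1-second window.
scalar = 2.0e9             # SCALAR: 1 FLOP each
packed128_double = 0.5e9   # 128B_PACKED_DOUBLE: 2 FLOPs each
four_flop_ops = 1.0e9      # 4_FLOPS umbrella event: 4 FLOPs each
packed256_single = 0.25e9  # 256B_PACKED_SINGLE: 8 FLOPs each
duration_time = 1.0        # seconds

# Weight each event class by its FLOPs-per-instruction, then scale to GFLOPS.
flops = scalar + 2 * packed128_double + 4 * four_flop_ops + 8 * packed256_single
print(f"GFLOPS: {flops / 1e9 / duration_time:.2f}")  # -> 9.00
```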
tma_info_thread_ipc [Ret;Summary]
  Expr: INST_RETIRED.ANY / tma_info_thread_clks
  Desc: Instructions Per Cycle (per Logical Processor)

tma_info_thread_slots [TmaL1;tma_L1_group]
  Expr: 4 * tma_info_core_core_clks
  Desc: Total issue-pipeline slots (per Physical Core until ICL; per Logical Processor from ICL onward)

tma_info_thread_uoppi [Pipeline;Ret;Retire]
  Expr: UOPS_RETIRED.RETIRE_SLOTS / INST_RETIRED.ANY
  Threshold: tma_info_thread_uoppi > 1.05
  Desc: Uops Per Instruction

tma_info_thread_uptb [Branches;Fed;FetchBW]
  Expr: UOPS_RETIRED.RETIRE_SLOTS / BR_INST_RETIRED.NEAR_TAKEN
  Threshold: tma_info_thread_uptb < 6
  Desc: Uops per taken branch

tma_itlb_misses (scale: 100%) [BigFootprint;BvBC;FetchLat;MemoryTLB;TopdownL3;tma_L3_group;tma_fetch_latency_group]
  Expr: (14 * ITLB_MISSES.STLB_HIT + cpu@ITLB_MISSES.WALK_DURATION\,cmask\=1@ + 7 * ITLB_MISSES.WALK_COMPLETED) / tma_info_thread_clks
  Threshold: tma_itlb_misses > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15)
  Desc: This metric represents the fraction of cycles the CPU was stalled due to Instruction TLB (ITLB) misses. Sample with: ITLB_MISSES.WALK_COMPLETED

tma_l1_bound (scale: 100%) [CacheHits;MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_issueL1;tma_issueMC;tma_memory_bound_group]
  Expr: max((CYCLE_ACTIVITY.STALLS_MEM_ANY - CYCLE_ACTIVITY.STALLS_L1D_MISS) / tma_info_thread_clks, 0)
  Threshold: tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)
  Desc: This metric estimates how often the CPU was stalled without loads missing the L1 data cache. The L1 data cache typically has the shortest latency. However, in certain cases like loads blocked on older stores, a load might suffer high latency even though it is being satisfied by the L1. Another example is loads that miss in the TLB. These cases are characterized by execution-unit stalls while some non-completed demand load lives in the machine without that demand load missing the L1 cache. Sample with: MEM_LOAD_UOPS_RETIRED.L1_HIT_PS;MEM_LOAD_UOPS_RETIRED.HIT_LFB_PS. Related metrics: tma_clears_resteers, tma_machine_clears, tma_microcode_sequencer, tma_ms_switches, tma_ports_utilized_1

tma_l2_bound (scale: 100%) [BvML;CacheHits;MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_memory_bound_group]
  Expr: (CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS) / tma_info_thread_clks
  Threshold: tma_l2_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)
  Desc: This metric estimates how often the CPU was stalled due to L2 cache accesses by loads. Avoiding cache misses (i.e. L1 misses/L2 hits) can improve the latency and increase performance. Sample with: MEM_LOAD_UOPS_RETIRED.L2_HIT_PS

tma_l3_bound (scale: 100%) [CacheHits;MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_memory_bound_group]
  Expr: MEM_LOAD_UOPS_RETIRED.L3_HIT / (MEM_LOAD_UOPS_RETIRED.L3_HIT + 7 * MEM_LOAD_UOPS_RETIRED.L3_MISS) * CYCLE_ACTIVITY.STALLS_L2_MISS / tma_info_thread_clks
  Threshold: tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)
  Desc: This metric estimates how often the CPU was stalled due to loads accessing the L3 cache or contended with a sibling Core. Avoiding cache misses (i.e. L2 misses/L3 hits) can improve the latency and increase performance. Sample with: MEM_LOAD_UOPS_RETIRED.L3_HIT_PS
tma_l3_hit_latency (scale: 100%) [BvML;MemoryLat;TopdownL4;tma_L4_group;tma_issueLat;tma_l3_bound_group]
  Expr: 29 * (MEM_LOAD_UOPS_RETIRED.L3_HIT * (1 + MEM_LOAD_UOPS_RETIRED.HIT_LFB / (MEM_LOAD_UOPS_RETIRED.L2_HIT + MEM_LOAD_UOPS_RETIRED.L3_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HITM + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_MISS + MEM_LOAD_UOPS_RETIRED.L3_MISS))) / tma_info_thread_clks
  Threshold: tma_l3_hit_latency > 0.1 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))
  Desc: This metric estimates the fraction of cycles with demand load accesses that hit the L3 cache under unloaded scenarios (possibly L3 latency limited). Avoiding private cache misses (i.e. L2 misses/L3 hits) will improve the latency, reduce contention with sibling physical cores and increase performance. Note the value of this node may overlap with its siblings. Sample with: MEM_LOAD_UOPS_RETIRED.L3_HIT_PS. Related metrics: tma_mem_latency

tma_lcp (scale: 100%) [FetchLat;TopdownL3;tma_L3_group;tma_fetch_latency_group;tma_issueFB]
  Expr: ILD_STALL.LCP / tma_info_thread_clks
  Threshold: tma_lcp > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15)
  Desc: This metric represents the fraction of cycles the CPU was stalled due to Length Changing Prefixes (LCPs). Using proper compiler flags, or the Intel Compiler by default, will certainly avoid this. #Link: Optimization Guide about LCP BKMs. Related metrics: tma_dsb_switches, tma_fetch_bandwidth, tma_info_frontend_dsb_coverage, tma_info_inst_mix_iptb

tma_light_operations (scale: 100%; default group: TopdownL2) [Retire;TmaL2;TopdownL2;tma_L2_group;tma_retiring_group]
  Expr: tma_retiring - tma_heavy_operations
  Threshold: tma_light_operations > 0.6
  Desc: This metric represents the fraction of slots where the CPU was retiring light-weight operations: instructions that require no more than one uop (micro-operation). This correlates with the total number of instructions used by the program. A uops-per-instruction ratio (see the UopPI metric) of 1 or less should be expected for decently optimized code running on Intel Core/Xeon products. While this often indicates that efficient X86 instructions were executed, a high value does not necessarily mean better performance cannot be achieved. ([ICL+] Note this may undercount due to approximation using indirect events.) Sample with: INST_RETIRED.PREC_DIST

tma_load_op_utilization (scale: 100%) [TopdownL5;tma_L5_group;tma_ports_utilized_3m_group]
  Expr: (UOPS_DISPATCHED_PORT.PORT_2 + UOPS_DISPATCHED_PORT.PORT_3 + UOPS_DISPATCHED_PORT.PORT_7 - UOPS_DISPATCHED_PORT.PORT_4) / (2 * tma_info_core_core_clks)
  Threshold: tma_load_op_utilization > 0.6
  Desc: This metric represents the Core fraction of cycles the CPU dispatched uops on execution ports for Load operations. Sample with: UOPS_DISPATCHED.PORT_2_3
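The tma_l3_hit_latency expression charges a nominal 29-cycle L3-hit latency per retired L3-hit load, inflated by a fill-buffer correction: loads satisfied by an in-flight fill buffer have no attributed data source, so the attributed counts are scaled up proportionally. A sketch of that computation with hypothetical counts:

```python
# Hypothetical retired-load breakdown by data source.
l2_hit, l3_hit, l3_miss = 5_000_000, 1_200_000, 300_000
xsnp_hit, xsnp_hitm, xsnp_miss = 200_000, 50_000, 30_000
hit_lfb = 800_000            # loads satisfied by an in-flight fill buffer
clks = 200_000_000           # CPU_CLK_UNHALTED.THREAD

# Scale the known-source counts up to cover fill-buffer hits, then cost
# 29 cycles per (corrected) L3 hit, as a fraction of thread clocks.
known_source = l2_hit + l3_hit + xsnp_hit + xsnp_hitm + xsnp_miss + l3_miss
lfb_correction = 1 + hit_lfb / known_source
tma_l3_hit_latency = 29 * l3_hit * lfb_correction / clks
print(f"tma_l3_hit_latency = {tma_l3_hit_latency:.1%}")
```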
tma_lock_latency (scale: 100%) [Offcore;TopdownL4;tma_L4_group;tma_issueRFO;tma_l1_bound_group]
  Expr: MEM_UOPS_RETIRED.LOCK_LOADS / MEM_UOPS_RETIRED.ALL_STORES * min(CPU_CLK_UNHALTED.THREAD, OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_RFO) / tma_info_thread_clks
  Threshold: tma_lock_latency > 0.2 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))
  Desc: This metric represents the fraction of cycles the CPU spent handling cache misses due to lock operations. Due to the microarchitecture's handling of locks, they are classified as L1_Bound regardless of what memory source satisfied them. Sample with: MEM_UOPS_RETIRED.LOCK_LOADS_PS. Related metrics: tma_store_latency

tma_machine_clears (scale: 100%; default group: TopdownL2) [BadSpec;BvMS;MachineClears;TmaL2;TopdownL2;tma_L2_group;tma_bad_speculation_group;tma_issueMC;tma_issueSyncxn]
  Expr: tma_bad_speculation - tma_branch_mispredicts
  Threshold: tma_machine_clears > 0.1 & tma_bad_speculation > 0.15
  Desc: This metric represents the fraction of slots the CPU has wasted due to Machine Clears. These slots are either wasted by uops fetched prior to the clear, or by stalls the out-of-order portion of the machine needs to recover its state after the clear. For example, this can happen due to memory ordering Nukes (e.g. Memory Disambiguation) or Self-Modifying-Code (SMC) nukes. Sample with: MACHINE_CLEARS.COUNT. Related metrics: tma_clears_resteers, tma_contested_accesses, tma_data_sharing, tma_false_sharing, tma_l1_bound, tma_microcode_sequencer, tma_ms_switches, tma_remote_cache

tma_mem_bandwidth (scale: 100%) [BvMS;MemoryBW;Offcore;TopdownL4;tma_L4_group;tma_dram_bound_group;tma_issueBW]
  Expr: min(CPU_CLK_UNHALTED.THREAD, cpu@OFFCORE_REQUESTS_OUTSTANDING.ALL_DATA_RD\,cmask\=4@) / tma_info_thread_clks
  Threshold: tma_mem_bandwidth > 0.2 & (tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))
  Desc: This metric estimates the fraction of cycles where the core's performance was likely hurt due to approaching bandwidth limits of external memory - DRAM ([SPR-HBM] and/or HBM). The underlying heuristic assumes that similar off-core traffic is generated by all IA cores. This metric does not aggregate non-data-read requests by this logical processor, requests from other IA Logical Processors/Physical Cores/sockets, or other non-IA devices like GPUs; hence the maximum external memory bandwidth limits may or may not be approached when this metric is flagged (see Uncore counters for that). Related metrics: tma_fb_full, tma_info_system_dram_bw_use, tma_sq_full

tma_mem_latency (scale: 100%) [BvML;MemoryLat;Offcore;TopdownL4;tma_L4_group;tma_dram_bound_group;tma_issueLat]
  Expr: min(CPU_CLK_UNHALTED.THREAD, OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DATA_RD) / tma_info_thread_clks - tma_mem_bandwidth
  Threshold: tma_mem_latency > 0.1 & (tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))
  Desc: This metric estimates the fraction of cycles where the performance was likely hurt due to latency from external memory - DRAM ([SPR-HBM] and/or HBM). This metric does not aggregate requests from other Logical Processors/Physical Cores/sockets (see Uncore counters for that). Related metrics: tma_l3_hit_latency
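The two DRAM entries above split memory stalls into a bandwidth part (cycles with a deep read queue, cmask=4) and a latency part (the remaining cycles with any read in flight). A sketch of that decomposition with hypothetical cycle counts:

```python
# Hypothetical cycle counts for splitting DRAM stalls into bandwidth vs latency.
clks = 100_000_000               # CPU_CLK_UNHALTED.THREAD
cycles_4plus_reads = 18_000_000  # cycles with >= 4 outstanding data reads (cmask=4)
cycles_any_read = 45_000_000     # cycles with >= 1 outstanding data read

# Deep queues (>= 4 reads in flight) are attributed to bandwidth limits ...
mem_bandwidth = min(clks, cycles_4plus_reads) / clks
# ... and the remaining cycles with any read in flight to latency.
mem_latency = min(clks, cycles_any_read) / clks - mem_bandwidth
print(f"tma_mem_bandwidth = {mem_bandwidth:.1%}, tma_mem_latency = {mem_latency:.1%}")
```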
tma_memory_bound (scale: 100%; default group: TopdownL2) [Backend;TmaL2;TopdownL2;tma_L2_group;tma_backend_bound_group]
  Expr: (CYCLE_ACTIVITY.STALLS_MEM_ANY + RESOURCE_STALLS.SB) / (CYCLE_ACTIVITY.STALLS_TOTAL + UOPS_EXECUTED.CYCLES_GE_1_UOP_EXEC - (UOPS_EXECUTED.CYCLES_GE_3_UOPS_EXEC if tma_info_thread_ipc > 1.8 else UOPS_EXECUTED.CYCLES_GE_2_UOPS_EXEC) - (RS_EVENTS.EMPTY_CYCLES if tma_fetch_latency > 0.1 else 0) + RESOURCE_STALLS.SB) * tma_backend_bound
  Threshold: tma_memory_bound > 0.2 & tma_backend_bound > 0.2
  Desc: This metric represents the fraction of slots where the Memory subsystem within the Backend was a bottleneck. Memory Bound estimates the fraction of slots where the pipeline is likely stalled due to demand load or store instructions. This accounts mainly for (1) non-completed in-flight memory demand loads that coincide with execution-unit starvation, in addition to (2) cases where stores could impose backpressure on the pipeline when many of them get buffered at the same time (the less common of the two)

tma_microcode_sequencer (scale: 100%) [MicroSeq;TopdownL3;tma_L3_group;tma_heavy_operations_group;tma_issueMC;tma_issueMS]
  Expr: UOPS_RETIRED.RETIRE_SLOTS / UOPS_ISSUED.ANY * IDQ.MS_UOPS / tma_info_thread_slots
  Threshold: tma_microcode_sequencer > 0.05 & tma_heavy_operations > 0.1
  Desc: This metric represents the fraction of slots the CPU was retiring uops fetched by the Microcode Sequencer (MS) unit. The MS is used for CISC instructions not supported by the default decoders (like repeat move strings, or CPUID), or by microcode assists used to address some operation modes (like Floating Point assists). These cases can often be avoided. Sample with: IDQ.MS_UOPS. Related metrics: tma_clears_resteers, tma_l1_bound, tma_machine_clears, tma_ms_switches

tma_mispredicts_resteers (scale: 100%) [BadSpec;BrMispredicts;BvMP;TopdownL4;tma_L4_group;tma_branch_resteers_group;tma_issueBM]
  Expr: BR_MISP_RETIRED.ALL_BRANCHES * tma_branch_resteers / (BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT + BACLEARS.ANY)
  Threshold: tma_mispredicts_resteers > 0.05 & (tma_branch_resteers > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15))
  Desc: This metric represents the fraction of cycles the CPU was stalled due to Branch Resteers as a result of Branch Misprediction at the execution stage. Related metrics: tma_branch_mispredicts, tma_info_bad_spec_branch_misprediction_cost

tma_mite (scale: 100%) [DSBmiss;FetchBW;TopdownL3;tma_L3_group;tma_fetch_bandwidth_group]
  Expr: (IDQ.ALL_MITE_CYCLES_ANY_UOPS - IDQ.ALL_MITE_CYCLES_4_UOPS) / tma_info_core_core_clks / 2
  Threshold: tma_mite > 0.1 & tma_fetch_bandwidth > 0.2
  Desc: This metric represents the Core fraction of cycles in which the CPU was likely limited by the MITE pipeline (the legacy decode pipeline). This pipeline is used for code that was not pre-cached in the DSB or LSD. For example, inefficiencies due to asymmetric decoders, or use of long immediates or LCPs, can manifest as a MITE fetch bandwidth bottleneck
tma_ms_switches (scale: 100%) [FetchLat;MicroSeq;TopdownL3;tma_L3_group;tma_fetch_latency_group;tma_issueMC;tma_issueMS;tma_issueMV;tma_issueSO]
  Expr: 2 * IDQ.MS_SWITCHES / tma_info_thread_clks
  Threshold: tma_ms_switches > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15)
  Desc: This metric estimates the fraction of cycles when the CPU was stalled due to switches of uop delivery to the Microcode Sequencer (MS). Commonly used instructions are optimized for delivery by the DSB (decoded i-cache) or MITE (legacy instruction decode) pipelines. Certain operations cannot be handled natively by the execution pipeline and must be performed by microcode (small programs injected into the execution stream). Switching to the MS too often can negatively impact performance. The MS is designated to deliver long uop flows required by CISC instructions like CPUID, or uncommon conditions like Floating Point Assists when dealing with Denormals. Sample with: IDQ.MS_SWITCHES. Related metrics: tma_clears_resteers, tma_l1_bound, tma_machine_clears, tma_microcode_sequencer, tma_mixing_vectors, tma_serializing_operation

tma_port_0 (scale: 100%) [Compute;TopdownL6;tma_L6_group;tma_alu_op_utilization_group;tma_issue2P]
  Expr: UOPS_DISPATCHED_PORT.PORT_0 / tma_info_core_core_clks
  Threshold: tma_port_0 > 0.6
  Desc: This metric represents the Core fraction of cycles the CPU dispatched uops on execution port 0 ([SNB+] ALU; [HSW+] ALU and 2nd branch). Sample with: UOPS_DISPATCHED_PORT.PORT_0. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2

tma_port_1 (scale: 100%) [TopdownL6;tma_L6_group;tma_alu_op_utilization_group;tma_issue2P]
  Expr: UOPS_DISPATCHED_PORT.PORT_1 / tma_info_core_core_clks
  Threshold: tma_port_1 > 0.6
  Desc: This metric represents the Core fraction of cycles the CPU dispatched uops on execution port 1 (ALU). Sample with: UOPS_DISPATCHED_PORT.PORT_1. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_port_0, tma_port_5, tma_port_6, tma_ports_utilized_2

tma_port_2 (scale: 100%) [TopdownL6;tma_L6_group;tma_load_op_utilization_group]
  Expr: UOPS_DISPATCHED_PORT.PORT_2 / tma_info_core_core_clks
  Threshold: tma_port_2 > 0.6
  Desc: This metric represents the Core fraction of cycles the CPU dispatched uops on execution port 2 ([SNB+] Loads and Store-address; [ICL+] Loads). Sample with: UOPS_DISPATCHED_PORT.PORT_2

tma_port_3 (scale: 100%) [TopdownL6;tma_L6_group;tma_load_op_utilization_group]
  Expr: UOPS_DISPATCHED_PORT.PORT_3 / tma_info_core_core_clks
  Threshold: tma_port_3 > 0.6
  Desc: This metric represents the Core fraction of cycles the CPU dispatched uops on execution port 3 ([SNB+] Loads and Store-address; [ICL+] Loads). Sample with: UOPS_DISPATCHED_PORT.PORT_3
tma_port_4 (scale: 100%) [TopdownL6;tma_L6_group;tma_issueSpSt;tma_store_op_utilization_group]
  Expr: tma_store_op_utilization
  Threshold: tma_port_4 > 0.6
  Desc: This metric represents the Core fraction of cycles the CPU dispatched uops on execution port 4 (Store-data). Sample with: UOPS_DISPATCHED_PORT.PORT_4. Related metrics: tma_split_stores

tma_port_5 (scale: 100%) [TopdownL6;tma_L6_group;tma_alu_op_utilization_group;tma_issue2P]
  Expr: UOPS_DISPATCHED_PORT.PORT_5 / tma_info_core_core_clks
  Threshold: tma_port_5 > 0.6
  Desc: This metric represents the Core fraction of cycles the CPU dispatched uops on execution port 5 ([SNB+] Branches and ALU; [HSW+] ALU). Sample with: UOPS_DISPATCHED.PORT_5. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_port_0, tma_port_1, tma_port_6, tma_ports_utilized_2

tma_port_6 (scale: 100%) [TopdownL6;tma_L6_group;tma_alu_op_utilization_group;tma_issue2P]
  Expr: UOPS_DISPATCHED_PORT.PORT_6 / tma_info_core_core_clks
  Threshold: tma_port_6 > 0.6
  Desc: This metric represents the Core fraction of cycles the CPU dispatched uops on execution port 6 ([HSW+] Primary Branch and simple ALU). Sample with: UOPS_DISPATCHED_PORT.PORT_6. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_port_0, tma_port_1, tma_port_5, tma_ports_utilized_2

tma_port_7 (scale: 100%) [TopdownL6;tma_L6_group;tma_store_op_utilization_group]
  Expr: UOPS_DISPATCHED_PORT.PORT_7 / tma_info_core_core_clks
  Threshold: tma_port_7 > 0.6
  Desc: This metric represents the Core fraction of cycles the CPU dispatched uops on execution port 7 ([HSW+] simple Store-address). Sample with: UOPS_DISPATCHED_PORT.PORT_7

tma_ports_utilization (scale: 100%) [PortsUtil;TopdownL3;tma_L3_group;tma_core_bound_group]
  Expr: (CYCLE_ACTIVITY.STALLS_TOTAL + UOPS_EXECUTED.CYCLES_GE_1_UOP_EXEC - (UOPS_EXECUTED.CYCLES_GE_3_UOPS_EXEC if tma_info_thread_ipc > 1.8 else UOPS_EXECUTED.CYCLES_GE_2_UOPS_EXEC) - (RS_EVENTS.EMPTY_CYCLES if tma_fetch_latency > 0.1 else 0) + RESOURCE_STALLS.SB - RESOURCE_STALLS.SB - CYCLE_ACTIVITY.STALLS_MEM_ANY) / tma_info_thread_clks
  Threshold: tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2)
  Desc: This metric estimates the fraction of cycles the CPU performance was potentially limited due to Core computation issues (non divider-related). Two distinct categories can be attributed into this metric: (1) heavy data-dependency among contiguous instructions would manifest in this metric; such cases are often referred to as low Instruction Level Parallelism (ILP). (2) Contention on some hardware execution unit other than the Divider. For example, when there are too many multiply operations
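Each per-port entry above is the same computation with a different dispatch counter: uops dispatched on that port, divided by core clocks, flagged above 0.6. A compact sketch over all eight ports (the dispatch counts are hypothetical):

```python
# Hypothetical per-port dispatch counts, normalized by core clocks.
core_clks = 50_000_000
port_uops = {  # UOPS_DISPATCHED_PORT.PORT_n
    0: 30_000_000, 1: 28_000_000, 2: 20_000_000, 3: 19_000_000,
    4: 12_000_000, 5: 27_000_000, 6: 33_000_000, 7: 8_000_000,
}
for port, uops in port_uops.items():
    util = uops / core_clks  # fraction of core cycles this port dispatched a uop
    flag = " (flagged)" if util > 0.6 else ""
    print(f"tma_port_{port}: {util:.0%}{flag}")
```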
tma_ports_utilized_0 (scale: 100%) [PortsUtil;TopdownL4;tma_L4_group;tma_ports_utilization_group]
  Expr: (cpu@UOPS_EXECUTED.CORE\,inv\,cmask\=1@ / 2 if #SMT_on else (CYCLE_ACTIVITY.STALLS_TOTAL - (RS_EVENTS.EMPTY_CYCLES if tma_fetch_latency > 0.1 else 0)) / tma_info_core_core_clks)
  Threshold: tma_ports_utilized_0 > 0.2 & (tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))
  Desc: This metric represents the fraction of cycles the CPU executed no uops on any execution port (Logical Processor cycles since ICL, Physical Core cycles otherwise). Long-latency instructions like divides may contribute to this metric

tma_ports_utilized_1 (scale: 100%) [PortsUtil;TopdownL4;tma_L4_group;tma_issueL1;tma_ports_utilization_group]
  Expr: ((cpu@UOPS_EXECUTED.CORE\,cmask\=1@ - cpu@UOPS_EXECUTED.CORE\,cmask\=2@) / 2 if #SMT_on else (UOPS_EXECUTED.CYCLES_GE_1_UOP_EXEC - UOPS_EXECUTED.CYCLES_GE_2_UOPS_EXEC) / tma_info_core_core_clks)
  Threshold: tma_ports_utilized_1 > 0.2 & (tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))
  Desc: This metric represents the fraction of cycles where the CPU executed a total of 1 uop per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise). This can be due to heavy data-dependency among software instructions, or oversubscribing a particular hardware resource. In some other cases with high 1_Port_Utilized and L1_Bound, this metric can point to an L1 data-cache latency bottleneck that may not necessarily manifest with complete execution starvation (due to the short L1 latency, e.g. walking a linked list); looking at the assembly can be helpful. Related metrics: tma_l1_bound

tma_ports_utilized_2 (scale: 100%) [PortsUtil;TopdownL4;tma_L4_group;tma_issue2P;tma_ports_utilization_group]
  Expr: ((cpu@UOPS_EXECUTED.CORE\,cmask\=2@ - cpu@UOPS_EXECUTED.CORE\,cmask\=3@) / 2 if #SMT_on else (UOPS_EXECUTED.CYCLES_GE_2_UOPS_EXEC - UOPS_EXECUTED.CYCLES_GE_3_UOPS_EXEC) / tma_info_core_core_clks)
  Threshold: tma_ports_utilized_2 > 0.15 & (tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))
  Desc: This metric represents the fraction of cycles the CPU executed a total of 2 uops per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise). Loop Vectorization (most compilers feature auto-vectorization options today) reduces pressure on the execution ports as multiple elements are calculated with the same uop. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_port_0, tma_port_1, tma_port_5, tma_port_6
tma_ports_utilized_3m (scale: 100%) [BvCB;PortsUtil;TopdownL4;tma_L4_group;tma_ports_utilization_group]
  Expr: (cpu@UOPS_EXECUTED.CORE\,cmask\=3@ / 2 if #SMT_on else UOPS_EXECUTED.CYCLES_GE_3_UOPS_EXEC) / tma_info_core_core_clks
  Threshold: tma_ports_utilized_3m > 0.4 & (tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))
  Desc: This metric represents the fraction of cycles the CPU executed a total of 3 or more uops per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise)

tma_retiring (scale: 100%; default group: TopdownL1) [BvUW;TmaL1;TopdownL1;tma_L1_group]
  Expr: UOPS_RETIRED.RETIRE_SLOTS / tma_info_thread_slots
  Threshold: tma_retiring > 0.7 | tma_heavy_operations > 0.1
  Desc: This category represents the fraction of slots utilized by useful work, i.e. issued uops that eventually get retired. Ideally, all pipeline slots would be attributed to the Retiring category. Retiring of 100% would indicate the maximum Pipeline_Width throughput was achieved. Maximizing Retiring typically increases the Instructions-per-cycle (see the IPC metric). Note that a high Retiring value does not necessarily mean there is no room for more performance. For example, Heavy-operations or Microcode Assists are categorized under Retiring. They often indicate suboptimal performance and can often be optimized or avoided. Sample with: UOPS_RETIRED.RETIRE_SLOTS

tma_split_loads (scale: 100%) [TopdownL4;tma_L4_group;tma_l1_bound_group]
  Expr: tma_info_memory_load_miss_real_latency * LD_BLOCKS.NO_SR / tma_info_thread_clks
  Threshold: tma_split_loads > 0.2 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))
  Desc: This metric estimates the fraction of cycles handling memory load split accesses: loads that cross a 64-byte cache-line boundary. Sample with: MEM_UOPS_RETIRED.SPLIT_LOADS_PS

tma_split_stores (scale: 100%) [TopdownL4;tma_L4_group;tma_issueSpSt;tma_store_bound_group]
  Expr: 2 * MEM_UOPS_RETIRED.SPLIT_STORES / tma_info_core_core_clks
  Threshold: tma_split_stores > 0.2 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))
  Desc: This metric represents the rate of split store accesses. Consider aligning your data to the 64-byte cache-line granularity. Sample with: MEM_UOPS_RETIRED.SPLIT_STORES_PS. Related metrics: tma_port_4

tma_sq_full (scale: 100%) [BvMS;MemoryBW;Offcore;TopdownL4;tma_L4_group;tma_issueBW;tma_l3_bound_group]
  Expr: (OFFCORE_REQUESTS_BUFFER.SQ_FULL / 2 if #SMT_on else OFFCORE_REQUESTS_BUFFER.SQ_FULL) / tma_info_core_core_clks
  Threshold: tma_sq_full > 0.3 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))
  Desc: This metric measures the fraction of cycles where the Super Queue (SQ) was full, taking into account all request types and both hardware SMT threads (Logical Processors). Related metrics: tma_fb_full, tma_info_system_dram_bw_use, tma_mem_bandwidth
tma_store_bound (scale: 100%) [MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_memory_bound_group]
  Expr: RESOURCE_STALLS.SB / tma_info_thread_clks
  Threshold: tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)
  Desc: This metric estimates how often the CPU was stalled due to RFO store memory accesses; RFO stores issue a read-for-ownership request before the write. Even though store accesses do not typically stall out-of-order CPUs, there are a few cases where stores can lead to actual stalls. This metric will be flagged should RFO stores be a bottleneck. Sample with: MEM_UOPS_RETIRED.ALL_STORES_PS

tma_store_fwd_blk (scale: 100%) [TopdownL4;tma_L4_group;tma_l1_bound_group]
  Expr: 13 * LD_BLOCKS.STORE_FORWARD / tma_info_thread_clks
  Threshold: tma_store_fwd_blk > 0.1 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))
  Desc: This metric roughly estimates the fraction of cycles when the memory subsystem had loads blocked because they could not forward data from earlier (in program order) overlapping stores. To streamline memory operations in the pipeline, a load can avoid waiting for memory if a prior in-flight store is writing the data the load wants to read (the store-forwarding process). However, in some cases the load may be blocked for a significant time pending the store forward, for example when the prior store is writing a smaller region than the load is reading

tma_store_latency (scale: 100%) [BvML;MemoryLat;Offcore;TopdownL4;tma_L4_group;tma_issueRFO;tma_issueSL;tma_store_bound_group]
  Expr: (L2_RQSTS.RFO_HIT * 9 * (1 - MEM_UOPS_RETIRED.LOCK_LOADS / MEM_UOPS_RETIRED.ALL_STORES) + (1 - MEM_UOPS_RETIRED.LOCK_LOADS / MEM_UOPS_RETIRED.ALL_STORES) * min(CPU_CLK_UNHALTED.THREAD, OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_RFO)) / tma_info_thread_clks
  Threshold: tma_store_latency > 0.1 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))
  Desc: This metric estimates the fraction of cycles the CPU spent handling L1D store misses. Store accesses usually have less impact on out-of-order core performance; however, holding resources for a longer time can lead to undesired implications (e.g. contention on L1D fill-buffer entries - see FB_Full). Related metrics: tma_fb_full, tma_lock_latency

tma_store_op_utilization (scale: 100%) [TopdownL5;tma_L5_group;tma_ports_utilized_3m_group]
  Expr: UOPS_DISPATCHED_PORT.PORT_4 / tma_info_core_core_clks
  Threshold: tma_store_op_utilization > 0.6
  Desc: This metric represents the Core fraction of cycles the CPU dispatched uops on execution ports for Store operations

tma_unknown_branches (scale: 100%) [BigFootprint;BvBC;FetchLat;TopdownL4;tma_L4_group;tma_branch_resteers_group]
  Expr: tma_branch_resteers - tma_mispredicts_resteers - tma_clears_resteers
  Threshold: tma_unknown_branches > 0.05 & (tma_branch_resteers > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15))
  Desc: This metric represents the fraction of cycles the CPU was stalled due to new branch address clears. These are fetched branches the Branch Prediction Unit was unable to recognize (e.g. the first time the branch is fetched, or hitting the BTB capacity limit), hence called Unknown Branches. Sample with: BACLEARS.ANY
tma_x87_use (scale: 100%) [Compute;TopdownL4;tma_L4_group;tma_fp_arith_group]
  Expr: INST_RETIRED.X87 * tma_info_thread_uoppi / UOPS_RETIRED.RETIRE_SLOTS
  Threshold: tma_x87_use > 0.1 & (tma_fp_arith > 0.2 & tma_light_operations > 0.6)
  Desc: This metric serves as an approximation of legacy x87 usage. It accounts for instructions beyond X87 FP arithmetic operations, hence it may be used as a thermometer to avoid high X87 usage and preferably upgrade to modern ISA. See the Tip under Tuning Hint

(A second per-model metric table begins here. Entries that repeat a record above verbatim are listed by their differences only.)

tma_assists (scale: 100%) [BvIO;TopdownL4;tma_L4_group;tma_microcode_sequencer_group]
  Expr: 66 * OTHER_ASSISTS.ANY_WB_ASSIST / tma_info_thread_slots
  Threshold: tma_assists > 0.1 & (tma_microcode_sequencer > 0.05 & tma_heavy_operations > 0.1)
  Desc: This metric estimates the fraction of slots the CPU retired uops delivered by the Microcode_Sequencer as a result of Assists. Assists are long sequences of uops that are required in certain corner cases for operations that cannot be handled natively by the execution pipeline. For example, when working with very small floating-point values (so-called Denormals), the FP units are not set up to perform these operations natively. Instead, a sequence of instructions to perform the computation on the Denormals is injected into the pipeline. Since these microcode sequences might be dozens of uops long, Assists can be extremely deleterious to performance, and they can be avoided in many cases. Sample with: ASSISTS.ANY

tma_backend_bound (scale: 100%; default group: TopdownL1) [BvOB;TmaL1;TopdownL1;tma_L1_group]
  Expr: 1 - (tma_frontend_bound + tma_bad_speculation + tma_retiring)
  Threshold: tma_backend_bound > 0.2
  Desc: This category represents the fraction of slots where no uops are being delivered due to a lack of required resources for accepting new uops in the Backend. The Backend is the portion of the processor core where the out-of-order scheduler dispatches ready uops into their respective execution units and, once completed, these uops get retired according to program order. For example, stalls due to data-cache misses or stalls due to the divider unit being overloaded are both categorized under Backend Bound. Backend Bound is further divided into two main categories: Memory Bound and Core Bound. Sample with: TOPDOWN.BACKEND_BOUND_SLOTS

tma_branch_mispredicts (scale: 100%; default group: TopdownL2) [BadSpec;BrMispredicts;BvMP;TmaL2;TopdownL2;tma_L2_group;tma_bad_speculation_group;tma_issueBM]
  Expr: BR_MISP_RETIRED.ALL_BRANCHES / (BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT) * tma_bad_speculation
  Threshold: tma_branch_mispredicts > 0.1 & tma_bad_speculation > 0.15
  Desc: This metric represents the fraction of slots the CPU has wasted due to Branch Misprediction. These slots are either wasted by uops fetched from an incorrectly speculated program path, or stalls when the out-of-order part of the machine needs to recover its state from the speculative path. Sample with: TOPDOWN.BR_MISPREDICT_SLOTS. Related metrics: tma_info_bad_spec_branch_misprediction_cost, tma_mispredicts_resteers
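The tma_backend_bound expression above defines Backend Bound as the level-1 remainder: whatever slot fraction is not Frontend Bound, Bad Speculation, or Retiring. A sketch of that identity with hypothetical inputs; note that the bad-speculation term here is a placeholder approximation (its exact expression is not part of this table excerpt):

```python
# Hypothetical raw counts for a level-1 Topdown breakdown.
core_clks = 50_000_000
slots = 4 * core_clks                # tma_info_thread_slots (4-wide machine)
idq_uops_not_delivered = 30_000_000  # IDQ_UOPS_NOT_DELIVERED.CORE
uops_issued = 160_000_000            # UOPS_ISSUED.ANY
uops_retired_slots = 150_000_000     # UOPS_RETIRED.RETIRE_SLOTS
recovery_slots = 4_000_000           # slots lost to recovery (assumed value)

frontend_bound = idq_uops_not_delivered / slots
retiring = uops_retired_slots / slots
# Approximation: issued-but-not-retired uops plus recovery slots.
bad_speculation = (uops_issued - uops_retired_slots + recovery_slots) / slots
# Backend Bound is the remainder once the other three categories are counted.
backend_bound = 1 - (frontend_bound + bad_speculation + retiring)
print(f"FE {frontend_bound:.1%}  BadSpec {bad_speculation:.1%}  "
      f"Ret {retiring:.1%}  BE {backend_bound:.1%}")
```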
tma_clears_resteers (scale: 100%) [BadSpec;MachineClears;TopdownL4;tma_L4_group;tma_branch_resteers_group;tma_issueMC]
  Expr: MACHINE_CLEARS.COUNT * tma_branch_resteers / (BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT + BACLEARS.ANY)
  Threshold: tma_clears_resteers > 0.05 & (tma_branch_resteers > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15))
  Desc: This metric represents the fraction of cycles the CPU was stalled due to Branch Resteers as a result of Machine Clears. Sample with: INT_MISC.CLEAR_RESTEER_CYCLES. Related metrics: tma_l1_bound, tma_machine_clears, tma_microcode_sequencer, tma_ms_switches

tma_contested_accesses (scale: 100%) [BvMS;DataSharing;Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_l3_bound_group]
  Expr: (60 * (MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HITM * (1 + MEM_LOAD_UOPS_RETIRED.HIT_LFB / (MEM_LOAD_UOPS_RETIRED.L2_HIT + MEM_LOAD_UOPS_RETIRED.L3_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HITM + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_MISS + MEM_LOAD_UOPS_RETIRED.L3_MISS))) + 43 * (MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_MISS * (1 + MEM_LOAD_UOPS_RETIRED.HIT_LFB / (MEM_LOAD_UOPS_RETIRED.L2_HIT + MEM_LOAD_UOPS_RETIRED.L3_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HITM + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_MISS + MEM_LOAD_UOPS_RETIRED.L3_MISS)))) / tma_info_thread_clks
  Threshold: tma_contested_accesses > 0.05 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))
  Desc: This metric estimates the fraction of cycles while the memory subsystem was handling synchronizations due to contested accesses. Contested accesses occur when data written by one Logical Processor is read by another Logical Processor on a different Physical Core. Examples of contested accesses include synchronizations such as locks, true data sharing such as modified locked variables, and false sharing. Sample with: MEM_LOAD_L3_HIT_RETIRED.XSNP_FWD;MEM_LOAD_L3_HIT_RETIRED.XSNP_MISS. Related metrics: tma_data_sharing, tma_false_sharing, tma_machine_clears, tma_remote_cache

tma_data_sharing (scale: 100%) [BvMS;Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_l3_bound_group]
  Expr: 43 * (MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HIT * (1 + MEM_LOAD_UOPS_RETIRED.HIT_LFB / (MEM_LOAD_UOPS_RETIRED.L2_HIT + MEM_LOAD_UOPS_RETIRED.L3_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HITM + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_MISS + MEM_LOAD_UOPS_RETIRED.L3_MISS))) / tma_info_thread_clks
  Threshold: tma_data_sharing > 0.05 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))
  Desc: This metric estimates the fraction of cycles while the memory subsystem was handling synchronizations due to data-sharing accesses. Data shared by multiple Logical Processors (even just read shared) may cause increased access latency due to cache coherency. Excessive data sharing can drastically harm multithreaded performance. Sample with: MEM_LOAD_L3_HIT_RETIRED.XSNP_NO_FWD. Related metrics: tma_contested_accesses, tma_false_sharing, tma_machine_clears, tma_remote_cache
tma_divider (scale: 100%) [BvCB;TopdownL3;tma_L3_group;tma_core_bound_group]
  Expr: ARITH.FPU_DIV_ACTIVE / tma_info_core_core_clks
  Threshold: tma_divider > 0.2 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2)
  Desc: This metric represents the fraction of cycles where the Divider unit was active. Divide and square-root instructions are performed by the Divider unit and can take considerably longer latency than integer or Floating Point addition, subtraction, or multiplication. Sample with: ARITH.DIVIDER_ACTIVE

tma_dram_bound (scale: 100%) [MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_memory_bound_group]
  Expr: (1 - MEM_LOAD_UOPS_RETIRED.L3_HIT / (MEM_LOAD_UOPS_RETIRED.L3_HIT + 7 * MEM_LOAD_UOPS_RETIRED.L3_MISS)) * CYCLE_ACTIVITY.STALLS_L2_MISS / tma_info_thread_clks
  Threshold: tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)
  Desc: This metric estimates how often the CPU was stalled on accesses to external memory (DRAM) by loads. Better caching can improve the latency and increase performance. Sample with: MEM_LOAD_RETIRED.L3_MISS_PS

tma_dsb_switches (scale: 100%) [DSBmiss;FetchLat;TopdownL3;tma_L3_group;tma_fetch_latency_group;tma_issueFB]
  Expr: DSB2MITE_SWITCHES.PENALTY_CYCLES / tma_info_thread_clks
  Threshold: tma_dsb_switches > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15)
  Desc: This metric represents the fraction of cycles the CPU was stalled due to switches from the DSB to the MITE pipeline. The DSB (decoded i-cache) is a Uop Cache where the front-end directly delivers uops (micro operations), avoiding heavy x86 decoding. The DSB pipeline has shorter latency and delivers higher bandwidth than the MITE (legacy instruction decode pipeline). Switching between the two pipelines can cause penalties, hence this metric measures the exposed penalty. Sample with: FRONTEND_RETIRED.DSB_MISS_PS. Related metrics: tma_fetch_bandwidth, tma_info_frontend_dsb_coverage, tma_info_inst_mix_iptb, tma_lcp

tma_dtlb_load: identical to the tma_dtlb_load entry above, except: Sample with: MEM_INST_RETIRED.STLB_MISS_LOADS_PS
tma_dtlb_store: identical to the tma_dtlb_store entry above, except: Sample with: MEM_INST_RETIRED.STLB_MISS_STORES_PS

tma_fetch_bandwidth: identical to the tma_fetch_bandwidth entry above, with the addition: Sample with: FRONTEND_RETIRED.LATENCY_GE_2_BUBBLES_GE_1_PS;FRONTEND_RETIRED.LATENCY_GE_1_PS;FRONTEND_RETIRED.LATENCY_GE_2_PS

tma_fetch_latency: identical to the tma_fetch_latency entry above, except: Sample with: FRONTEND_RETIRED.LATENCY_GE_16_PS;FRONTEND_RETIRED.LATENCY_GE_8_PS (replacing RS_EVENTS.EMPTY_END)

tma_frontend_bound: identical to the tma_frontend_bound entry above, with the addition: Sample with: FRONTEND_RETIRED.LATENCY_GE_4_PS
Sample with: FRONTEND_RETIRED.LATENCY_GE_4_PS100%TopdownL100tma_icache_missesBigFootprint;BvBC;FetchLat;IcMiss;TopdownL3;tma_L3_group;tma_fetch_latency_groupICACHE.IFDATA_STALL / tma_info_thread_clkstma_icache_misses > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15)This metric represents fraction of cycles the CPU was stalled due to instruction cache missesThis metric represents fraction of cycles the CPU was stalled due to instruction cache misses. Sample with: FRONTEND_RETIRED.L2_MISS_PS;FRONTEND_RETIRED.L1I_MISS_PS100%00tma_info_system_dram_bw_useHPC;MemOffcore;MemoryBW;SoC;tma_issueBW64 * (UNC_M_CAS_COUNT.RD + UNC_M_CAS_COUNT.WR) / 1e9 / duration_timeAverage external Memory Bandwidth Use for reads and writes [GB / sec]Average external Memory Bandwidth Use for reads and writes [GB / sec]. Related metrics: tma_fb_full, tma_mem_bandwidth, tma_sq_full00tma_info_system_socket_clksSoCcbox_0@event\=0x0@Socket actual clocks when any core is active on that socket00tma_itlb_missesBigFootprint;BvBC;FetchLat;MemoryTLB;TopdownL3;tma_L3_group;tma_fetch_latency_group(14 * ITLB_MISSES.STLB_HIT + cpu@ITLB_MISSES.WALK_DURATION\,cmask\=1@ + 7 * ITLB_MISSES.WALK_COMPLETED) / tma_info_thread_clkstma_itlb_misses > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15)This metric represents fraction of cycles the CPU was stalled due to Instruction TLB (ITLB) missesThis metric represents fraction of cycles the CPU was stalled due to Instruction TLB (ITLB) misses. Sample with: FRONTEND_RETIRED.STLB_MISS_PS;FRONTEND_RETIRED.ITLB_MISS_PS100%00tma_l1_boundCacheHits;MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_issueL1;tma_issueMC;tma_memory_bound_groupmax((CYCLE_ACTIVITY.STALLS_MEM_ANY - CYCLE_ACTIVITY.STALLS_L1D_MISS) / tma_info_thread_clks, 0)tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)This metric estimates how often the CPU was stalled without loads missing the L1 data cacheThis metric estimates how often the CPU was stalled without loads missing the L1 data cache. The L1 data cache typically has the shortest latency. However; in certain cases like loads blocked on older stores; a load might suffer due to high latency even though it is being satisfied by the L1. Another example is loads that miss in the TLB. These cases are characterized by execution unit stalls; while some non-completed demand load lives in the machine without that demand load actually missing the L1 cache. Sample with: MEM_LOAD_RETIRED.L1_HIT_PS;MEM_LOAD_RETIRED.FB_HIT_PS. Related metrics: tma_clears_resteers, tma_machine_clears, tma_microcode_sequencer, tma_ms_switches, tma_ports_utilized_1100%00tma_l2_boundBvML;CacheHits;MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_memory_bound_group(CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS) / tma_info_thread_clkstma_l2_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)This metric estimates how often the CPU was stalled due to L2 cache accesses by loadsThis metric estimates how often the CPU was stalled due to L2 cache accesses by loads. Avoiding cache misses (i.e. L1 misses/L2 hits) can improve the latency and increase performance. 
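tma_info_system_dram_bw_use above turns CAS counts into bandwidth because every DRAM CAS command moves one 64-byte cache line. The same arithmetic as a small Python sketch (the counter totals and wall-clock seconds are assumed inputs):

def dram_bw_gb_per_s(cas_rd, cas_wr, seconds):
    # 64 bytes per CAS command; 1e9 converts bytes to gigabytes.
    return 64 * (cas_rd + cas_wr) / 1e9 / seconds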
Sample with: MEM_LOAD_RETIRED.L2_HIT_PS100%00tma_l3_boundCacheHits;MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_memory_bound_groupMEM_LOAD_UOPS_RETIRED.L3_HIT / (MEM_LOAD_UOPS_RETIRED.L3_HIT + 7 * MEM_LOAD_UOPS_RETIRED.L3_MISS) * CYCLE_ACTIVITY.STALLS_L2_MISS / tma_info_thread_clkstma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)This metric estimates how often the CPU was stalled due to load accesses to L3 cache or contended with a sibling CoreThis metric estimates how often the CPU was stalled due to load accesses to L3 cache or contended with a sibling Core. Avoiding cache misses (i.e. L2 misses/L3 hits) can improve the latency and increase performance. Sample with: MEM_LOAD_RETIRED.L3_HIT_PS100%03tma_l3_hit_latencyBvML;MemoryLat;TopdownL4;tma_L4_group;tma_issueLat;tma_l3_bound_group29 * (MEM_LOAD_UOPS_RETIRED.L3_HIT * (1 + MEM_LOAD_UOPS_RETIRED.HIT_LFB / (MEM_LOAD_UOPS_RETIRED.L2_HIT + MEM_LOAD_UOPS_RETIRED.L3_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HITM + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_MISS + MEM_LOAD_UOPS_RETIRED.L3_MISS))) / tma_info_thread_clkstma_l3_hit_latency > 0.1 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))This metric estimates fraction of cycles with demand load accesses that hit the L3 cache under unloaded scenarios (possibly L3 latency limited)This metric estimates fraction of cycles with demand load accesses that hit the L3 cache under unloaded scenarios (possibly L3 latency limited). Avoiding private cache misses (i.e. L2 misses/L3 hits) will improve the latency; reduce contention with sibling physical cores and increase performance. Note the value of this node may overlap with its siblings. Sample with: MEM_LOAD_RETIRED.L3_HIT_PS. Related metrics: tma_mem_latency100%01tma_load_op_utilizationTopdownL5;tma_L5_group;tma_ports_utilized_3m_group(UOPS_DISPATCHED_PORT.PORT_2 + UOPS_DISPATCHED_PORT.PORT_3 + UOPS_DISPATCHED_PORT.PORT_7 - UOPS_DISPATCHED_PORT.PORT_4) / (2 * tma_info_core_core_clks)tma_load_op_utilization > 0.6This metric represents Core fraction of cycles CPU dispatched uops on execution port for Load operationsThis metric represents Core fraction of cycles CPU dispatched uops on execution port for Load operations. Sample with: UOPS_DISPATCHED.PORT_2_3_10100%02tma_lock_latencyOffcore;TopdownL4;tma_L4_group;tma_issueRFO;tma_l1_bound_groupMEM_UOPS_RETIRED.LOCK_LOADS / MEM_UOPS_RETIRED.ALL_STORES * min(CPU_CLK_UNHALTED.THREAD, OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_RFO) / tma_info_thread_clkstma_lock_latency > 0.2 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))This metric represents fraction of cycles the CPU spent handling cache misses due to lock operationsThis metric represents fraction of cycles the CPU spent handling cache misses due to lock operations. Due to the microarchitecture handling of locks; they are classified as L1_Bound regardless of what memory source satisfied them. Sample with: MEM_INST_RETIRED.LOCK_LOADS. Related metrics: tma_store_latency100%01tma_microcode_sequencerMicroSeq;TopdownL3;tma_L3_group;tma_heavy_operations_group;tma_issueMC;tma_issueMSUOPS_RETIRED.RETIRE_SLOTS / UOPS_ISSUED.ANY * IDQ.MS_UOPS / tma_info_thread_slotstma_microcode_sequencer > 0.05 & tma_heavy_operations > 0.1This metric represents fraction of slots the CPU was retiring uops fetched by the Microcode Sequencer (MS) unitThis metric represents fraction of slots the CPU was retiring uops fetched by the Microcode Sequencer (MS) unit. 
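Note how tma_l3_bound here and tma_dram_bound (earlier in this table) split the same CYCLE_ACTIVITY.STALLS_L2_MISS cycles: an L3 miss is weighted 7x an L3 hit, approximating the DRAM-to-L3 latency ratio. A sketch of that apportioning, with hypothetical counter inputs:

def split_l2_miss_stalls(l3_hit, l3_miss, stalls_l2_miss, clks):
    # Weight misses 7x to approximate DRAM latency relative to an L3 hit,
    # then divide the observed L2-miss stall cycles proportionally.
    hit_share = l3_hit / (l3_hit + 7 * l3_miss)
    l3_bound = hit_share * stalls_l2_miss / clks
    dram_bound = (1 - hit_share) * stalls_l2_miss / clks
    return l3_bound, dram_bound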
The MS is used for CISC instructions not supported by the default decoders (like repeat move strings; or CPUID); or by microcode assists used to address some operation modes (like in Floating Point assists). These cases can often be avoided. Sample with: UOPS_RETIRED.MS. Related metrics: tma_clears_resteers, tma_l1_bound, tma_machine_clears, tma_ms_switches100%00tma_mispredicts_resteersBadSpec;BrMispredicts;BvMP;TopdownL4;tma_L4_group;tma_branch_resteers_group;tma_issueBMBR_MISP_RETIRED.ALL_BRANCHES * tma_branch_resteers / (BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT + BACLEARS.ANY)tma_mispredicts_resteers > 0.05 & (tma_branch_resteers > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15))This metric represents fraction of cycles the CPU was stalled due to Branch Resteers as a result of Branch Misprediction at execution stageThis metric represents fraction of cycles the CPU was stalled due to Branch Resteers as a result of Branch Misprediction at execution stage. Sample with: INT_MISC.CLEAR_RESTEER_CYCLES. Related metrics: tma_branch_mispredicts, tma_info_bad_spec_branch_misprediction_cost100%00tma_miteDSBmiss;FetchBW;TopdownL3;tma_L3_group;tma_fetch_bandwidth_group(IDQ.ALL_MITE_CYCLES_ANY_UOPS - IDQ.ALL_MITE_CYCLES_4_UOPS) / tma_info_core_core_clks / 2tma_mite > 0.1 & tma_fetch_bandwidth > 0.2This metric represents Core fraction of cycles in which CPU was likely limited due to the MITE pipeline (the legacy decode pipeline)This metric represents Core fraction of cycles in which CPU was likely limited due to the MITE pipeline (the legacy decode pipeline). This pipeline is used for code that was not pre-cached in the DSB or LSD. For example; inefficiencies due to asymmetric decoders; use of long immediate or LCP can manifest as MITE fetch bandwidth bottleneck. Sample with: FRONTEND_RETIRED.ANY_DSB_MISS100%00tma_port_0Compute;TopdownL6;tma_L6_group;tma_alu_op_utilization_group;tma_issue2PUOPS_DISPATCHED_PORT.PORT_0 / tma_info_core_core_clkstma_port_0 > 0.6This metric represents Core fraction of cycles CPU dispatched uops on execution port 0 ([SNB+] ALU; [HSW+] ALU and 2nd branch)This metric represents Core fraction of cycles CPU dispatched uops on execution port 0 ([SNB+] ALU; [HSW+] ALU and 2nd branch). Sample with: UOPS_DISPATCHED.PORT_0. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2100%00tma_port_1TopdownL6;tma_L6_group;tma_alu_op_utilization_group;tma_issue2PUOPS_DISPATCHED_PORT.PORT_1 / tma_info_core_core_clkstma_port_1 > 0.6This metric represents Core fraction of cycles CPU dispatched uops on execution port 1 (ALU)This metric represents Core fraction of cycles CPU dispatched uops on execution port 1 (ALU). Sample with: UOPS_DISPATCHED.PORT_1. 
Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_port_0, tma_port_5, tma_port_6, tma_ports_utilized_2100%00tma_port_2TopdownL6;tma_L6_group;tma_load_op_utilization_groupUOPS_DISPATCHED_PORT.PORT_2 / tma_info_core_core_clkstma_port_2 > 0.6This metric represents Core fraction of cycles CPU dispatched uops on execution port 2 ([SNB+]Loads and Store-address; [ICL+] Loads)100%00tma_port_3TopdownL6;tma_L6_group;tma_load_op_utilization_groupUOPS_DISPATCHED_PORT.PORT_3 / tma_info_core_core_clkstma_port_3 > 0.6This metric represents Core fraction of cycles CPU dispatched uops on execution port 3 ([SNB+]Loads and Store-address; [ICL+] Loads)100%00tma_port_4TopdownL6;tma_L6_group;tma_issueSpSt;tma_store_op_utilization_grouptma_store_op_utilizationtma_port_4 > 0.6This metric represents Core fraction of cycles CPU dispatched uops on execution port 4 (Store-data)This metric represents Core fraction of cycles CPU dispatched uops on execution port 4 (Store-data). Related metrics: tma_split_stores100%00tma_port_5TopdownL6;tma_L6_group;tma_alu_op_utilization_group;tma_issue2PUOPS_DISPATCHED_PORT.PORT_5 / tma_info_core_core_clkstma_port_5 > 0.6This metric represents Core fraction of cycles CPU dispatched uops on execution port 5 ([SNB+] Branches and ALU; [HSW+] ALU)This metric represents Core fraction of cycles CPU dispatched uops on execution port 5 ([SNB+] Branches and ALU; [HSW+] ALU). Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_port_0, tma_port_1, tma_port_6, tma_ports_utilized_2100%00tma_port_6TopdownL6;tma_L6_group;tma_alu_op_utilization_group;tma_issue2PUOPS_DISPATCHED_PORT.PORT_6 / tma_info_core_core_clkstma_port_6 > 0.6This metric represents Core fraction of cycles CPU dispatched uops on execution port 6 ([HSW+] Primary Branch and simple ALU)This metric represents Core fraction of cycles CPU dispatched uops on execution port 6 ([HSW+] Primary Branch and simple ALU). Sample with: UOPS_DISPATCHED.PORT_6. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_port_0, tma_port_1, tma_port_5, tma_ports_utilized_2100%00tma_port_7TopdownL6;tma_L6_group;tma_store_op_utilization_groupUOPS_DISPATCHED_PORT.PORT_7 / tma_info_core_core_clkstma_port_7 > 0.6This metric represents Core fraction of cycles CPU dispatched uops on execution port 7 ([HSW+]simple Store-address)100%00tma_ports_utilized_1PortsUtil;TopdownL4;tma_L4_group;tma_issueL1;tma_ports_utilization_group((cpu@UOPS_EXECUTED.CORE\,cmask\=1@ - cpu@UOPS_EXECUTED.CORE\,cmask\=2@) / 2 if #SMT_on else (UOPS_EXECUTED.CYCLES_GE_1_UOP_EXEC - UOPS_EXECUTED.CYCLES_GE_2_UOPS_EXEC) / tma_info_core_core_clks)tma_ports_utilized_1 > 0.2 & (tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))This metric represents fraction of cycles where the CPU executed total of 1 uop per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise)This metric represents fraction of cycles where the CPU executed total of 1 uop per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise). This can be due to heavy data-dependency among software instructions; or over oversubscribing a particular hardware resource. 
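The ports_utilized metrics that follow use cmask'ed variants of UOPS_EXECUTED.CORE and halve the result under SMT, because the core-scope event counts both hyperthreads. A sketch of the exactly-one-uop case; the _CMASK1/_CMASK2 key names are invented here to stand for the cpu@...cmask=N@ qualifiers in the expression:

def ports_utilized_1_fraction(c, smt_on, core_clks):
    if smt_on:
        # Core-scope event counts both SMT threads; halve it, per the
        # "/ 2 if #SMT_on" branch of the expression.
        cycles = (c["UOPS_EXECUTED.CORE_CMASK1"]
                  - c["UOPS_EXECUTED.CORE_CMASK2"]) / 2
    else:
        cycles = (c["UOPS_EXECUTED.CYCLES_GE_1_UOP_EXEC"]
                  - c["UOPS_EXECUTED.CYCLES_GE_2_UOPS_EXEC"])
    return cycles / core_clks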
In some other cases with high 1_Port_Utilized and L1_Bound; this metric can point to L1 data-cache latency bottleneck that may not necessarily manifest with complete execution starvation (due to the short L1 latency e.g. walking a linked list) - looking at the assembly can be helpful. Sample with: EXE_ACTIVITY.1_PORTS_UTIL. Related metrics: tma_l1_bound100%00tma_ports_utilized_2PortsUtil;TopdownL4;tma_L4_group;tma_issue2P;tma_ports_utilization_group((cpu@UOPS_EXECUTED.CORE\,cmask\=2@ - cpu@UOPS_EXECUTED.CORE\,cmask\=3@) / 2 if #SMT_on else (UOPS_EXECUTED.CYCLES_GE_2_UOPS_EXEC - UOPS_EXECUTED.CYCLES_GE_3_UOPS_EXEC) / tma_info_core_core_clks)tma_ports_utilized_2 > 0.15 & (tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))This metric represents fraction of cycles CPU executed total of 2 uops per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise)This metric represents fraction of cycles CPU executed total of 2 uops per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise).  Loop Vectorization -most compilers feature auto-Vectorization options today- reduces pressure on the execution ports as multiple elements are calculated with same uop. Sample with: EXE_ACTIVITY.2_PORTS_UTIL. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_port_0, tma_port_1, tma_port_5, tma_port_6100%00tma_ports_utilized_3mBvCB;PortsUtil;TopdownL4;tma_L4_group;tma_ports_utilization_group(cpu@UOPS_EXECUTED.CORE\,cmask\=3@ / 2 if #SMT_on else UOPS_EXECUTED.CYCLES_GE_3_UOPS_EXEC) / tma_info_core_core_clkstma_ports_utilized_3m > 0.4 & (tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))This metric represents fraction of cycles CPU executed total of 3 or more uops per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise)This metric represents fraction of cycles CPU executed total of 3 or more uops per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise). Sample with: UOPS_EXECUTED.CYCLES_GE_3100%00tma_retiringBvUW;TmaL1;TopdownL1;tma_L1_groupUOPS_RETIRED.RETIRE_SLOTS / tma_info_thread_slotstma_retiring > 0.7 | tma_heavy_operations > 0.1This category represents fraction of slots utilized by useful work i.e. issued uops that eventually get retiredThis category represents fraction of slots utilized by useful work i.e. issued uops that eventually get retired. Ideally; all pipeline slots would be attributed to the Retiring category.  Retiring of 100% would indicate the maximum Pipeline_Width throughput was achieved.  Maximizing Retiring typically increases the Instructions-per-cycle (see IPC metric). Note that a high Retiring value does not necessary mean there is no room for more performance.  For example; Heavy-operations or Microcode Assists are categorized under Retiring. They often indicate suboptimal performance and can often be optimized or avoided. 
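tma_retiring closes the Level-1 identity: the four top-level categories are defined to sum to 1. A sketch of the whole Level-1 computation, reusing the tma_backend_bound definition that appears later in this table (4-wide pipeline and dict of raw counts assumed, as before):

def topdown_level1(c, smt_on=False):
    slots = 4 * c["CPU_CLK_UNHALTED.THREAD"]  # tma_info_thread_slots
    frontend = c["IDQ_UOPS_NOT_DELIVERED.CORE"] / slots
    retiring = c["UOPS_RETIRED.RETIRE_SLOTS"] / slots
    recovery = (c["INT_MISC.RECOVERY_CYCLES_ANY"] / 2 if smt_on
                else c["INT_MISC.RECOVERY_CYCLES"])
    # Issued-but-never-retired uops plus recovery bubbles are bad speculation.
    bad_spec = (c["UOPS_ISSUED.ANY"] - c["UOPS_RETIRED.RETIRE_SLOTS"]
                + 4 * recovery) / slots
    backend = 1 - frontend - bad_spec - retiring  # the residual category
    return {"frontend_bound": frontend, "bad_speculation": bad_spec,
            "backend_bound": backend, "retiring": retiring}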
Sample with: UOPS_RETIRED.SLOTS100%TopdownL100tma_split_loadsTopdownL4;tma_L4_group;tma_l1_bound_grouptma_info_memory_load_miss_real_latency * LD_BLOCKS.NO_SR / tma_info_thread_clkstma_split_loads > 0.2 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))This metric estimates fraction of cycles handling memory load split accesses - loads that cross a 64-byte cache line boundaryThis metric estimates fraction of cycles handling memory load split accesses - loads that cross a 64-byte cache line boundary. Sample with: MEM_INST_RETIRED.SPLIT_LOADS_PS100%01tma_split_storesTopdownL4;tma_L4_group;tma_issueSpSt;tma_store_bound_group2 * MEM_UOPS_RETIRED.SPLIT_STORES / tma_info_core_core_clkstma_split_stores > 0.2 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))This metric represents rate of split store accessesThis metric represents rate of split store accesses. Consider aligning your data to the 64-byte cache line granularity. Sample with: MEM_INST_RETIRED.SPLIT_STORES_PS. Related metrics: tma_port_4100%00tma_store_boundMemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_memory_bound_groupRESOURCE_STALLS.SB / tma_info_thread_clkstma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)This metric estimates how often CPU was stalled due to RFO store memory accesses; RFO stores issue a read-for-ownership request before the writeThis metric estimates how often CPU was stalled due to RFO store memory accesses; RFO stores issue a read-for-ownership request before the write. Even though store accesses do not typically stall out-of-order CPUs; there are a few cases where stores can lead to actual stalls. This metric will be flagged should RFO stores be a bottleneck. Sample with: MEM_INST_RETIRED.ALL_STORES_PS100%00tma_store_op_utilizationTopdownL5;tma_L5_group;tma_ports_utilized_3m_groupUOPS_DISPATCHED_PORT.PORT_4 / tma_info_core_core_clkstma_store_op_utilization > 0.6This metric represents Core fraction of cycles CPU dispatched uops on execution port for Store operationsThis metric represents Core fraction of cycles CPU dispatched uops on execution port for Store operations. Sample with: UOPS_DISPATCHED.PORT_7_8100%00tma_unknown_branchesBigFootprint;BvBC;FetchLat;TopdownL4;tma_L4_group;tma_branch_resteers_grouptma_branch_resteers - tma_mispredicts_resteers - tma_clears_resteerstma_unknown_branches > 0.05 & (tma_branch_resteers > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15))This metric represents fraction of cycles the CPU was stalled due to new branch address clearsThis metric represents fraction of cycles the CPU was stalled due to new branch address clears. These are fetched branches the Branch Prediction Unit was unable to recognize (e.g. first time the branch is fetched or hitting BTB capacity limit) hence called Unknown Branches. 
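tma_unknown_branches is a pure residual of its parent, which the one-liner below makes explicit (inputs are the already-computed sibling metric values):

def unknown_branches(branch_resteers, mispredicts_resteers, clears_resteers):
    # Whatever resteer time is not explained by mispredictions or machine
    # clears is attributed to new (unknown) branch address clears.
    return branch_resteers - mispredicts_resteers - clears_resteers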
Sample with: FRONTEND_RETIRED.UNKNOWN_BRANCH100%00cpiCPU_CLK_UNHALTED.THREAD / INST_RETIRED.ANYCycles per instruction retired; indicating how much time each executed instruction took; in units of cycles1per_instr00cpu_operating_frequencyCPU_CLK_UNHALTED.THREAD / CPU_CLK_UNHALTED.REF_TSC * #SYSTEM_TSC_FREQ / 1e9CPU operating frequency (in GHz)1GHz00cpu_utilizationtma_info_system_cpus_utilizedPercentage of time spent in the active CPU power state C0100%00dtlb_load_mpiDTLB_LOAD_MISSES.WALK_COMPLETED / INST_RETIRED.ANYRatio of number of completed page walks (for all page sizes) caused by demand data loads to the total number of completed instructionsRatio of number of completed page walks (for all page sizes) caused by demand data loads to the total number of completed instructions. This implies it missed in the DTLB and further levels of TLB1per_instr00dtlb_store_mpiDTLB_STORE_MISSES.WALK_COMPLETED / INST_RETIRED.ANYRatio of number of completed page walks (for all page sizes) caused by demand data stores to the total number of completed instructionsRatio of number of completed page walks (for all page sizes) caused by demand data stores to the total number of completed instructions. This implies it missed in the DTLB and further levels of TLB1per_instr00io_bandwidth_readcbox@UNC_C_TOR_INSERTS.OPCODE\,filter_opc\=0x19e@ * 64 / 1e6 / duration_timeBandwidth of IO reads that are initiated by end device controllers that are requesting memory from the CPU1MB/s00io_bandwidth_write(cbox@UNC_C_TOR_INSERTS.OPCODE\,filter_opc\=0x1c8\,filter_tid\=0x3e@ + cbox@UNC_C_TOR_INSERTS.OPCODE\,filter_opc\=0x180\,filter_tid\=0x3e@) * 64 / 1e6 / duration_timeBandwidth of IO writes that are initiated by end device controllers that are writing memory to the CPU1MB/s00itlb_large_page_mpiITLB_MISSES.WALK_COMPLETED_2M_4M / INST_RETIRED.ANYRatio of number of completed page walks (for 2 megabyte and 4 megabyte page sizes) caused by a code fetch to the total number of completed instructionsRatio of number of completed page walks (for 2 megabyte and 4 megabyte page sizes) caused by a code fetch to the total number of completed instructions. This implies it missed in the Instruction Translation Lookaside Buffer (ITLB) and further levels of TLB1per_instr00itlb_mpiITLB_MISSES.WALK_COMPLETED / INST_RETIRED.ANYRatio of number of completed page walks (for all page sizes) caused by a code fetch to the total number of completed instructionsRatio of number of completed page walks (for all page sizes) caused by a code fetch to the total number of completed instructions. 
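The cpi and cpu_operating_frequency expressions above are simple counter ratios. A sketch, where tsc_freq_hz stands in for the #SYSTEM_TSC_FREQ literal (a per-system constant that is an assumed input here):

def cpi(c):
    # Average cycles spent per retired instruction.
    return c["CPU_CLK_UNHALTED.THREAD"] / c["INST_RETIRED.ANY"]

def operating_ghz(c, tsc_freq_hz):
    # The actual-to-reference cycle ratio scales the nominal TSC frequency
    # up or down with turbo and power management.
    return (c["CPU_CLK_UNHALTED.THREAD"] / c["CPU_CLK_UNHALTED.REF_TSC"]
            * tsc_freq_hz / 1e9)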
This implies it missed in the ITLB (Instruction TLB) and further levels of TLB1per_instr00l1_i_code_read_misses_with_prefetches_per_instrL2_RQSTS.ALL_CODE_RD / INST_RETIRED.ANYRatio of number of code read requests missing in L1 instruction cache (includes prefetches) to the total number of completed instructions1per_instr00l1d_demand_data_read_hits_per_instrMEM_LOAD_UOPS_RETIRED.L1_HIT / INST_RETIRED.ANYRatio of number of demand load requests hitting in L1 data cache to the total number of completed instructions1per_instr00l1d_mpiL1D.REPLACEMENT / INST_RETIRED.ANYRatio of number of requests missing L1 data cache (includes data+rfo w/ prefetches) to the total number of completed instructions1per_instr00l2_demand_code_mpiL2_RQSTS.CODE_RD_MISS / INST_RETIRED.ANYRatio of number of code read request missing L2 cache to the total number of completed instructions1per_instr00l2_demand_data_read_hits_per_instrMEM_LOAD_UOPS_RETIRED.L2_HIT / INST_RETIRED.ANYRatio of number of completed demand load requests hitting in L2 cache to the total number of completed instructions1per_instr00l2_demand_data_read_mpiMEM_LOAD_UOPS_RETIRED.L2_MISS / INST_RETIRED.ANYRatio of number of completed data read request missing L2 cache to the total number of completed instructions1per_instr00l2_mpiL2_LINES_IN.ALL / INST_RETIRED.ANYRatio of number of requests missing L2 cache (includes code+data+rfo w/ prefetches) to the total number of completed instructions1per_instr00llc_code_read_mpi_demand_plus_prefetch(cbox@UNC_C_TOR_INSERTS.MISS_OPCODE\,filter_opc\=0x181@ + cbox@UNC_C_TOR_INSERTS.MISS_OPCODE\,filter_opc\=0x191@) / INST_RETIRED.ANYRatio of number of code read requests missing last level core cache (includes demand w/ prefetches) to the total number of completed instructions1per_instr00llc_data_read_demand_plus_prefetch_miss_latency1e9 * (cbox@UNC_C_TOR_OCCUPANCY.MISS_OPCODE\,filter_opc\=0x182@ / cbox@UNC_C_TOR_INSERTS.MISS_OPCODE\,filter_opc\=0x182@) / (UNC_C_CLOCKTICKS / (#num_cores / #num_packages * #num_packages)) * duration_timeAverage latency of a last level cache (LLC) demand and prefetch data read miss (read memory access) in nano seconds1ns00llc_data_read_demand_plus_prefetch_miss_latency_for_local_requests1e9 * (cbox@UNC_C_TOR_OCCUPANCY.MISS_LOCAL_OPCODE\,filter_opc\=0x182@ / cbox@UNC_C_TOR_INSERTS.MISS_LOCAL_OPCODE\,filter_opc\=0x182@) / (UNC_C_CLOCKTICKS / (#num_cores / #num_packages * #num_packages)) * duration_timeAverage latency of a last level cache (LLC) demand and prefetch data read miss (read memory access) addressed to local memory in nano seconds1ns00llc_data_read_demand_plus_prefetch_miss_latency_for_remote_requests1e9 * (cbox@UNC_C_TOR_OCCUPANCY.MISS_REMOTE_OPCODE\,filter_opc\=0x182@ / cbox@UNC_C_TOR_INSERTS.MISS_REMOTE_OPCODE\,filter_opc\=0x182@) / (UNC_C_CLOCKTICKS / (#num_cores / #num_packages * #num_packages)) * duration_timeAverage latency of a last level cache (LLC) demand and prefetch data read miss (read memory access) addressed to remote memory in nano seconds1ns00llc_data_read_mpi_demand_plus_prefetch(cbox@UNC_C_TOR_INSERTS.MISS_OPCODE\,filter_opc\=0x182@ + cbox@UNC_C_TOR_INSERTS.MISS_OPCODE\,filter_opc\=0x192@) / INST_RETIRED.ANYRatio of number of data read requests missing last level core cache (includes demand w/ prefetches) to the total number of completed instructions1per_instr00loads_per_instrMEM_UOPS_RETIRED.ALL_LOADS / INST_RETIRED.ANYThe ratio of number of completed memory load instructions to the total number completed 
instructions1per_instr00memory_bandwidth_readUNC_M_CAS_COUNT.RD * 64 / 1e6 / duration_timeDDR memory read bandwidth (MB/sec)1MB/s00memory_bandwidth_total(UNC_M_CAS_COUNT.RD + UNC_M_CAS_COUNT.WR) * 64 / 1e6 / duration_timeDDR memory bandwidth (MB/sec)1MB/s00memory_bandwidth_writeUNC_M_CAS_COUNT.WR * 64 / 1e6 / duration_timeDDR memory write bandwidth (MB/sec)1MB/s00numa_reads_addressed_to_local_dramcbox@UNC_C_TOR_INSERTS.MISS_LOCAL_OPCODE\,filter_opc\=0x182@ / (cbox@UNC_C_TOR_INSERTS.MISS_LOCAL_OPCODE\,filter_opc\=0x182@ + cbox@UNC_C_TOR_INSERTS.MISS_REMOTE_OPCODE\,filter_opc\=0x182@)Memory read that miss the last level cache (LLC) addressed to local DRAM as a percentage of total memory read accesses, does not include LLC prefetches100%00numa_reads_addressed_to_remote_dramcbox@UNC_C_TOR_INSERTS.MISS_REMOTE_OPCODE\,filter_opc\=0x182@ / (cbox@UNC_C_TOR_INSERTS.MISS_LOCAL_OPCODE\,filter_opc\=0x182@ + cbox@UNC_C_TOR_INSERTS.MISS_REMOTE_OPCODE\,filter_opc\=0x182@)Memory reads that miss the last level cache (LLC) addressed to remote DRAM as a percentage of total memory read accesses, does not include LLC prefetches100%00percent_uops_delivered_from_decoded_icacheIDQ.DSB_UOPS / UOPS_ISSUED.ANYUops delivered from decoded instruction cache (decoded stream buffer or DSB) as a percent of total uops delivered to Instruction Decode Queue100%00percent_uops_delivered_from_legacy_decode_pipelineIDQ.MITE_UOPS / UOPS_ISSUED.ANYUops delivered from legacy decode pipeline (Micro-instruction Translation Engine or MITE) as a percent of total uops delivered to Instruction Decode Queue100%00percent_uops_delivered_from_loop_stream_detectorLSD.UOPS / UOPS_ISSUED.ANYUops delivered from loop stream detector(LSD) as a percent of total uops delivered to Instruction Decode Queue100%00percent_uops_delivered_from_microcode_sequencerIDQ.MS_UOPS / UOPS_ISSUED.ANYUops delivered from microcode sequencer (MS) as a percent of total uops delivered to Instruction Decode Queue100%00qpi_data_transmit_bwUNC_Q_TxL_FLITS_G0.DATA * 8 / 1e6 / duration_timeIntel(R) Quick Path Interconnect (QPI) data transmit bandwidth (MB/sec)1MB/s00stores_per_instrMEM_UOPS_RETIRED.ALL_STORES / INST_RETIRED.ANYThe ratio of number of completed memory store instructions to the total number completed instructions1per_instr00tma_contested_accessesBvMS;DataSharing;Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_l3_bound_group(60 * (MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HITM * (1 + MEM_LOAD_UOPS_RETIRED.HIT_LFB / (MEM_LOAD_UOPS_RETIRED.L2_HIT + MEM_LOAD_UOPS_RETIRED.L3_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HITM + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_MISS + MEM_LOAD_UOPS_L3_MISS_RETIRED.LOCAL_DRAM + MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_DRAM + MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_HITM + MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_FWD))) + 43 * (MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_MISS * (1 + MEM_LOAD_UOPS_RETIRED.HIT_LFB / (MEM_LOAD_UOPS_RETIRED.L2_HIT + MEM_LOAD_UOPS_RETIRED.L3_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HITM + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_MISS + MEM_LOAD_UOPS_L3_MISS_RETIRED.LOCAL_DRAM + MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_DRAM + MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_HITM + MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_FWD)))) / tma_info_thread_clkstma_contested_accesses > 0.05 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))This metric estimates fraction of cycles while the memory subsystem was handling synchronizations due to contested accessesThis 
metric estimates fraction of cycles while the memory subsystem was handling synchronizations due to contested accesses. Contested accesses occur when data written by one Logical Processor are read by another Logical Processor on a different Physical Core. Examples of contested accesses include synchronizations such as locks; true data sharing such as modified locked variables; and false sharing. Sample with: MEM_LOAD_L3_HIT_RETIRED.XSNP_HITM_PS;MEM_LOAD_L3_HIT_RETIRED.XSNP_MISS_PS. Related metrics: tma_data_sharing, tma_false_sharing, tma_machine_clears, tma_remote_cache100%01tma_data_sharingBvMS;Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_l3_bound_group43 * (MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HIT * (1 + MEM_LOAD_UOPS_RETIRED.HIT_LFB / (MEM_LOAD_UOPS_RETIRED.L2_HIT + MEM_LOAD_UOPS_RETIRED.L3_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HITM + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_MISS + MEM_LOAD_UOPS_L3_MISS_RETIRED.LOCAL_DRAM + MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_DRAM + MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_HITM + MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_FWD))) / tma_info_thread_clkstma_data_sharing > 0.05 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))This metric estimates fraction of cycles while the memory subsystem was handling synchronizations due to data-sharing accessesThis metric estimates fraction of cycles while the memory subsystem was handling synchronizations due to data-sharing accesses. Data shared by multiple Logical Processors (even just read shared) may cause increased access latency due to cache coherency. Excessive data sharing can drastically harm multithreaded performance. Sample with: MEM_LOAD_L3_HIT_RETIRED.XSNP_HIT_PS. Related metrics: tma_contested_accesses, tma_false_sharing, tma_machine_clears, tma_remote_cache100%01tma_false_sharingBvMS;DataSharing;Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_store_bound_group(200 * OFFCORE_RESPONSE.DEMAND_RFO.LLC_MISS.REMOTE_HITM + 60 * OFFCORE_RESPONSE.DEMAND_RFO.LLC_HIT.HITM_OTHER_CORE) / tma_info_thread_clkstma_false_sharing > 0.05 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))This metric roughly estimates how often CPU was handling synchronizations due to False SharingThis metric roughly estimates how often CPU was handling synchronizations due to False Sharing. False Sharing is a multithreading hiccup; where multiple Logical Processors contend on different data-elements mapped into the same cache line. Sample with: MEM_LOAD_L3_HIT_RETIRED.XSNP_HITM_PS;OFFCORE_RESPONSE.DEMAND_RFO.L3_HIT.SNOOP_HITM. Related metrics: tma_contested_accesses, tma_data_sharing, tma_machine_clears, tma_remote_cache100%00tma_info_memory_tlb_page_walks_utilizationMem;MemoryTLB(ITLB_MISSES.WALK_DURATION + DTLB_LOAD_MISSES.WALK_DURATION + DTLB_STORE_MISSES.WALK_DURATION + 7 * (DTLB_STORE_MISSES.WALK_COMPLETED + DTLB_LOAD_MISSES.WALK_COMPLETED + ITLB_MISSES.WALK_COMPLETED)) / (2 * tma_info_core_core_clks)tma_info_memory_tlb_page_walks_utilization > 0.5Utilization of the core's Page Walker(s) serving STLB misses triggered by instruction/Load/Store accesses00tma_info_system_mem_parallel_readsMem;MemoryBW;SoCUNC_C_TOR_OCCUPANCY.MISS_OPCODE@filter_opc\=0x182@ / UNC_C_TOR_OCCUPANCY.MISS_OPCODE@filter_opc\=0x182\,thresh\=1@Average number of parallel data read requests to external memoryAverage number of parallel data read requests to external memory. 
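tma_info_system_mem_parallel_reads above divides total TOR occupancy by the cycles in which at least one read was outstanding (the thresh=1 qualifier), giving the average memory-level parallelism while any read is in flight. As a sketch:

def avg_parallel_reads(occupancy_sum, cycles_with_reads):
    # occupancy_sum: per-cycle sum of outstanding read entries (TOR occupancy)
    # cycles_with_reads: cycles where that occupancy was >= 1 (thresh=1)
    return occupancy_sum / cycles_with_reads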
Accounts for demand loads and L1/L2 prefetches00tma_info_system_mem_read_latencyMem;MemoryLat;SoC1e9 * (UNC_C_TOR_OCCUPANCY.MISS_OPCODE@filter_opc\=0x182@ / UNC_C_TOR_INSERTS.MISS_OPCODE@filter_opc\=0x182@) / (tma_info_system_socket_clks / duration_time)Average latency of data read request to external memory (in nanoseconds)Average latency of data read request to external memory (in nanoseconds). Accounts for demand loads and L1/L2 prefetches. ([RKL+]memory-controller only)00tma_info_system_uncore_frequencySoCtma_info_system_socket_clks / 1e9 / duration_timeMeasured Average Uncore Frequency for the SoC [GHz]00tma_l3_hit_latencyBvML;MemoryLat;TopdownL4;tma_L4_group;tma_issueLat;tma_l3_bound_group41 * (MEM_LOAD_UOPS_RETIRED.L3_HIT * (1 + MEM_LOAD_UOPS_RETIRED.HIT_LFB / (MEM_LOAD_UOPS_RETIRED.L2_HIT + MEM_LOAD_UOPS_RETIRED.L3_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HITM + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_MISS + MEM_LOAD_UOPS_L3_MISS_RETIRED.LOCAL_DRAM + MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_DRAM + MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_HITM + MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_FWD))) / tma_info_thread_clkstma_l3_hit_latency > 0.1 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))This metric estimates fraction of cycles with demand load accesses that hit the L3 cache under unloaded scenarios (possibly L3 latency limited)This metric estimates fraction of cycles with demand load accesses that hit the L3 cache under unloaded scenarios (possibly L3 latency limited).  Avoiding private cache misses (i.e. L2 misses/L3 hits) will improve the latency; reduce contention with sibling physical cores and increase performance.  Note the value of this node may overlap with its siblings. Sample with: MEM_LOAD_UOPS_RETIRED.L3_HIT_PS. Related metrics: tma_mem_latency100%01tma_local_memServer;TopdownL5;tma_L5_group;tma_mem_latency_group200 * (MEM_LOAD_UOPS_L3_MISS_RETIRED.LOCAL_DRAM * (1 + MEM_LOAD_UOPS_RETIRED.HIT_LFB / (MEM_LOAD_UOPS_RETIRED.L2_HIT + MEM_LOAD_UOPS_RETIRED.L3_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HITM + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_MISS + MEM_LOAD_UOPS_L3_MISS_RETIRED.LOCAL_DRAM + MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_DRAM + MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_HITM + MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_FWD))) / tma_info_thread_clkstma_local_mem > 0.1 & (tma_mem_latency > 0.1 & (tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)))This metric estimates fraction of cycles while the memory subsystem was handling loads from local memoryThis metric estimates fraction of cycles while the memory subsystem was handling loads from local memory. Caching will improve the latency and increase performance. 
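The tma_info_system_mem_read_latency expression above is an application of Little's law: occupancy divided by inserts gives average residency in uncore clocks, and the socket clock rate converts that to time. A minimal sketch with assumed raw inputs:

def mem_read_latency_ns(occupancy_sum, inserts, socket_clks, seconds):
    # Little's law: occupancy / inserts = average residency per request,
    # in uncore clocks; socket_clks / seconds = uncore clock rate.
    uncore_hz = socket_clks / seconds
    return 1e9 * (occupancy_sum / inserts) / uncore_hz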
Sample with: MEM_LOAD_UOPS_L3_MISS_RETIRED.LOCAL_DRAM_PS100%00tma_remote_cacheOffcore;Server;Snoop;TopdownL5;tma_L5_group;tma_issueSyncxn;tma_mem_latency_group(200 * (MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_HITM * (1 + MEM_LOAD_UOPS_RETIRED.HIT_LFB / (MEM_LOAD_UOPS_RETIRED.L2_HIT + MEM_LOAD_UOPS_RETIRED.L3_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HITM + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_MISS + MEM_LOAD_UOPS_L3_MISS_RETIRED.LOCAL_DRAM + MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_DRAM + MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_HITM + MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_FWD))) + 180 * (MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_FWD * (1 + MEM_LOAD_UOPS_RETIRED.HIT_LFB / (MEM_LOAD_UOPS_RETIRED.L2_HIT + MEM_LOAD_UOPS_RETIRED.L3_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HITM + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_MISS + MEM_LOAD_UOPS_L3_MISS_RETIRED.LOCAL_DRAM + MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_DRAM + MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_HITM + MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_FWD)))) / tma_info_thread_clkstma_remote_cache > 0.05 & (tma_mem_latency > 0.1 & (tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)))This metric estimates fraction of cycles while the memory subsystem was handling loads from remote cache in other sockets including synchronizations issuesThis metric estimates fraction of cycles while the memory subsystem was handling loads from remote cache in other sockets including synchronizations issues. This is caused often due to non-optimal NUMA allocations. #link to NUMA article. Sample with: MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_HITM_PS;MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_FWD_PS. Related metrics: tma_contested_accesses, tma_data_sharing, tma_false_sharing, tma_machine_clears100%01tma_remote_memServer;Snoop;TopdownL5;tma_L5_group;tma_mem_latency_group310 * (MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_DRAM * (1 + MEM_LOAD_UOPS_RETIRED.HIT_LFB / (MEM_LOAD_UOPS_RETIRED.L2_HIT + MEM_LOAD_UOPS_RETIRED.L3_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HITM + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_MISS + MEM_LOAD_UOPS_L3_MISS_RETIRED.LOCAL_DRAM + MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_DRAM + MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_HITM + MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_FWD))) / tma_info_thread_clkstma_remote_mem > 0.1 & (tma_mem_latency > 0.1 & (tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)))This metric estimates fraction of cycles while the memory subsystem was handling loads from remote memoryThis metric estimates fraction of cycles while the memory subsystem was handling loads from remote memory. This is caused often due to non-optimal NUMA allocations. #link to NUMA article. Sample with: MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_DRAM_PS100%00uncore_frequencyUNC_C_CLOCKTICKS / (#num_cores / #num_packages * #num_packages) / 1e9 / duration_timeUncore operating frequency in GHz1GHz00dtlb_2mb_large_page_load_mpiDTLB_LOAD_MISSES.WALK_COMPLETED_2M_4M / INST_RETIRED.ANYRatio of number of completed page walks (for 2 megabyte page sizes) caused by demand data loads to the total number of completed instructionsRatio of number of completed page walks (for 2 megabyte page sizes) caused by demand data loads to the total number of completed instructions. 
This implies it missed in the Data Translation Lookaside Buffer (DTLB) and further levels of TLB1per_instr00io_bandwidth_read(UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART0 + UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART1 + UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART2 + UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART3) * 4 / 1e6 / duration_timeBandwidth of IO reads that are initiated by end device controllers that are requesting memory from the CPU1MB/s00io_bandwidth_write(UNC_IIO_PAYLOAD_BYTES_IN.MEM_WRITE.PART0 + UNC_IIO_PAYLOAD_BYTES_IN.MEM_WRITE.PART1 + UNC_IIO_PAYLOAD_BYTES_IN.MEM_WRITE.PART2 + UNC_IIO_PAYLOAD_BYTES_IN.MEM_WRITE.PART3) * 4 / 1e6 / duration_timeBandwidth of IO writes that are initiated by end device controllers that are writing memory to the CPU1MB/s00l1d_demand_data_read_hits_per_instrMEM_LOAD_RETIRED.L1_HIT / INST_RETIRED.ANYRatio of number of demand load requests hitting in L1 data cache to the total number of completed instructions1per_instr00l2_demand_data_read_hits_per_instrMEM_LOAD_RETIRED.L2_HIT / INST_RETIRED.ANYRatio of number of completed demand load requests hitting in L2 cache to the total number of completed instructions1per_instr00l2_demand_data_read_mpiMEM_LOAD_RETIRED.L2_MISS / INST_RETIRED.ANYRatio of number of completed data read request missing L2 cache to the total number of completed instructions1per_instr00llc_code_read_mpi_demand_plus_prefetchcha@UNC_CHA_TOR_INSERTS.IA_MISS\,config1\=0x12cc0233@ / INST_RETIRED.ANYRatio of number of code read requests missing last level core cache (includes demand w/ prefetches) to the total number of completed instructions1per_instr00llc_data_read_demand_plus_prefetch_miss_latency1e9 * (cha@UNC_CHA_TOR_OCCUPANCY.IA_MISS\,config1\=0x40433@ / cha@UNC_CHA_TOR_INSERTS.IA_MISS\,config1\=0x40433@) / (UNC_CHA_CLOCKTICKS / (source_count(UNC_CHA_CLOCKTICKS) * #num_packages)) * duration_timeAverage latency of a last level cache (LLC) demand and prefetch data read miss (read memory access) in nano seconds1ns00llc_data_read_demand_plus_prefetch_miss_latency_for_local_requests1e9 * (cha@UNC_CHA_TOR_OCCUPANCY.IA_MISS\,config1\=0x40432@ / cha@UNC_CHA_TOR_INSERTS.IA_MISS\,config1\=0x40432@) / (UNC_CHA_CLOCKTICKS / (source_count(UNC_CHA_CLOCKTICKS) * #num_packages)) * duration_timeAverage latency of a last level cache (LLC) demand and prefetch data read miss (read memory access) addressed to local memory in nano seconds1ns00llc_data_read_demand_plus_prefetch_miss_latency_for_remote_requests1e9 * (cha@UNC_CHA_TOR_OCCUPANCY.IA_MISS\,config1\=0x40431@ / cha@UNC_CHA_TOR_INSERTS.IA_MISS\,config1\=0x40431@) / (UNC_CHA_CLOCKTICKS / (source_count(UNC_CHA_CLOCKTICKS) * #num_packages)) * duration_timeAverage latency of a last level cache (LLC) demand and prefetch data read miss (read memory access) addressed to remote memory in nano seconds1ns00llc_data_read_mpi_demand_plus_prefetchcha@UNC_CHA_TOR_INSERTS.IA_MISS\,config1\=0x12d40433@ / INST_RETIRED.ANYRatio of number of data read requests missing last level core cache (includes demand w/ prefetches) to the total number of completed instructions1per_instr00llc_miss_local_memory_bandwidth_readUNC_CHA_REQUESTS.READS_LOCAL * 64 / 1e6 / duration_timeBandwidth (MB/sec) of read requests that miss the last level cache (LLC) and go to local memory1MB/s00llc_miss_local_memory_bandwidth_writeUNC_CHA_REQUESTS.WRITES_LOCAL * 64 / 1e6 / duration_timeBandwidth (MB/sec) of write requests that miss the last level cache (LLC) and go to local memory1MB/s00llc_miss_remote_memory_bandwidth_readUNC_CHA_REQUESTS.READS_REMOTE * 64 / 1e6 / 
duration_timeBandwidth (MB/sec) of read requests that miss the last level cache (LLC) and go to remote memory1MB/s00llc_miss_remote_memory_bandwidth_writeUNC_CHA_REQUESTS.WRITES_REMOTE * 64 / 1e6 / duration_timeBandwidth (MB/sec) of write requests that miss the last level cache (LLC) and go to remote memory1MB/s00loads_per_instrMEM_INST_RETIRED.ALL_LOADS / INST_RETIRED.ANYThe ratio of number of completed memory load instructions to the total number completed instructions1per_instr00numa_reads_addressed_to_local_dramcha@UNC_CHA_TOR_INSERTS.IA_MISS\,config1\=0x40432@ / (cha@UNC_CHA_TOR_INSERTS.IA_MISS\,config1\=0x40432@ + cha@UNC_CHA_TOR_INSERTS.IA_MISS\,config1\=0x40431@)Memory read that miss the last level cache (LLC) addressed to local DRAM as a percentage of total memory read accesses, does not include LLC prefetches100%00numa_reads_addressed_to_remote_dramcha@UNC_CHA_TOR_INSERTS.IA_MISS\,config1\=0x40431@ / (cha@UNC_CHA_TOR_INSERTS.IA_MISS\,config1\=0x40432@ + cha@UNC_CHA_TOR_INSERTS.IA_MISS\,config1\=0x40431@)Memory reads that miss the last level cache (LLC) addressed to remote DRAM as a percentage of total memory read accesses, does not include LLC prefetches100%00percent_uops_delivered_from_decoded_icacheIDQ.DSB_UOPS / (IDQ.DSB_UOPS + IDQ.MITE_UOPS + IDQ.MS_UOPS + LSD.UOPS)Uops delivered from decoded instruction cache (decoded stream buffer or DSB) as a percent of total uops delivered to Instruction Decode Queue100%00percent_uops_delivered_from_legacy_decode_pipelineIDQ.MITE_UOPS / (IDQ.DSB_UOPS + IDQ.MITE_UOPS + IDQ.MS_UOPS + LSD.UOPS)Uops delivered from legacy decode pipeline (Micro-instruction Translation Engine or MITE) as a percent of total uops delivered to Instruction Decode Queue100%00percent_uops_delivered_from_microcode_sequencerIDQ.MS_UOPS / (IDQ.DSB_UOPS + IDQ.MITE_UOPS + IDQ.MS_UOPS + LSD.UOPS)Uops delivered from microcode sequencer (MS) as a percent of total uops delivered to Instruction Decode Queue100%00pmem_memory_bandwidth_readUNC_M_PMM_RPQ_INSERTS * 64 / 1e6 / duration_timeIntel(R) Optane(TM) Persistent Memory(PMEM) memory read bandwidth (MB/sec)1MB/s00pmem_memory_bandwidth_total(UNC_M_PMM_RPQ_INSERTS + UNC_M_PMM_WPQ_INSERTS) * 64 / 1e6 / duration_timeIntel(R) Optane(TM) Persistent Memory(PMEM) memory bandwidth (MB/sec)1MB/s00pmem_memory_bandwidth_writeUNC_M_PMM_WPQ_INSERTS * 64 / 1e6 / duration_timeIntel(R) Optane(TM) Persistent Memory(PMEM) memory write bandwidth (MB/sec)1MB/s00stores_per_instrMEM_INST_RETIRED.ALL_STORES / INST_RETIRED.ANYThe ratio of number of completed memory store instructions to the total number completed instructions1per_instr00tma_alu_op_utilizationTopdownL5;tma_L5_group;tma_ports_utilized_3m_group(UOPS_DISPATCHED_PORT.PORT_0 + UOPS_DISPATCHED_PORT.PORT_1 + UOPS_DISPATCHED_PORT.PORT_5 + UOPS_DISPATCHED_PORT.PORT_6) / tma_info_thread_slotstma_alu_op_utilization > 0.4This metric represents Core fraction of cycles CPU dispatched uops on execution ports for ALU operations100%00tma_assistsBvIO;TopdownL4;tma_L4_group;tma_microcode_sequencer_group34 * (FP_ASSIST.ANY + OTHER_ASSISTS.ANY) / tma_info_thread_slotstma_assists > 0.1 & (tma_microcode_sequencer > 0.05 & tma_heavy_operations > 0.1)This metric estimates fraction of slots the CPU retired uops delivered by the Microcode_Sequencer as a result of AssistsThis metric estimates fraction of slots the CPU retired uops delivered by the Microcode_Sequencer as a result of Assists. 
Assists are long sequences of uops that are required in certain corner-cases for operations that cannot be handled natively by the execution pipeline. For example; when working with very small floating point values (so-called Denormals); the FP units are not set up to perform these operations natively. Instead; a sequence of instructions to perform the computation on the Denormals is injected into the pipeline. Since these microcode sequences might be dozens of uops long; Assists can be extremely deleterious to performance and they can be avoided in many cases. Sample with: OTHER_ASSISTS.ANY100%00tma_backend_boundBvOB;TmaL1;TopdownL1;tma_L1_group1 - tma_frontend_bound - (UOPS_ISSUED.ANY + 4 * (INT_MISC.RECOVERY_CYCLES_ANY / 2 if #SMT_on else INT_MISC.RECOVERY_CYCLES)) / tma_info_thread_slotstma_backend_bound > 0.2This category represents fraction of slots where no uops are being delivered due to a lack of required resources for accepting new uops in the BackendThis category represents fraction of slots where no uops are being delivered due to a lack of required resources for accepting new uops in the Backend. Backend is the portion of the processor core where the out-of-order scheduler dispatches ready uops into their respective execution units; and once completed these uops get retired according to program order. For example; stalls due to data-cache misses or stalls due to the divider unit being overloaded are both categorized under Backend Bound. Backend Bound is further divided into two main categories: Memory Bound and Core Bound100%TopdownL100tma_branch_mispredictsBadSpec;BrMispredicts;BvMP;TmaL2;TopdownL2;tma_L2_group;tma_bad_speculation_group;tma_issueBMBR_MISP_RETIRED.ALL_BRANCHES / (BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT) * tma_bad_speculationtma_branch_mispredicts > 0.1 & tma_bad_speculation > 0.15This metric represents fraction of slots the CPU has wasted due to Branch MispredictionThis metric represents fraction of slots the CPU has wasted due to Branch Misprediction.  These slots are either wasted by uops fetched from an incorrectly speculated program path; or stalls when the out-of-order part of the machine needs to recover its state from a speculative path. Sample with: BR_MISP_RETIRED.ALL_BRANCHES. Related metrics: tma_info_bad_spec_branch_misprediction_cost, tma_info_bottleneck_mispredictions, tma_mispredicts_resteers100%TopdownL201tma_branch_resteersFetchLat;TopdownL3;tma_L3_group;tma_fetch_latency_groupINT_MISC.CLEAR_RESTEER_CYCLES / tma_info_thread_clks + tma_unknown_branchestma_branch_resteers > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15)This metric represents fraction of cycles the CPU was stalled due to Branch ResteersThis metric represents fraction of cycles the CPU was stalled due to Branch Resteers. Branch Resteers estimates the Frontend delay in fetching operations from corrected path; following all sorts of miss-predicted branches. For example; branchy code with lots of miss-predictions might get categorized under Branch Resteers. Note the value of this node may overlap with its siblings. 
Sample with: BR_MISP_RETIRED.ALL_BRANCHES100%00tma_ciscTopdownL4;tma_L4_group;tma_microcode_sequencer_groupmax(0, tma_microcode_sequencer - tma_assists)tma_cisc > 0.1 & (tma_microcode_sequencer > 0.05 & tma_heavy_operations > 0.1)This metric estimates fraction of cycles the CPU retired uops originated from CISC (complex instruction set computer) instructionThis metric estimates fraction of cycles the CPU retired uops originated from CISC (complex instruction set computer) instruction. A CISC instruction has multiple uops that are required to perform the instruction's functionality as in the case of read-modify-write as an example. Since these instructions require multiple uops they may or may not imply sub-optimal use of machine resources100%00tma_clears_resteersBadSpec;MachineClears;TopdownL4;tma_L4_group;tma_branch_resteers_group;tma_issueMC(1 - BR_MISP_RETIRED.ALL_BRANCHES / (BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT)) * INT_MISC.CLEAR_RESTEER_CYCLES / tma_info_thread_clkstma_clears_resteers > 0.05 & (tma_branch_resteers > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15))This metric represents fraction of cycles the CPU was stalled due to Branch Resteers as a result of Machine ClearsThis metric represents fraction of cycles the CPU was stalled due to Branch Resteers as a result of Machine Clears. Sample with: INT_MISC.CLEAR_RESTEER_CYCLES. Related metrics: tma_l1_bound, tma_machine_clears, tma_microcode_sequencer, tma_ms_switches100%00tma_contested_accessesBvMS;DataSharing;Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_l3_bound_group(44 * tma_info_system_core_frequency * (MEM_LOAD_L3_HIT_RETIRED.XSNP_HITM * (OCR.DEMAND_DATA_RD.L3_HIT.HITM_OTHER_CORE / (OCR.DEMAND_DATA_RD.L3_HIT.HITM_OTHER_CORE + OCR.DEMAND_DATA_RD.L3_HIT.HIT_OTHER_CORE_FWD))) + 44 * tma_info_system_core_frequency * MEM_LOAD_L3_HIT_RETIRED.XSNP_MISS) * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS / 2) / tma_info_thread_clkstma_contested_accesses > 0.05 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))This metric estimates fraction of cycles while the memory subsystem was handling synchronizations due to contested accessesThis metric estimates fraction of cycles while the memory subsystem was handling synchronizations due to contested accesses. Contested accesses occur when data written by one Logical Processor are read by another Logical Processor on a different Physical Core. Examples of contested accesses include synchronizations such as locks; true data sharing such as modified locked variables; and false sharing. Sample with: MEM_LOAD_L3_HIT_RETIRED.XSNP_HITM_PS;MEM_LOAD_L3_HIT_RETIRED.XSNP_MISS_PS. 
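tma_clears_resteers above apportions INT_MISC.CLEAR_RESTEER_CYCLES between branch mispredictions and machine clears in proportion to their event counts; a sketch of that split with assumed raw inputs:

def split_resteer_cycles(c, clks):
    misp = c["BR_MISP_RETIRED.ALL_BRANCHES"]
    clears = c["MACHINE_CLEARS.COUNT"]
    # Share of resteer cycles blamed on mispredictions vs. machine clears,
    # proportional to how often each event fired.
    misp_share = misp / (misp + clears)
    resteer = c["INT_MISC.CLEAR_RESTEER_CYCLES"] / clks
    return misp_share * resteer, (1 - misp_share) * resteer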
... Related metrics: tma_data_sharing, tma_false_sharing, tma_machine_clears, tma_remote_cache  (scale: 100%)

tma_data_sharing  [BvMS;Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_l3_bound_group]  (scale: 100%)
  expr:      44 * tma_info_system_core_frequency * (MEM_LOAD_L3_HIT_RETIRED.XSNP_HIT + MEM_LOAD_L3_HIT_RETIRED.XSNP_HITM * (1 - OCR.DEMAND_DATA_RD.L3_HIT.HITM_OTHER_CORE / (OCR.DEMAND_DATA_RD.L3_HIT.HITM_OTHER_CORE + OCR.DEMAND_DATA_RD.L3_HIT.HIT_OTHER_CORE_FWD))) * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS / 2) / tma_info_thread_clks
  threshold: tma_data_sharing > 0.05 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))
  desc:      This metric estimates the fraction of cycles while the memory subsystem was handling synchronizations due to data-sharing accesses. Data shared by multiple Logical Processors (even just read-shared) may cause increased access latency due to cache coherency. Excessive data sharing can drastically harm multithreaded performance. Sample with: MEM_LOAD_L3_HIT_RETIRED.XSNP_HIT_PS. Related metrics: tma_contested_accesses, tma_false_sharing, tma_machine_clears, tma_remote_cache

tma_decoder0_alone  [DSBmiss;FetchBW;TopdownL4;tma_L4_group;tma_issueD0;tma_mite_group]  (scale: 100%)
  expr:      (cpu@INST_DECODED.DECODERS\,cmask\=1@ - cpu@INST_DECODED.DECODERS\,cmask\=2@) / tma_info_core_core_clks / 2
  threshold: tma_decoder0_alone > 0.1 & (tma_mite > 0.1 & tma_fetch_bandwidth > 0.2)
  desc:      This metric represents the fraction of cycles where decoder-0 was the only active decoder. Related metrics: tma_few_uops_instructions

tma_divider  [BvCB;TopdownL3;tma_L3_group;tma_core_bound_group]  (scale: 100%)
  expr:      ARITH.DIVIDER_ACTIVE / tma_info_thread_clks
  threshold: tma_divider > 0.2 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2)
  desc:      This metric represents the fraction of cycles where the Divider unit was active. Divide and square root instructions are performed by the Divider unit and can take considerably longer latency than integer or Floating Point addition, subtraction, or multiplication. Sample with: ARITH.DIVIDER_ACTIVE

tma_dram_bound  [MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_memory_bound_group]  (scale: 100%)
  expr:      (CYCLE_ACTIVITY.STALLS_L3_MISS / tma_info_thread_clks + (CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS) / tma_info_thread_clks - tma_l2_bound - tma_pmm_bound if #has_pmem > 0 else CYCLE_ACTIVITY.STALLS_L3_MISS / tma_info_thread_clks + (CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS) / tma_info_thread_clks - tma_l2_bound)
  threshold: tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)
  desc:      This metric estimates how often the CPU was stalled on accesses to external memory (DRAM) by loads. Better caching can improve the latency and increase performance. Sample with: MEM_LOAD_RETIRED.L3_MISS_PS

tma_dsb  [DSB;FetchBW;TopdownL3;tma_L3_group;tma_fetch_bandwidth_group]  (scale: 100%)
  expr:      (IDQ.DSB_CYCLES_ANY - IDQ.DSB_CYCLES_OK) / tma_info_core_core_clks / 2
  threshold: tma_dsb > 0.15 & tma_fetch_bandwidth > 0.2
  desc:      This metric represents the Core fraction of cycles in which the CPU was likely limited by the DSB (decoded uop cache) fetch pipeline. For example, inefficient utilization of the DSB cache structure, or bank conflicts when reading from it, are categorized here.

tma_dsb_switches  [DSBmiss;FetchLat;TopdownL3;tma_L3_group;tma_fetch_latency_group;tma_issueFB]  (scale: 100%)
  expr:      DSB2MITE_SWITCHES.PENALTY_CYCLES / tma_info_thread_clks
  threshold: tma_dsb_switches > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15)
  desc:      This metric represents the fraction of cycles the CPU was stalled due to switches from DSB to MITE pipelines. The DSB (decoded i-cache) is a Uop Cache where the front-end directly delivers Uops (micro operations) avoiding heavy x86 decoding. The DSB pipeline has shorter latency and delivers higher bandwidth than the MITE (legacy instruction decode pipeline). Switching between the two pipelines can cause penalties, hence this metric measures the exposed penalty. Sample with: FRONTEND_RETIRED.DSB_MISS_PS. Related metrics: tma_fetch_bandwidth, tma_info_botlnk_l2_dsb_bandwidth, tma_info_botlnk_l2_dsb_misses, tma_info_frontend_dsb_coverage, tma_info_inst_mix_iptb, tma_lcp

tma_dtlb_load  [BvMT;MemoryTLB;TopdownL4;tma_L4_group;tma_issueTLB;tma_l1_bound_group]  (scale: 100%)
  expr:      min(9 * cpu@DTLB_LOAD_MISSES.STLB_HIT\,cmask\=1@ + DTLB_LOAD_MISSES.WALK_ACTIVE, max(CYCLE_ACTIVITY.CYCLES_MEM_ANY - CYCLE_ACTIVITY.CYCLES_L1D_MISS, 0)) / tma_info_thread_clks
  threshold: tma_dtlb_load > 0.1 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))
  desc:      This metric roughly estimates the fraction of cycles where the Data TLB (DTLB) was missed by load accesses. TLBs (Translation Look-aside Buffers) are processor caches for recently used entries out of the Page Tables that are used to map virtual to physical addresses by the operating system. This metric approximates the potential delay of demand loads missing the first-level data TLB (assuming a worst-case scenario with back-to-back misses to different pages). This includes hitting in the second-level TLB (STLB) as well as performing a hardware page walk on an STLB miss. Sample with: MEM_INST_RETIRED.STLB_MISS_LOADS_PS. Related metrics: tma_dtlb_store, tma_info_bottleneck_memory_data_tlbs, tma_info_bottleneck_memory_synchronization

tma_dtlb_store  [BvMT;MemoryTLB;TopdownL4;tma_L4_group;tma_issueTLB;tma_store_bound_group]  (scale: 100%)
  expr:      (9 * cpu@DTLB_STORE_MISSES.STLB_HIT\,cmask\=1@ + DTLB_STORE_MISSES.WALK_ACTIVE) / tma_info_core_core_clks
  threshold: tma_dtlb_store > 0.05 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))
  desc:      This metric roughly estimates the fraction of cycles spent handling first-level data TLB store misses. As with ordinary data caching, focus on improving data locality and reducing working-set size to reduce DTLB overhead. Additionally, consider using profile-guided optimization (PGO) to collocate frequently-used data on the same page. Try using larger page sizes for large amounts of frequently-used data. Sample with: MEM_INST_RETIRED.STLB_MISS_STORES_PS. Related metrics: tma_dtlb_load, tma_info_bottleneck_memory_data_tlbs, tma_info_bottleneck_memory_synchronization

tma_false_sharing  [BvMS;DataSharing;Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_store_bound_group]  (scale: 100%)
  expr:      (110 * tma_info_system_core_frequency * (OCR.DEMAND_RFO.L3_MISS.REMOTE_HITM + OCR.PF_L2_RFO.L3_MISS.REMOTE_HITM) + 47.5 * tma_info_system_core_frequency * (OCR.DEMAND_RFO.L3_HIT.HITM_OTHER_CORE + OCR.PF_L2_RFO.L3_HIT.HITM_OTHER_CORE)) / tma_info_thread_clks
  threshold: tma_false_sharing > 0.05 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))
  desc:      This metric roughly estimates how often the CPU was handling synchronizations due to False Sharing. False Sharing is a multithreading hiccup where multiple Logical Processors contend on different data-elements mapped into the same cache line. Sample with: MEM_LOAD_L3_HIT_RETIRED.XSNP_HITM_PS;OFFCORE_RESPONSE.DEMAND_RFO.L3_HIT.SNOOP_HITM. Related metrics: tma_contested_accesses, tma_data_sharing, tma_machine_clears, tma_remote_cache

tma_fb_full  [BvMS;MemoryBW;TopdownL4;tma_L4_group;tma_issueBW;tma_issueSL;tma_issueSmSt;tma_l1_bound_group]  (scale: 100%)
  expr:      tma_info_memory_load_miss_real_latency * cpu@L1D_PEND_MISS.FB_FULL\,cmask\=1@ / tma_info_thread_clks
  threshold: tma_fb_full > 0.3
  desc:      This metric does a *rough estimation* of how often L1D Fill Buffer unavailability limited additional L1D miss memory access requests to proceed. The higher the metric value, the deeper the memory hierarchy level the misses are satisfied from (metric values >1 are valid). Often it hints at approaching bandwidth limits (to L2 cache, L3 cache or external memory). Related metrics: tma_info_bottleneck_cache_memory_bandwidth, tma_info_system_dram_bw_use, tma_mem_bandwidth, tma_sq_full, tma_store_latency, tma_streaming_stores

tma_fetch_bandwidth  [FetchBW;Frontend;TmaL2;TopdownL2;tma_L2_group;tma_frontend_bound_group;tma_issueFB]  (scale: 100%; default group: TopdownL2)
  expr:      tma_frontend_bound - tma_fetch_latency
  threshold: tma_fetch_bandwidth > 0.2
  desc:      This metric represents the fraction of slots the CPU was stalled due to Frontend bandwidth issues. For example, inefficiencies at the instruction decoders, or restrictions for caching in the DSB (decoded uops cache), are categorized under Fetch Bandwidth. In such cases, the Frontend typically delivers a suboptimal amount of uops to the Backend. Sample with: FRONTEND_RETIRED.LATENCY_GE_2_BUBBLES_GE_1_PS;FRONTEND_RETIRED.LATENCY_GE_1_PS;FRONTEND_RETIRED.LATENCY_GE_2_PS. Related metrics: tma_dsb_switches, tma_info_botlnk_l2_dsb_bandwidth, tma_info_botlnk_l2_dsb_misses, tma_info_frontend_dsb_coverage, tma_info_inst_mix_iptb, tma_lcp

tma_few_uops_instructions  [TopdownL3;tma_L3_group;tma_heavy_operations_group;tma_issueD0]  (scale: 100%)
  expr:      tma_heavy_operations - tma_microcode_sequencer
  threshold: tma_few_uops_instructions > 0.05 & tma_heavy_operations > 0.1
  desc:      This metric represents the fraction of slots where the CPU was retiring instructions that are decoded into two or up to ([SNB+] four; [ADL+] five) uops. This highly correlates with the number of uops in such instructions. Related metrics: tma_decoder0_alone
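Each record above follows the same shape: an event expression normalized by a clock or slot count, plus a threshold predicate that only fires when the parent nodes of the TMA tree are themselves hot. A minimal sketch of that evaluation pattern in Python; the event names are real, but every numeric value below is invented for illustration:

    # Sketch: evaluate the tma_divider leaf from raw counter values.
    counters = {
        "ARITH.DIVIDER_ACTIVE": 1.2e8,
        "CPU_CLK_UNHALTED.THREAD": 5.0e8,  # feeds tma_info_thread_clks
    }

    tma_info_thread_clks = counters["CPU_CLK_UNHALTED.THREAD"]
    tma_divider = counters["ARITH.DIVIDER_ACTIVE"] / tma_info_thread_clks

    # The threshold is hierarchical: the leaf is only flagged when its
    # parents (tma_core_bound, tma_backend_bound) exceed their own cutoffs.
    tma_backend_bound = 0.35  # assumed parent values for this sketch
    tma_core_bound = 0.15

    flagged = tma_divider > 0.2 and (tma_core_bound > 0.1 and tma_backend_bound > 0.2)
    print(f"tma_divider = {tma_divider:.1%}, flagged = {flagged}")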
tma_fp_arith  [HPC;TopdownL3;tma_L3_group;tma_light_operations_group]  (scale: 100%)
  expr:      tma_x87_use + tma_fp_scalar + tma_fp_vector
  threshold: tma_fp_arith > 0.2 & tma_light_operations > 0.6
  desc:      This metric represents the overall arithmetic floating-point (FP) operations fraction the CPU has executed (retired). Note this metric's value may exceed its parent due to use of the "Uops" CountDomain and FMA double-counting.

tma_fp_assists  [HPC;TopdownL5;tma_L5_group;tma_assists_group]  (scale: 100%)
  expr:      34 * FP_ASSIST.ANY / tma_info_thread_slots
  threshold: tma_fp_assists > 0.1
  desc:      This metric roughly estimates the fraction of slots the CPU retired uops as a result of handling Floating Point (FP) Assists. FP Assists may apply when working with very small floating point values (so-called Denormals).

tma_fp_vector  [Compute;Flops;TopdownL4;tma_L4_group;tma_fp_arith_group;tma_issue2P]  (scale: 100%)
  expr:      cpu@FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE\,umask\=0xfc@ / UOPS_RETIRED.RETIRE_SLOTS
  threshold: tma_fp_vector > 0.1 & (tma_fp_arith > 0.2 & tma_light_operations > 0.6)
  desc:      This metric approximates the arithmetic floating-point (FP) vector uops fraction the CPU has retired, aggregated across all vector widths. May overcount due to FMA double counting. Related metrics: tma_fp_scalar, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_port_0, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2

tma_fp_vector_512b  [Compute;Flops;TopdownL5;tma_L5_group;tma_fp_vector_group;tma_issue2P]  (scale: 100%)
  expr:      (FP_ARITH_INST_RETIRED.512B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE) / UOPS_RETIRED.RETIRE_SLOTS
  threshold: tma_fp_vector_512b > 0.1 & (tma_fp_vector > 0.1 & (tma_fp_arith > 0.2 & tma_light_operations > 0.6))
  desc:      This metric approximates the arithmetic FP vector uops fraction the CPU has retired for 512-bit wide vectors. May overcount due to FMA double counting. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_port_0, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2

tma_fused_instructions  [Branches;BvBO;Pipeline;TopdownL3;tma_L3_group;tma_light_operations_group]  (scale: 100%)
  expr:      tma_light_operations * UOPS_RETIRED.MACRO_FUSED / UOPS_RETIRED.RETIRE_SLOTS
  threshold: tma_fused_instructions > 0.1 & tma_light_operations > 0.6
  desc:      This metric represents the fraction of slots where the CPU was retiring fused instructions, where one uop can represent multiple contiguous instructions. CMP+JCC or DEC+JCC are common examples of legacy fusions. {([MTL] Note new MOV+OP and Load+OP fusions appear under Other_Light_Ops in MTL!)}

tma_heavy_operations  [Retire;TmaL2;TopdownL2;tma_L2_group;tma_retiring_group]  (scale: 100%; default group: TopdownL2)
  expr:      (UOPS_RETIRED.RETIRE_SLOTS + UOPS_RETIRED.MACRO_FUSED - INST_RETIRED.ANY) / tma_info_thread_slots
  threshold: tma_heavy_operations > 0.1
  desc:      This metric represents the fraction of slots where the CPU was retiring heavy-weight operations, i.e. instructions that require two or more uops or micro-coded sequences. This highly correlates with the uop length of these instructions/sequences. ([ICL+] Note this may overcount due to approximation using indirect events; [ADL+] .)

tma_icache_misses  [BigFootprint;BvBC;FetchLat;IcMiss;TopdownL3;tma_L3_group;tma_fetch_latency_group]  (scale: 100%)
  expr:      (ICACHE_16B.IFDATA_STALL + 2 * cpu@ICACHE_16B.IFDATA_STALL\,cmask\=1\,edge@) / tma_info_thread_clks
  threshold: tma_icache_misses > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15)
  desc:      This metric represents the fraction of cycles the CPU was stalled due to instruction cache misses. Sample with: FRONTEND_RETIRED.L2_MISS_PS;FRONTEND_RETIRED.L1I_MISS_PS

tma_info_bad_spec_branch_misprediction_cost  [Bad;BrMispredicts;tma_issueBM]
  expr:      tma_info_bottleneck_mispredictions * tma_info_thread_slots / BR_MISP_RETIRED.ALL_BRANCHES / 100
  desc:      Branch Misprediction Cost: fraction of TMA slots wasted per non-speculative branch misprediction (retired JEClear). Related metrics: tma_branch_mispredicts, tma_info_bottleneck_mispredictions, tma_mispredicts_resteers

tma_info_bad_spec_spec_clears_ratio  [BrMispredicts]
  expr:      INT_MISC.CLEARS_COUNT / (BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT)
  desc:      Speculative to Retired ratio of all clears (covering mispredicts and nukes)

tma_info_botlnk_l0_core_bound_likely  [Cor;SMT]
  expr:      (100 * (1 - tma_core_bound / tma_ports_utilization if tma_core_bound < tma_ports_utilization else 1) if tma_info_system_smt_2t_utilization > 0.5 else 0)
  threshold: tma_info_botlnk_l0_core_bound_likely > 0.5
  desc:      Probability of a Core Bound bottleneck hidden by SMT-profiling artifacts

tma_info_botlnk_l2_dsb_bandwidth  [DSB;FetchBW;tma_issueFB]
  expr:      100 * (tma_frontend_bound * (tma_fetch_bandwidth / (tma_fetch_bandwidth + tma_fetch_latency)) * (tma_dsb / (tma_dsb + tma_mite)))
  threshold: tma_info_botlnk_l2_dsb_bandwidth > 10
  desc:      Total pipeline cost of DSB (uop cache) hits - subset of the Instruction_Fetch_BW Bottleneck. Related metrics: tma_dsb_switches, tma_fetch_bandwidth, tma_info_botlnk_l2_dsb_misses, tma_info_frontend_dsb_coverage, tma_info_inst_mix_iptb, tma_lcp

tma_info_botlnk_l2_dsb_misses  [DSBmiss;Fed;tma_issueFB]
  expr:      100 * (tma_fetch_latency * tma_dsb_switches / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches) + tma_fetch_bandwidth * tma_mite / (tma_dsb + tma_mite))
  threshold: tma_info_botlnk_l2_dsb_misses > 10
  desc:      Total pipeline cost of DSB (uop cache) misses - subset of the Instruction_Fetch_BW Bottleneck. Related metrics: tma_dsb_switches, tma_fetch_bandwidth, tma_info_botlnk_l2_dsb_bandwidth, tma_info_frontend_dsb_coverage, tma_info_inst_mix_iptb, tma_lcp
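The FP metrics above are plain ratios over retire slots; the recurring "may overcount due to FMA double counting" caveat exists because a single FMA instruction contributes both a multiply and an add to the FP counters. A hedged sketch of the tma_fp_vector_512b expression, with invented counter values:

    # Sketch: 512-bit FP vector uop fraction, per the tma_fp_vector_512b expression.
    # All counter values are invented for illustration.
    c = {
        "FP_ARITH_INST_RETIRED.512B_PACKED_DOUBLE": 2.0e7,
        "FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE": 1.0e7,
        "UOPS_RETIRED.RETIRE_SLOTS": 4.0e8,
    }
    tma_fp_vector_512b = (c["FP_ARITH_INST_RETIRED.512B_PACKED_DOUBLE"]
                          + c["FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE"]) \
                         / c["UOPS_RETIRED.RETIRE_SLOTS"]
    print(f"{tma_fp_vector_512b:.1%} of retire slots were 512-bit FP vector uops")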
tma_info_bottleneck_big_code  [BigFootprint;BvBC;Fed;Frontend;IcMiss;MemoryTLB]
  expr:      100 * tma_fetch_latency * (tma_itlb_misses + tma_icache_misses + tma_unknown_branches) / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches)
  threshold: tma_info_bottleneck_big_code > 20
  desc:      Total pipeline cost of instruction fetch related bottlenecks by large code footprint programs (i-side cache, TLB and BTB misses)

tma_info_bottleneck_branching_overhead  [BvBO;Ret]
  expr:      100 * ((BR_INST_RETIRED.ALL_BRANCHES + 2 * BR_INST_RETIRED.NEAR_CALL + INST_RETIRED.NOP) / tma_info_thread_slots)
  threshold: tma_info_bottleneck_branching_overhead > 5
  desc:      Total pipeline cost of instructions used for program control-flow - a subset of the Retiring category in TMA. Examples include function calls, loops and alignments. (A lower bound)

tma_info_bottleneck_cache_memory_bandwidth  [BvMB;Mem;MemoryBW;Offcore;tma_issueBW]
  expr:      100 * (tma_memory_bound * (tma_dram_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound)) * (tma_mem_bandwidth / (tma_mem_bandwidth + tma_mem_latency)) + tma_memory_bound * (tma_l3_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound)) * (tma_sq_full / (tma_contested_accesses + tma_data_sharing + tma_l3_hit_latency + tma_sq_full)) + tma_memory_bound * (tma_l1_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound)) * (tma_fb_full / (tma_4k_aliasing + tma_dtlb_load + tma_fb_full + tma_l1_hit_latency + tma_lock_latency + tma_split_loads + tma_store_fwd_blk)))
  threshold: tma_info_bottleneck_cache_memory_bandwidth > 20
  desc:      Total pipeline cost of external Memory- or Cache-Bandwidth related bottlenecks. Related metrics: tma_fb_full, tma_info_system_dram_bw_use, tma_mem_bandwidth, tma_sq_full

tma_info_bottleneck_cache_memory_latency  [BvML;Mem;MemoryLat;Offcore;tma_issueLat]
  expr:      100 * (tma_memory_bound * (tma_dram_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound)) * (tma_mem_latency / (tma_mem_bandwidth + tma_mem_latency)) + tma_memory_bound * (tma_l3_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound)) * (tma_l3_hit_latency / (tma_contested_accesses + tma_data_sharing + tma_l3_hit_latency + tma_sq_full)) + tma_memory_bound * tma_l2_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound) + tma_memory_bound * (tma_store_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound)) * (tma_store_latency / (tma_dtlb_store + tma_false_sharing + tma_split_stores + tma_store_latency)) + tma_memory_bound * (tma_l1_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound)) * (tma_l1_hit_latency / (tma_4k_aliasing + tma_dtlb_load + tma_fb_full + tma_l1_hit_latency + tma_lock_latency + tma_split_loads + tma_store_fwd_blk)))
  threshold: tma_info_bottleneck_cache_memory_latency > 20
  desc:      Total pipeline cost of external Memory- or Cache-Latency related bottlenecks. Related metrics: tma_l3_hit_latency, tma_mem_latency

tma_info_bottleneck_instruction_fetch_bw  [BvFB;Fed;FetchBW;Frontend]
  expr:      100 * (tma_frontend_bound - (1 - 10 * tma_microcode_sequencer * tma_other_mispredicts / tma_branch_mispredicts) * tma_fetch_latency * tma_mispredicts_resteers / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches) - tma_microcode_sequencer / (tma_few_uops_instructions + tma_microcode_sequencer) * (tma_assists / tma_microcode_sequencer) * tma_fetch_latency * (tma_ms_switches + tma_branch_resteers * (tma_clears_resteers + tma_mispredicts_resteers * (10 * tma_microcode_sequencer * tma_other_mispredicts / tma_branch_mispredicts)) / (tma_clears_resteers + tma_mispredicts_resteers + tma_unknown_branches)) / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches)) - tma_info_bottleneck_big_code
  threshold: tma_info_bottleneck_instruction_fetch_bw > 20
  desc:      Total pipeline cost of instruction fetch bandwidth related bottlenecks (when the front-end could not sustain operations delivery to the back-end)

tma_info_bottleneck_irregular_overhead  [Bad;BvIO;Cor;Ret;tma_issueMS]
  expr:      100 * (tma_microcode_sequencer / (tma_few_uops_instructions + tma_microcode_sequencer) * (tma_assists / tma_microcode_sequencer) * tma_fetch_latency * (tma_ms_switches + tma_branch_resteers * (tma_clears_resteers + tma_mispredicts_resteers * (10 * tma_microcode_sequencer * tma_other_mispredicts / tma_branch_mispredicts)) / (tma_clears_resteers + tma_mispredicts_resteers + tma_unknown_branches)) / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches) + 10 * tma_microcode_sequencer * tma_other_mispredicts / tma_branch_mispredicts * tma_branch_mispredicts + tma_machine_clears * tma_other_nukes / tma_other_nukes + tma_core_bound * (tma_serializing_operation + tma_core_bound * RS_EVENTS.EMPTY_CYCLES / tma_info_thread_clks * tma_ports_utilized_0) / (tma_divider + tma_ports_utilization + tma_serializing_operation) + tma_microcode_sequencer / (tma_few_uops_instructions + tma_microcode_sequencer) * (tma_assists / tma_microcode_sequencer) * tma_heavy_operations)
  threshold: tma_info_bottleneck_irregular_overhead > 10
  desc:      Total pipeline cost of irregular execution (e.g. FP-assists in HPC, wait time with work imbalance in multithreaded workloads, overhead in system services or virtualized environments). Related metrics: tma_microcode_sequencer, tma_ms_switches

tma_info_bottleneck_memory_data_tlbs  [BvMT;Mem;MemoryTLB;Offcore;tma_issueTLB]
  expr:      100 * (tma_memory_bound * (tma_l1_bound / max(tma_memory_bound, tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound)) * (tma_dtlb_load / max(tma_l1_bound, tma_4k_aliasing + tma_dtlb_load + tma_fb_full + tma_l1_hit_latency + tma_lock_latency + tma_split_loads + tma_store_fwd_blk)) + tma_memory_bound * (tma_store_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound)) * (tma_dtlb_store / (tma_dtlb_store + tma_false_sharing + tma_split_stores + tma_store_latency)))
  threshold: tma_info_bottleneck_memory_data_tlbs > 20
  desc:      Total pipeline cost of Memory Address Translation related bottlenecks (data-side TLBs). Related metrics: tma_dtlb_load, tma_dtlb_store, tma_info_bottleneck_memory_synchronization
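The tma_info_bottleneck_* expressions look intimidating, but every term follows one pattern: walk down the TMA tree multiplying each node by its share among its siblings, then scale by 100 to get a pipeline-cost percentage. A sketch of that pattern with assumed node values (none of these numbers are measured):

    # Sketch of the bottleneck-cost pattern used by tma_info_bottleneck_*:
    # cost = 100 * parent * (child / sum(siblings)) * (grandchild / sum(siblings)) ...
    def share(node, siblings):
        """Fraction of a node among the sibling nodes at the same tree level."""
        return node / sum(siblings)

    tma_memory_bound = 0.30                          # assumed L2 node
    l1, l2, l3, dram, pmm, store = 0.05, 0.03, 0.08, 0.10, 0.0, 0.04  # L3 siblings
    tma_mem_bandwidth, tma_mem_latency = 0.12, 0.06  # L4 siblings under dram_bound

    # e.g. the DRAM-bandwidth term of tma_info_bottleneck_cache_memory_bandwidth:
    cost = 100 * (tma_memory_bound
                  * share(dram, [l1, l2, l3, dram, pmm, store])
                  * share(tma_mem_bandwidth, [tma_mem_bandwidth, tma_mem_latency]))
    print(f"DRAM-bandwidth term: {cost:.1f}% of pipeline slots")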
tma_info_bottleneck_memory_synchronization  [BvMS;Mem;Offcore;tma_issueTLB]
  expr:      100 * (tma_memory_bound * (tma_dram_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound) * (tma_mem_latency / (tma_mem_bandwidth + tma_mem_latency)) * tma_remote_cache / (tma_local_mem + tma_remote_cache + tma_remote_mem) + tma_l3_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound) * (tma_contested_accesses + tma_data_sharing) / (tma_contested_accesses + tma_data_sharing + tma_l3_hit_latency + tma_sq_full) + tma_store_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound) * tma_false_sharing / (tma_dtlb_store + tma_false_sharing + tma_split_stores + tma_store_latency - tma_store_latency)) + tma_machine_clears * (1 - tma_other_nukes / tma_other_nukes))
  threshold: tma_info_bottleneck_memory_synchronization > 10
  desc:      Total pipeline cost of Memory Synchronization related bottlenecks (data transfers and coherency updates across processors). Related metrics: tma_dtlb_load, tma_dtlb_store, tma_info_bottleneck_memory_data_tlbs

tma_info_bottleneck_mispredictions  [Bad;BadSpec;BrMispredicts;BvMP;tma_issueBM]
  expr:      100 * (1 - 10 * tma_microcode_sequencer * tma_other_mispredicts / tma_branch_mispredicts) * (tma_branch_mispredicts + tma_fetch_latency * tma_mispredicts_resteers / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches))
  threshold: tma_info_bottleneck_mispredictions > 20
  desc:      Total pipeline cost of Branch Misprediction related bottlenecks. Related metrics: tma_branch_mispredicts, tma_info_bad_spec_branch_misprediction_cost, tma_mispredicts_resteers

tma_info_bottleneck_useful_work  [BvUW;Ret]
  expr:      100 * (tma_retiring - (BR_INST_RETIRED.ALL_BRANCHES + 2 * BR_INST_RETIRED.NEAR_CALL + INST_RETIRED.NOP) / tma_info_thread_slots - tma_microcode_sequencer / (tma_few_uops_instructions + tma_microcode_sequencer) * (tma_assists / tma_microcode_sequencer) * tma_heavy_operations)
  threshold: tma_info_bottleneck_useful_work > 20
  desc:      Total pipeline cost of "useful operations" - the portion of the Retiring category not covered by Branching_Overhead nor Irregular_Overhead

tma_info_branches_callret  [Bad;Branches]
  expr:      (BR_INST_RETIRED.NEAR_CALL + BR_INST_RETIRED.NEAR_RETURN) / BR_INST_RETIRED.ALL_BRANCHES
  desc:      Fraction of branches that are CALL or RET

tma_info_branches_cond_nt  [Bad;Branches;CodeGen;PGO]
  expr:      BR_INST_RETIRED.NOT_TAKEN / BR_INST_RETIRED.ALL_BRANCHES
  desc:      Fraction of branches that are non-taken conditionals

tma_info_branches_cond_tk  [Bad;Branches;CodeGen;PGO]
  expr:      (BR_INST_RETIRED.CONDITIONAL - BR_INST_RETIRED.NOT_TAKEN) / BR_INST_RETIRED.ALL_BRANCHES
  desc:      Fraction of branches that are taken conditionals

tma_info_branches_jump  [Bad;Branches]
  expr:      (BR_INST_RETIRED.NEAR_TAKEN - (BR_INST_RETIRED.COND - BR_INST_RETIRED.NOT_TAKEN) - 2 * BR_INST_RETIRED.NEAR_CALL) / BR_INST_RETIRED.ALL_BRANCHES
  desc:      Fraction of branches that are unconditional (direct or indirect) jumps

tma_info_core_epc  [Power]
  expr:      UOPS_EXECUTED.THREAD / tma_info_thread_clks
  desc:      Uops Executed per Cycle

tma_info_core_flopc  [Flops;Ret]
  expr:      (FP_ARITH_INST_RETIRED.SCALAR + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + 4 * FP_ARITH_INST_RETIRED.4_FLOPS + 8 * FP_ARITH_INST_RETIRED.8_FLOPS + 16 * FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE) / tma_info_core_core_clks
  desc:      Floating Point Operations Per Cycle

tma_info_core_fp_arith_utilization  [Cor;Flops;HPC]
  expr:      (FP_ARITH_INST_RETIRED.SCALAR + cpu@FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE\,umask\=0xfc@) / (2 * tma_info_core_core_clks)
  desc:      Actual per-core usage of the Floating Point non-X87 execution units (regardless of precision or vector-width). Values > 1 are possible due to ([BDW+] Fused-Multiply Add (FMA) counting - common; [ADL+] use all of ADD/MUL/FMA in Scalar or 128/256-bit vectors - less common)

tma_info_frontend_dsb_coverage  [DSB;Fed;FetchBW;tma_issueFB]
  expr:      IDQ.DSB_UOPS / (IDQ.DSB_UOPS + IDQ.MITE_UOPS + IDQ.MS_UOPS)
  threshold: tma_info_frontend_dsb_coverage < 0.7 & tma_info_thread_ipc / 4 > 0.35
  desc:      Fraction of Uops delivered by the DSB (aka Decoded ICache, or Uop Cache). Related metrics: tma_dsb_switches, tma_fetch_bandwidth, tma_info_botlnk_l2_dsb_bandwidth, tma_info_botlnk_l2_dsb_misses, tma_info_inst_mix_iptb, tma_lcp
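The four tma_info_branches_* fractions partition retired branches into calls/returns, not-taken conditionals, taken conditionals, and unconditional jumps, so they should sum to roughly 1. A sketch with invented counts (BR_INST_RETIRED.COND in the jump expression is assumed here to be the same event as .CONDITIONAL, just an older alias):

    # Sketch: branch-mix fractions per the tma_info_branches_* expressions.
    # Counter values invented for illustration.
    c = {
        "BR_INST_RETIRED.ALL_BRANCHES": 1.0e8,
        "BR_INST_RETIRED.NEAR_CALL": 5.0e6,
        "BR_INST_RETIRED.NEAR_RETURN": 5.0e6,
        "BR_INST_RETIRED.CONDITIONAL": 7.0e7,
        "BR_INST_RETIRED.NOT_TAKEN": 3.0e7,
        "BR_INST_RETIRED.NEAR_TAKEN": 6.5e7,
    }
    total = c["BR_INST_RETIRED.ALL_BRANCHES"]
    callret = (c["BR_INST_RETIRED.NEAR_CALL"] + c["BR_INST_RETIRED.NEAR_RETURN"]) / total
    cond_nt = c["BR_INST_RETIRED.NOT_TAKEN"] / total
    cond_tk = (c["BR_INST_RETIRED.CONDITIONAL"] - c["BR_INST_RETIRED.NOT_TAKEN"]) / total
    jump = (c["BR_INST_RETIRED.NEAR_TAKEN"]
            - (c["BR_INST_RETIRED.CONDITIONAL"] - c["BR_INST_RETIRED.NOT_TAKEN"])
            - 2 * c["BR_INST_RETIRED.NEAR_CALL"]) / total
    print(f"call/ret={callret:.0%} cond_nt={cond_nt:.0%} cond_tk={cond_tk:.0%} jump={jump:.0%}")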
tma_info_frontend_dsb_switch_cost  [DSBmiss]
  expr:      DSB2MITE_SWITCHES.PENALTY_CYCLES / DSB2MITE_SWITCHES.COUNT
  desc:      Average number of cycles of a switch from the DSB fetch-unit to the MITE fetch unit - see the DSB_Switches tree node for details

tma_info_frontend_fetch_upc  [Fed;FetchBW]
  expr:      UOPS_ISSUED.ANY / cpu@UOPS_ISSUED.ANY\,cmask\=1@
  desc:      Average number of Uops issued by the front-end when it issued something

tma_info_frontend_icache_miss_latency  [Fed;FetchLat;IcMiss]
  expr:      ICACHE_16B.IFDATA_STALL / cpu@ICACHE_16B.IFDATA_STALL\,cmask\=1\,edge@ + 2
  desc:      Average latency for L1 instruction cache misses

tma_info_frontend_ipdsb_miss_ret  [DSBmiss;Fed]
  expr:      INST_RETIRED.ANY / FRONTEND_RETIRED.ANY_DSB_MISS
  threshold: tma_info_frontend_ipdsb_miss_ret < 50
  desc:      Instructions per non-speculative DSB miss (lower number means higher occurrence rate)

tma_info_frontend_l2mpki_code  [IcMiss]
  expr:      1e3 * FRONTEND_RETIRED.L2_MISS / INST_RETIRED.ANY
  desc:      L2 cache true code cacheline misses per kilo instruction

tma_info_frontend_l2mpki_code_all  [IcMiss]
  expr:      1e3 * L2_RQSTS.CODE_RD_MISS / INST_RETIRED.ANY
  desc:      L2 cache speculative code cacheline misses per kilo instruction

tma_info_inst_mix_iparith  [Flops;InsType]
  expr:      INST_RETIRED.ANY / (FP_ARITH_INST_RETIRED.SCALAR + cpu@FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE\,umask\=0xfc@)
  threshold: tma_info_inst_mix_iparith < 10
  desc:      Instructions per FP Arithmetic instruction (lower number means higher occurrence rate). Values < 1 are possible due to intentional FMA double counting. Approximated prior to BDW

tma_info_inst_mix_iparith_avx512  [Flops;FpVector;InsType]
  expr:      INST_RETIRED.ANY / (FP_ARITH_INST_RETIRED.512B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE)
  threshold: tma_info_inst_mix_iparith_avx512 < 10
  desc:      Instructions per FP Arithmetic AVX 512-bit instruction (lower number means higher occurrence rate). Values < 1 are possible due to intentional FMA double counting

tma_info_inst_mix_ipflop  [Flops;InsType]
  expr:      INST_RETIRED.ANY / (FP_ARITH_INST_RETIRED.SCALAR + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + 4 * FP_ARITH_INST_RETIRED.4_FLOPS + 8 * FP_ARITH_INST_RETIRED.8_FLOPS + 16 * FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE)
  threshold: tma_info_inst_mix_ipflop < 10
  desc:      Instructions per Floating Point (FP) Operation (lower number means higher occurrence rate)

tma_info_inst_mix_ipload  [InsType]
  expr:      INST_RETIRED.ANY / MEM_INST_RETIRED.ALL_LOADS
  threshold: tma_info_inst_mix_ipload < 3
  desc:      Instructions per Load (lower number means higher occurrence rate)

tma_info_inst_mix_ippause  [Flops;FpVector;InsType]
  expr:      tma_info_inst_mix_instructions / ROB_MISC_EVENTS.PAUSE_INST
  desc:      Instructions per PAUSE (lower number means higher occurrence rate)

tma_info_inst_mix_ipstore  [InsType]
  expr:      INST_RETIRED.ANY / MEM_INST_RETIRED.ALL_STORES
  threshold: tma_info_inst_mix_ipstore < 8
  desc:      Instructions per Store (lower number means higher occurrence rate)

tma_info_inst_mix_ipswpf  [Prefetches]
  expr:      INST_RETIRED.ANY / cpu@SW_PREFETCH_ACCESS.T0\,umask\=0xF@
  threshold: tma_info_inst_mix_ipswpf < 100
  desc:      Instructions per Software prefetch instruction (of any type: NTA/T0/T1/T2/Prefetch) (lower number means higher occurrence rate)

tma_info_inst_mix_iptb  [Branches;Fed;FetchBW;Frontend;PGO;tma_issueFB]
  expr:      INST_RETIRED.ANY / BR_INST_RETIRED.NEAR_TAKEN
  threshold: tma_info_inst_mix_iptb < 9
  desc:      Instructions per taken branch. Related metrics: tma_dsb_switches, tma_fetch_bandwidth, tma_info_botlnk_l2_dsb_bandwidth, tma_info_botlnk_l2_dsb_misses, tma_info_frontend_dsb_coverage, tma_lcp

tma_info_memory_core_l2_evictions_nonsilent_pki  [L2Evicts;Mem;Server]
  expr:      1e3 * L2_LINES_OUT.NON_SILENT / tma_info_inst_mix_instructions
  desc:      Rate of non-silent evictions from the L2 cache per kilo instruction

tma_info_memory_core_l2_evictions_silent_pki  [L2Evicts;Mem;Server]
  expr:      1e3 * L2_LINES_OUT.SILENT / tma_info_inst_mix_instructions
  desc:      Rate of silent evictions from the L2 cache per kilo instruction, where the evicted lines are dropped (no writeback to L3 or memory)

tma_info_memory_fb_hpki  [CacheHits;Mem]
  expr:      1e3 * MEM_LOAD_RETIRED.FB_HIT / INST_RETIRED.ANY
  desc:      Fill Buffer (FB) hits per kilo instructions for retired demand loads (L1D misses that merge into ongoing miss-handling entries)

tma_info_memory_l1mpki  [CacheHits;Mem]
  expr:      1e3 * MEM_LOAD_RETIRED.L1_MISS / INST_RETIRED.ANY
  desc:      L1 cache true misses per kilo instruction for retired demand loads

tma_info_memory_l1mpki_load  [CacheHits;Mem]
  expr:      1e3 * L2_RQSTS.ALL_DEMAND_DATA_RD / INST_RETIRED.ANY
  desc:      L1 cache true misses per kilo instruction for all demand loads (including speculative)

tma_info_memory_l2mpki  [Backend;CacheHits;Mem]
  expr:      1e3 * MEM_LOAD_RETIRED.L2_MISS / INST_RETIRED.ANY
  desc:      L2 cache true misses per kilo instruction for retired demand loads

tma_info_memory_l3_cache_access_bw  [Mem;MemoryBW;Offcore]
  expr:      64 * OFFCORE_REQUESTS.ALL_REQUESTS / 1e9 / duration_time
  desc:      Average per-thread data access bandwidth to the L3 cache [GB / sec]

tma_info_memory_l3mpki  [Mem]
  expr:      1e3 * MEM_LOAD_RETIRED.L3_MISS / INST_RETIRED.ANY
  desc:      L3 cache true misses per kilo instruction for retired demand loads

tma_info_memory_load_miss_real_latency  [Mem;MemoryBound;MemoryLat]
  expr:      L1D_PEND_MISS.PENDING / (MEM_LOAD_RETIRED.L1_MISS + MEM_LOAD_RETIRED.FB_HIT)
  desc:      Actual average latency for L1 data-cache miss demand load operations (in core cycles)

tma_info_memory_mix_uc_load_pki  [Mem]
  expr:      1e3 * MEM_LOAD_MISC_RETIRED.UC / INST_RETIRED.ANY
  desc:      Un-cacheable retired loads per kilo instruction

tma_info_memory_mlp  [Mem;MemoryBW;MemoryBound]
  expr:      L1D_PEND_MISS.PENDING / L1D_PEND_MISS.PENDING_CYCLES
  desc:      Memory-Level-Parallelism (average number of L1 miss demand loads when there is at least one such miss; per-Logical Processor)
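The *mpki family above is uniformly "events per kilo instruction": 1e3 * count / INST_RETIRED.ANY. A sketch, with invented counter values:

    # Sketch: per-kilo-instruction rates, per the tma_info_memory_*mpki expressions.
    # Counter values invented for illustration.
    c = {
        "INST_RETIRED.ANY": 2.0e9,
        "MEM_LOAD_RETIRED.L1_MISS": 4.0e7,
        "MEM_LOAD_RETIRED.L2_MISS": 1.0e7,
        "MEM_LOAD_RETIRED.L3_MISS": 2.0e6,
        "MEM_LOAD_RETIRED.FB_HIT": 1.0e7,
    }

    def pki(event):
        """Events per kilo retired instructions."""
        return 1e3 * c[event] / c["INST_RETIRED.ANY"]

    for name, ev in [("l1mpki", "MEM_LOAD_RETIRED.L1_MISS"),
                     ("l2mpki", "MEM_LOAD_RETIRED.L2_MISS"),
                     ("l3mpki", "MEM_LOAD_RETIRED.L3_MISS"),
                     ("fb_hpki", "MEM_LOAD_RETIRED.FB_HIT")]:
        print(f"{name}: {pki(ev):.2f}")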
tma_info_memory_tlb_code_stlb_mpki  [Fed;MemoryTLB]
  expr:      1e3 * ITLB_MISSES.WALK_COMPLETED / INST_RETIRED.ANY
  desc:      STLB (2nd level TLB) code speculative misses per kilo instruction (misses of any page-size that complete the page walk)

tma_info_memory_tlb_load_stlb_mpki  [Mem;MemoryTLB]
  expr:      1e3 * DTLB_LOAD_MISSES.WALK_COMPLETED / INST_RETIRED.ANY
  desc:      STLB (2nd level TLB) data load speculative misses per kilo instruction (misses of any page-size that complete the page walk)

tma_info_memory_tlb_page_walks_utilization  [Mem;MemoryTLB]
  expr:      (ITLB_MISSES.WALK_PENDING + DTLB_LOAD_MISSES.WALK_PENDING + DTLB_STORE_MISSES.WALK_PENDING + EPT.WALK_PENDING) / (2 * tma_info_core_core_clks)
  threshold: tma_info_memory_tlb_page_walks_utilization > 0.5
  desc:      Utilization of the core's Page Walker(s) serving STLB misses triggered by instruction/Load/Store accesses

tma_info_memory_tlb_store_stlb_mpki  [Mem;MemoryTLB]
  expr:      1e3 * DTLB_STORE_MISSES.WALK_COMPLETED / INST_RETIRED.ANY
  desc:      STLB (2nd level TLB) data store speculative misses per kilo instruction (misses of any page-size that complete the page walk)

tma_info_pipeline_execute  [Cor;Pipeline;PortsUtil;SMT]
  expr:      UOPS_EXECUTED.THREAD / (UOPS_EXECUTED.CORE_CYCLES_GE_1 / 2 if #SMT_on else cpu@UOPS_EXECUTED.THREAD\,cmask\=1@)
  desc:      Instruction-Level-Parallelism (average number of uops executed when there is execution) per core

tma_info_pipeline_fetch_dsb  [Fed;FetchBW]
  expr:      IDQ.DSB_UOPS / IDQ.DSB_CYCLES_ANY
  desc:      Average number of uops fetched from the DSB per cycle

tma_info_pipeline_fetch_mite  [Fed;FetchBW]
  expr:      IDQ.MITE_UOPS / IDQ.MITE_CYCLES
  desc:      Average number of uops fetched from MITE per cycle

tma_info_pipeline_ipassist  [MicroSeq;Pipeline;Ret;Retire]
  expr:      INST_RETIRED.ANY / (FP_ASSIST.ANY + OTHER_ASSISTS.ANY)
  threshold: tma_info_pipeline_ipassist < 100e3
  desc:      Instructions per microcode Assist invocation. See the Assists tree node for details (lower number means higher occurrence rate)

tma_info_system_dram_bw_use  [HPC;MemOffcore;MemoryBW;SoC;tma_issueBW]
  expr:      64 * (UNC_M_CAS_COUNT.RD + UNC_M_CAS_COUNT.WR) / 1e9 / duration_time
  desc:      Average external Memory Bandwidth Use for reads and writes [GB / sec]. Related metrics: tma_fb_full, tma_info_bottleneck_cache_memory_bandwidth, tma_mem_bandwidth, tma_sq_full

tma_info_system_gflops  [Cor;Flops;HPC]
  expr:      (FP_ARITH_INST_RETIRED.SCALAR + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + 4 * FP_ARITH_INST_RETIRED.4_FLOPS + 8 * FP_ARITH_INST_RETIRED.8_FLOPS + 16 * FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE) / 1e9 / duration_time
  desc:      Giga Floating Point Operations Per Second. Aggregated across all supported options of: FP precisions, scalar and vector instructions, vector-width

tma_info_system_io_read_bw  [IoBW;MemOffcore;Server;SoC]
  expr:      (UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART0 + UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART1 + UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART2 + UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART3) * 4 / 1e9 / duration_time
  desc:      Average IO (network or disk) Bandwidth Use for Reads [GB / sec]. Bandwidth of IO reads that are initiated by end device controllers that are requesting memory from the CPU

tma_info_system_io_write_bw  [IoBW;MemOffcore;Server;SoC]
  expr:      (UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART0 + UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART1 + UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART2 + UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART3) * 4 / 1e9 / duration_time
  desc:      Average IO (network or disk) Bandwidth Use for Writes [GB / sec]. Bandwidth of IO writes that are initiated by end device controllers that are writing memory to the CPU

tma_info_system_mem_dram_read_latency  [MemOffcore;MemoryLat;Server;SoC]
  expr:      1e9 * (UNC_M_RPQ_OCCUPANCY / UNC_M_RPQ_INSERTS) / imc_0@event\=0x0@
  desc:      Average latency of data read requests to external DRAM memory [in nanoseconds]. Accounts for demand loads and L1/L2 data-read prefetches

tma_info_system_mem_parallel_reads  [Mem;MemoryBW;SoC]
  expr:      UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD / UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD@thresh\=1@
  desc:      Average number of parallel data read requests to external memory. Accounts for demand loads and L1/L2 prefetches

tma_info_system_mem_pmm_read_latency  [MemOffcore;MemoryLat;Server;SoC]
  expr:      (1e9 * (UNC_M_PMM_RPQ_OCCUPANCY.ALL / UNC_M_PMM_RPQ_INSERTS) / imc_0@event\=0x0@ if #has_pmem > 0 else 0)
  desc:      Average latency of data read requests to external 3D X-Point memory [in nanoseconds]. Accounts for demand loads and L1/L2 data-read prefetches

tma_info_system_mem_read_latency  [Mem;MemoryLat;SoC]
  expr:      1e9 * (UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD / UNC_CHA_TOR_INSERTS.IA_MISS_DRD) / (tma_info_system_socket_clks / duration_time)
  desc:      Average latency of data read requests to external memory (in nanoseconds). Accounts for demand loads and L1/L2 prefetches. ([RKL+] memory-controller only)

tma_info_system_pmm_read_bw  [MemOffcore;MemoryBW;Server;SoC]
  expr:      (64 * UNC_M_PMM_RPQ_INSERTS / 1e9 / duration_time if #has_pmem > 0 else 0)
  desc:      Average 3DXP Memory Bandwidth Use for reads [GB / sec]

tma_info_system_pmm_write_bw  [MemOffcore;MemoryBW;Server;SoC]
  expr:      (64 * UNC_M_PMM_WPQ_INSERTS / 1e9 / duration_time if #has_pmem > 0 else 0)
  desc:      Average 3DXP Memory Bandwidth Use for Writes [GB / sec]

tma_info_system_power_license0_utilization  [Power]
  expr:      (CORE_POWER.LVL0_TURBO_LICENSE / 2 / tma_info_core_core_clks if #SMT_on else CORE_POWER.LVL0_TURBO_LICENSE / tma_info_core_core_clks)
  desc:      Fraction of Core cycles where the core was running with power-delivery for baseline license level 0. This includes non-AVX codes, SSE, AVX 128-bit, and low-current AVX 256-bit codes

tma_info_system_power_license1_utilization  [Power]
  expr:      (CORE_POWER.LVL1_TURBO_LICENSE / 2 / tma_info_core_core_clks if #SMT_on else CORE_POWER.LVL1_TURBO_LICENSE / tma_info_core_core_clks)
  threshold: tma_info_system_power_license1_utilization > 0.5
  desc:      Fraction of Core cycles where the core was running with power-delivery for license level 1. This includes high-current AVX 256-bit instructions as well as low-current AVX 512-bit instructions

tma_info_system_power_license2_utilization  [Power]
  expr:      (CORE_POWER.LVL2_TURBO_LICENSE / 2 / tma_info_core_core_clks if #SMT_on else CORE_POWER.LVL2_TURBO_LICENSE / tma_info_core_core_clks)
  threshold: tma_info_system_power_license2_utilization > 0.5
  desc:      Fraction of Core cycles where the core was running with power-delivery for license level 2 (introduced in SKX). This includes high-current AVX 512-bit instructions
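tma_info_system_dram_bw_use converts memory-controller CAS counts into GB/s: each CAS command transfers one 64-byte cache line. A sketch with assumed counts:

    # Sketch: DRAM bandwidth use, per the tma_info_system_dram_bw_use expression.
    # 64 bytes per CAS (one cache line); counter values and duration invented.
    UNC_M_CAS_COUNT_RD = 1.5e9
    UNC_M_CAS_COUNT_WR = 0.5e9
    duration_time = 1.0  # seconds

    dram_bw_gbs = 64 * (UNC_M_CAS_COUNT_RD + UNC_M_CAS_COUNT_WR) / 1e9 / duration_time
    print(f"DRAM bandwidth use: {dram_bw_gbs:.1f} GB/s")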
tma_info_system_socket_clks  [SoC]
  expr:      cha_0@event\=0x0@
  desc:      Socket actual clocks when any core is active on that socket

tma_itlb_misses  [BigFootprint;BvBC;FetchLat;MemoryTLB;TopdownL3;tma_L3_group;tma_fetch_latency_group]  (scale: 100%)
  expr:      ICACHE_TAG.STALLS / tma_info_thread_clks
  threshold: tma_itlb_misses > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15)
  desc:      This metric represents the fraction of cycles the CPU was stalled due to Instruction TLB (ITLB) misses. Sample with: FRONTEND_RETIRED.STLB_MISS_PS;FRONTEND_RETIRED.ITLB_MISS_PS

tma_l1_hit_latency  [BvML;MemoryLat;TopdownL4;tma_L4_group;tma_l1_bound_group]  (scale: 100%)
  expr:      min(2 * (MEM_INST_RETIRED.ALL_LOADS - MEM_LOAD_RETIRED.FB_HIT - MEM_LOAD_RETIRED.L1_MISS) * 20 / 100, max(CYCLE_ACTIVITY.CYCLES_MEM_ANY - CYCLE_ACTIVITY.CYCLES_L1D_MISS, 0)) / tma_info_thread_clks
  threshold: tma_l1_hit_latency > 0.1 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))
  desc:      This metric roughly estimates the fraction of cycles with demand load accesses that hit the L1 cache. The short latency of the L1 data cache may be exposed in pointer-chasing memory access patterns, as an example. Sample with: MEM_LOAD_RETIRED.L1_HIT

tma_l2_bound  [BvML;CacheHits;MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_memory_bound_group]  (scale: 100%)
  expr:      MEM_LOAD_RETIRED.L2_HIT * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) / (MEM_LOAD_RETIRED.L2_HIT * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) + cpu@L1D_PEND_MISS.FB_FULL\,cmask\=1@) * ((CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS) / tma_info_thread_clks)
  threshold: tma_l2_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)
  desc:      This metric estimates how often the CPU was stalled due to L2 cache accesses by loads. Avoiding cache misses (i.e. L1 misses/L2 hits) can improve the latency and increase performance. Sample with: MEM_LOAD_RETIRED.L2_HIT_PS

tma_l3_bound  [CacheHits;MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_memory_bound_group]  (scale: 100%)
  expr:      (CYCLE_ACTIVITY.STALLS_L2_MISS - CYCLE_ACTIVITY.STALLS_L3_MISS) / tma_info_thread_clks
  threshold: tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)
  desc:      This metric estimates how often the CPU was stalled due to load accesses to the L3 cache, or contended with a sibling Core. Avoiding cache misses (i.e. L2 misses/L3 hits) can improve the latency and increase performance. Sample with: MEM_LOAD_RETIRED.L3_HIT_PS

tma_l3_hit_latency  [BvML;MemoryLat;TopdownL4;tma_L4_group;tma_issueLat;tma_l3_bound_group]  (scale: 100%)
  expr:      17 * tma_info_system_core_frequency * (MEM_LOAD_RETIRED.L3_HIT * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS / 2)) / tma_info_thread_clks
  threshold: tma_l3_hit_latency > 0.1 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))
  desc:      This metric estimates the fraction of cycles with demand load accesses that hit the L3 cache under unloaded scenarios (possibly L3 latency limited). Avoiding private cache misses (i.e. L2 misses/L3 hits) will improve the latency, reduce contention with sibling physical cores and increase performance. Note the value of this node may overlap with its siblings. Sample with: MEM_LOAD_RETIRED.L3_HIT_PS. Related metrics: tma_info_bottleneck_cache_memory_latency, tma_mem_latency

tma_lcp  [FetchLat;TopdownL3;tma_L3_group;tma_fetch_latency_group;tma_issueFB]  (scale: 100%)
  expr:      DECODE.LCP / tma_info_thread_clks
  threshold: tma_lcp > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15)
  desc:      This metric represents the fraction of cycles the CPU was stalled due to Length Changing Prefixes (LCPs). Using proper compiler flags, or the Intel Compiler by default, will certainly avoid this. #Link: Optimization Guide about LCP BKMs. Related metrics: tma_dsb_switches, tma_fetch_bandwidth, tma_info_botlnk_l2_dsb_bandwidth, tma_info_botlnk_l2_dsb_misses, tma_info_frontend_dsb_coverage, tma_info_inst_mix_iptb

tma_load_op_utilization  [TopdownL5;tma_L5_group;tma_ports_utilized_3m_group]  (scale: 100%)
  expr:      (UOPS_DISPATCHED_PORT.PORT_2 + UOPS_DISPATCHED_PORT.PORT_3 + UOPS_DISPATCHED_PORT.PORT_7 - UOPS_DISPATCHED_PORT.PORT_4) / (2 * tma_info_core_core_clks)
  threshold: tma_load_op_utilization > 0.6
  desc:      This metric represents the Core fraction of cycles the CPU dispatched uops on execution ports for Load operations. Sample with: UOPS_DISPATCHED.PORT_2_3

tma_load_stlb_hit  [MemoryTLB;TopdownL5;tma_L5_group;tma_dtlb_load_group]  (scale: 100%)
  expr:      tma_dtlb_load - tma_load_stlb_miss
  threshold: tma_load_stlb_hit > 0.05 & (tma_dtlb_load > 0.1 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)))
  desc:      This metric roughly estimates the fraction of cycles where the (first level) DTLB was missed by load accesses that later on hit in the second-level TLB (STLB)

tma_load_stlb_miss  [MemoryTLB;TopdownL5;tma_L5_group;tma_dtlb_load_group]  (scale: 100%)
  expr:      DTLB_LOAD_MISSES.WALK_ACTIVE / tma_info_thread_clks
  threshold: tma_load_stlb_miss > 0.05 & (tma_dtlb_load > 0.1 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)))
  desc:      This metric estimates the fraction of cycles where the Second-level TLB (STLB) was missed by load accesses, performing a hardware page walk

tma_local_mem  [Server;TopdownL5;tma_L5_group;tma_mem_latency_group]  (scale: 100%)
  expr:      59.5 * tma_info_system_core_frequency * MEM_LOAD_L3_MISS_RETIRED.LOCAL_DRAM * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS / 2) / tma_info_thread_clks
  threshold: tma_local_mem > 0.1 & (tma_mem_latency > 0.1 & (tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)))
  desc:      This metric estimates the fraction of cycles while the memory subsystem was handling loads from local memory. Caching will improve the latency and increase performance. Sample with: MEM_LOAD_L3_MISS_RETIRED.LOCAL_DRAM

tma_lock_latency  [Offcore;TopdownL4;tma_L4_group;tma_issueRFO;tma_l1_bound_group]  (scale: 100%)
  expr:      (12 * max(0, MEM_INST_RETIRED.LOCK_LOADS - L2_RQSTS.ALL_RFO) + MEM_INST_RETIRED.LOCK_LOADS / MEM_INST_RETIRED.ALL_STORES * (11 * L2_RQSTS.RFO_HIT + min(CPU_CLK_UNHALTED.THREAD, OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_RFO))) / tma_info_thread_clks
  threshold: tma_lock_latency > 0.2 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))
  desc:      This metric represents the fraction of cycles the CPU spent handling cache misses due to lock operations. Due to the microarchitecture handling of locks, they are classified as L1_Bound regardless of what memory source satisfied them. Sample with: MEM_INST_RETIRED.LOCK_LOADS. Related metrics: tma_store_latency

tma_mem_bandwidth  [BvMS;MemoryBW;Offcore;TopdownL4;tma_L4_group;tma_dram_bound_group;tma_issueBW]  (scale: 100%)
  expr:      min(CPU_CLK_UNHALTED.THREAD, cpu@OFFCORE_REQUESTS_OUTSTANDING.ALL_DATA_RD\,cmask\=4@) / tma_info_thread_clks
  threshold: tma_mem_bandwidth > 0.2 & (tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))
  desc:      This metric estimates the fraction of cycles where the core's performance was likely hurt due to approaching bandwidth limits of external memory - DRAM ([SPR-HBM] and/or HBM). The underlying heuristic assumes that similar off-core traffic is generated by all IA cores. This metric does not aggregate non-data-read requests by this logical processor; requests from other IA Logical Processors/Physical Cores/sockets; or other non-IA devices like a GPU; hence the maximum external memory bandwidth limits may or may not be approached when this metric is flagged (see Uncore counters for that). Related metrics: tma_fb_full, tma_info_bottleneck_cache_memory_bandwidth, tma_info_system_dram_bw_use, tma_sq_full

tma_mem_latency  [BvML;MemoryLat;Offcore;TopdownL4;tma_L4_group;tma_dram_bound_group;tma_issueLat]  (scale: 100%)
  expr:      min(CPU_CLK_UNHALTED.THREAD, OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DATA_RD) / tma_info_thread_clks - tma_mem_bandwidth
  threshold: tma_mem_latency > 0.1 & (tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))
  desc:      This metric estimates the fraction of cycles where performance was likely hurt due to latency from external memory - DRAM ([SPR-HBM] and/or HBM). This metric does not aggregate requests from other Logical Processors/Physical Cores/sockets (see Uncore counters for that). Related metrics: tma_info_bottleneck_cache_memory_latency, tma_l3_hit_latency

tma_memory_bound  [Backend;TmaL2;TopdownL2;tma_L2_group;tma_backend_bound_group]  (scale: 100%; default group: TopdownL2)
  expr:      (CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIVITY.BOUND_ON_STORES) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORTS_UTIL + tma_retiring * EXE_ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES) * tma_backend_bound
  threshold: tma_memory_bound > 0.2 & tma_backend_bound > 0.2
  desc:      This metric represents the fraction of slots where the Memory subsystem within the Backend was a bottleneck. Memory Bound estimates the fraction of slots where the pipeline is likely stalled due to demand load or store instructions. This accounts mainly for (1) non-completed in-flight memory demand loads, which coincide with execution-unit starvation, in addition to (2) cases where stores could impose backpressure on the pipeline when many of them get buffered at the same time (less common out of the two).
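tma_mem_bandwidth and tma_mem_latency split the same DRAM stall pool: cycles with at least four outstanding data reads (the cmask=4 variant) count as bandwidth-limited, and the remaining cycles with any outstanding read count as latency-limited, which is why tma_mem_latency subtracts tma_mem_bandwidth. A sketch with invented values:

    # Sketch: the bandwidth/latency split of DRAM stalls,
    # per tma_mem_bandwidth and tma_mem_latency. Values invented.
    clks = 1.0e9                 # CPU_CLK_UNHALTED.THREAD
    cycles_data_rd = 4.0e8       # OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DATA_RD
    cycles_data_rd_ge4 = 2.5e8   # same event with cmask=4 (>= 4 reads in flight)

    tma_mem_bandwidth = min(clks, cycles_data_rd_ge4) / clks
    tma_mem_latency = min(clks, cycles_data_rd) / clks - tma_mem_bandwidth
    print(f"bandwidth-limited: {tma_mem_bandwidth:.1%}, "
          f"latency-limited: {tma_mem_latency:.1%}")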
tma_memory_operations  [Pipeline;TopdownL3;tma_L3_group;tma_light_operations_group]  (scale: 100%)
  expr:      tma_light_operations * MEM_INST_RETIRED.ANY / INST_RETIRED.ANY
  threshold: tma_memory_operations > 0.1 & tma_light_operations > 0.6
  desc:      This metric represents the fraction of slots where the CPU was retiring memory operations -- uops for memory load or store accesses

tma_microcode_sequencer  [MicroSeq;TopdownL3;tma_L3_group;tma_heavy_operations_group;tma_issueMC;tma_issueMS]  (scale: 100%)
  expr:      UOPS_RETIRED.RETIRE_SLOTS / UOPS_ISSUED.ANY * IDQ.MS_UOPS / tma_info_thread_slots
  threshold: tma_microcode_sequencer > 0.05 & tma_heavy_operations > 0.1
  desc:      This metric represents the fraction of slots the CPU was retiring uops fetched by the Microcode Sequencer (MS) unit. The MS is used for CISC instructions not supported by the default decoders (like repeat move strings, or CPUID), or by microcode assists used to address some operation modes (like in Floating Point assists). These cases can often be avoided. Sample with: IDQ.MS_UOPS. Related metrics: tma_clears_resteers, tma_info_bottleneck_irregular_overhead, tma_l1_bound, tma_machine_clears, tma_ms_switches

tma_mispredicts_resteers  [BadSpec;BrMispredicts;BvMP;TopdownL4;tma_L4_group;tma_branch_resteers_group;tma_issueBM]  (scale: 100%)
  expr:      BR_MISP_RETIRED.ALL_BRANCHES / (BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT) * INT_MISC.CLEAR_RESTEER_CYCLES / tma_info_thread_clks
  threshold: tma_mispredicts_resteers > 0.05 & (tma_branch_resteers > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15))
  desc:      This metric represents the fraction of cycles the CPU was stalled due to Branch Resteers as a result of Branch Misprediction at the execution stage. Sample with: INT_MISC.CLEAR_RESTEER_CYCLES. Related metrics: tma_branch_mispredicts, tma_info_bad_spec_branch_misprediction_cost, tma_info_bottleneck_mispredictions

tma_mixing_vectors  [TopdownL5;tma_L5_group;tma_issueMV;tma_ports_utilized_0_group]  (scale: 100%)
  expr:      UOPS_ISSUED.VECTOR_WIDTH_MISMATCH / UOPS_ISSUED.ANY
  threshold: tma_mixing_vectors > 0.05
  desc:      This metric estimates the penalty in terms of percentage of ([SKL+] injected blend uops out of all Uops Issued -- the Count Domain; [ADL+] cycles). Usually a Mixing_Vectors over 5% is worth investigating. Read more in Appendix B1 of the Optimizations Guide for this topic. Related metrics: tma_ms_switches

tma_ms_switches  [FetchLat;MicroSeq;TopdownL3;tma_L3_group;tma_fetch_latency_group;tma_issueMC;tma_issueMS;tma_issueMV;tma_issueSO]  (scale: 100%)
  expr:      2 * IDQ.MS_SWITCHES / tma_info_thread_clks
  threshold: tma_ms_switches > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15)
  desc:      This metric estimates the fraction of cycles when the CPU was stalled due to switches of uop delivery to the Microcode Sequencer (MS). Commonly used instructions are optimized for delivery by the DSB (decoded i-cache) or MITE (legacy instruction decode) pipelines. Certain operations cannot be handled natively by the execution pipeline and must be performed by microcode (small programs injected into the execution stream). Switching to the MS too often can negatively impact performance. The MS is designated to deliver long uop flows required by CISC instructions like CPUID, or uncommon conditions like Floating Point Assists when dealing with Denormals. Sample with: IDQ.MS_SWITCHES. Related metrics: tma_clears_resteers, tma_info_bottleneck_irregular_overhead, tma_l1_bound, tma_machine_clears, tma_microcode_sequencer, tma_mixing_vectors, tma_serializing_operation

tma_non_fused_branches  [Branches;BvBO;Pipeline;TopdownL3;tma_L3_group;tma_light_operations_group]  (scale: 100%)
  expr:      tma_light_operations * (BR_INST_RETIRED.ALL_BRANCHES - UOPS_RETIRED.MACRO_FUSED) / UOPS_RETIRED.RETIRE_SLOTS
  threshold: tma_non_fused_branches > 0.1 & tma_light_operations > 0.6
  desc:      This metric represents the fraction of slots where the CPU was retiring branch instructions that were not fused. Non-conditional branches like direct JMP or CALL would count here. Can be used to examine fusible conditional jumps that were not fused

tma_nop_instructions  [BvBO;Pipeline;TopdownL4;tma_L4_group;tma_other_light_ops_group]  (scale: 100%)
  expr:      tma_light_operations * INST_RETIRED.NOP / UOPS_RETIRED.RETIRE_SLOTS
  threshold: tma_nop_instructions > 0.1 & (tma_other_light_ops > 0.3 & tma_light_operations > 0.6)
  desc:      This metric represents the fraction of slots where the CPU was retiring NOP (no op) instructions. Compilers often use NOPs for certain address alignments - e.g. the start address of a function or loop body. Sample with: INST_RETIRED.NOP

tma_other_light_ops  [Pipeline;TopdownL3;tma_L3_group;tma_light_operations_group]  (scale: 100%)
  expr:      max(0, tma_light_operations - (tma_fp_arith + tma_memory_operations + tma_fused_instructions + tma_non_fused_branches))
  threshold: tma_other_light_ops > 0.3 & tma_light_operations > 0.6
  desc:      This metric represents the remaining light uops fraction the CPU has executed - remaining means not covered by the other sibling nodes. May undercount due to FMA double counting

tma_other_mispredicts  [BrMispredicts;BvIO;TopdownL3;tma_L3_group;tma_branch_mispredicts_group]  (scale: 100%)
  expr:      max(tma_branch_mispredicts * (1 - BR_MISP_RETIRED.ALL_BRANCHES / (INT_MISC.CLEARS_COUNT - MACHINE_CLEARS.COUNT)), 0.0001)
  threshold: tma_other_mispredicts > 0.05 & (tma_branch_mispredicts > 0.1 & tma_bad_speculation > 0.15)
  desc:      This metric estimates the fraction of slots the CPU was stalled due to other cases of misprediction (non-retired x86 branches or other types)

tma_other_nukes  [BvIO;Machine_Clears;TopdownL3;tma_L3_group;tma_machine_clears_group]  (scale: 100%)
  expr:      max(tma_machine_clears * (1 - MACHINE_CLEARS.MEMORY_ORDERING / MACHINE_CLEARS.COUNT), 0.0001)
  threshold: tma_other_nukes > 0.05 & (tma_machine_clears > 0.1 & tma_bad_speculation > 0.15)
  desc:      This metric represents the fraction of slots the CPU has wasted due to Nukes (Machine Clears) not related to memory ordering

tma_pmm_bound  [MemoryBound;Server;TmaL3mem;TopdownL3;tma_L3_group;tma_memory_bound_group]  (scale: 100%)
  expr:      (((1 - (19 * (MEM_LOAD_L3_MISS_RETIRED.REMOTE_DRAM * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS)) + 10 * (MEM_LOAD_L3_MISS_RETIRED.LOCAL_DRAM * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) + MEM_LOAD_L3_MISS_RETIRED.REMOTE_FWD * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) + MEM_LOAD_L3_MISS_RETIRED.REMOTE_HITM * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS))) / (19 * (MEM_LOAD_L3_MISS_RETIRED.REMOTE_DRAM * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS)) + 10 * (MEM_LOAD_L3_MISS_RETIRED.LOCAL_DRAM * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) + MEM_LOAD_L3_MISS_RETIRED.REMOTE_FWD * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) + MEM_LOAD_L3_MISS_RETIRED.REMOTE_HITM * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS)) + (25 * (MEM_LOAD_RETIRED.LOCAL_PMM * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS)) + 33 * (MEM_LOAD_L3_MISS_RETIRED.REMOTE_PMM * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS))))) * (CYCLE_ACTIVITY.STALLS_L3_MISS / tma_info_thread_clks + (CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS) / tma_info_thread_clks - tma_l2_bound) if 1e6 * (MEM_LOAD_L3_MISS_RETIRED.REMOTE_PMM + MEM_LOAD_RETIRED.LOCAL_PMM) > MEM_LOAD_RETIRED.L1_MISS else 0) if #has_pmem > 0 else 0)
  threshold: tma_pmm_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)
  desc:      This metric roughly estimates (based on idle latencies) how often the CPU was stalled on accesses to external 3D-Xpoint (Crystal Ridge, a.k.a. IXP) memory by loads. PMM stands for Persistent Memory Module

tma_ports_utilization  [PortsUtil;TopdownL3;tma_L3_group;tma_core_bound_group]  (scale: 100%)
  expr:      ((tma_ports_utilized_0 * tma_info_thread_clks + (EXE_ACTIVITY.1_PORTS_UTIL + tma_retiring * EXE_ACTIVITY.2_PORTS_UTIL)) / tma_info_thread_clks if ARITH.DIVIDER_ACTIVE < CYCLE_ACTIVITY.STALLS_TOTAL - CYCLE_ACTIVITY.STALLS_MEM_ANY else (EXE_ACTIVITY.1_PORTS_UTIL + tma_retiring * EXE_ACTIVITY.2_PORTS_UTIL) / tma_info_thread_clks)
  threshold: tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2)
  desc:      This metric estimates the fraction of cycles the CPU performance was potentially limited due to Core computation issues (non-divider-related). Two distinct categories can be attributed to this metric: (1) heavy data-dependency among contiguous instructions would manifest in this metric - such cases are often referred to as low Instruction Level Parallelism (ILP); (2) contention on some hardware execution unit other than the Divider, for example when there are too many multiply operations.
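tma_other_light_ops, tma_other_mispredicts and tma_other_nukes are residual nodes: whatever the named siblings do not explain, clamped so event skew cannot push them negative (the mispredict/nuke variants floor at 0.0001 rather than 0, presumably so the node stays visible in tools). A sketch of both clamping patterns, with assumed node values:

    # Sketch: residual "other" nodes, per tma_other_light_ops / tma_other_mispredicts.
    # All values invented for illustration.
    tma_light_operations = 0.55
    named_children = [0.20, 0.15, 0.05, 0.10]  # fp_arith, memory_ops, fused, non_fused

    # Residual clamped at zero (tma_other_light_ops pattern):
    tma_other_light_ops = max(0.0, tma_light_operations - sum(named_children))

    # Residual floored at 0.0001 (tma_other_mispredicts pattern):
    tma_branch_mispredicts = 0.12
    explained_fraction = 0.95   # share attributed to retired x86 branches (assumed)
    tma_other_mispredicts = max(tma_branch_mispredicts * (1 - explained_fraction), 0.0001)

    print(tma_other_light_ops, tma_other_mispredicts)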
tma_ports_utilized_0  [PortsUtil;TopdownL4;tma_L4_group;tma_ports_utilization_group]  (scale: 100%)
  expr:      EXE_ACTIVITY.EXE_BOUND_0_PORTS / tma_info_thread_clks
  threshold: tma_ports_utilized_0 > 0.2 & (tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))
  desc:      This metric represents the fraction of cycles the CPU executed no uops on any execution port (Logical Processor cycles since ICL, Physical Core cycles otherwise). Long-latency instructions like divides may contribute to this metric

tma_ports_utilized_1  [PortsUtil;TopdownL4;tma_L4_group;tma_issueL1;tma_ports_utilization_group]  (scale: 100%)
  expr:      ((UOPS_EXECUTED.CORE_CYCLES_GE_1 - UOPS_EXECUTED.CORE_CYCLES_GE_2) / 2 if #SMT_on else EXE_ACTIVITY.1_PORTS_UTIL) / tma_info_core_core_clks
  threshold: tma_ports_utilized_1 > 0.2 & (tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))
  desc:      This metric represents the fraction of cycles where the CPU executed a total of 1 uop per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise). This can be due to heavy data-dependency among software instructions, or oversubscribing a particular hardware resource. In some other cases with high 1_Port_Utilized and L1_Bound, this metric can point to an L1 data-cache latency bottleneck that may not necessarily manifest with complete execution starvation (due to the short L1 latency, e.g. walking a linked list) - looking at the assembly can be helpful. Related metrics: tma_l1_bound

tma_ports_utilized_2  [PortsUtil;TopdownL4;tma_L4_group;tma_issue2P;tma_ports_utilization_group]  (scale: 100%)
  expr:      ((UOPS_EXECUTED.CORE_CYCLES_GE_2 - UOPS_EXECUTED.CORE_CYCLES_GE_3) / 2 if #SMT_on else EXE_ACTIVITY.2_PORTS_UTIL) / tma_info_core_core_clks
  threshold: tma_ports_utilized_2 > 0.15 & (tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))
  desc:      This metric represents the fraction of cycles the CPU executed a total of 2 uops per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise). Loop Vectorization (most compilers feature auto-vectorization options today) reduces pressure on the execution ports as multiple elements are calculated with the same uop. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_port_0, tma_port_1, tma_port_5, tma_port_6

tma_ports_utilized_3m  [BvCB;PortsUtil;TopdownL4;tma_L4_group;tma_ports_utilization_group]  (scale: 100%)
  expr:      (UOPS_EXECUTED.CORE_CYCLES_GE_3 / 2 if #SMT_on else UOPS_EXECUTED.CORE_CYCLES_GE_3) / tma_info_core_core_clks
  threshold: tma_ports_utilized_3m > 0.4 & (tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))
  desc:      This metric represents the fraction of cycles the CPU executed a total of 3 or more uops per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise)

tma_remote_cache  [Offcore;Server;Snoop;TopdownL5;tma_L5_group;tma_issueSyncxn;tma_mem_latency_group]  (scale: 100%)
  expr:      (89.5 * tma_info_system_core_frequency * MEM_LOAD_L3_MISS_RETIRED.REMOTE_HITM + 89.5 * tma_info_system_core_frequency * MEM_LOAD_L3_MISS_RETIRED.REMOTE_FWD) * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS / 2) / tma_info_thread_clks
  threshold: tma_remote_cache > 0.05 & (tma_mem_latency > 0.1 & (tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)))
  desc:      This metric estimates the fraction of cycles while the memory subsystem was handling loads from remote cache in other sockets, including synchronization issues. This is often caused by non-optimal NUMA allocations. #link to NUMA article. Sample with: MEM_LOAD_L3_MISS_RETIRED.REMOTE_HITM_PS;MEM_LOAD_L3_MISS_RETIRED.REMOTE_FWD_PS. Related metrics: tma_contested_accesses, tma_data_sharing, tma_false_sharing, tma_machine_clears

tma_remote_mem  [Server;Snoop;TopdownL5;tma_L5_group;tma_mem_latency_group]  (scale: 100%)
  expr:      127 * tma_info_system_core_frequency * MEM_LOAD_L3_MISS_RETIRED.REMOTE_DRAM * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS / 2) / tma_info_thread_clks
  threshold: tma_remote_mem > 0.1 & (tma_mem_latency > 0.1 & (tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)))
  desc:      This metric estimates the fraction of cycles while the memory subsystem was handling loads from remote memory. This is often caused by non-optimal NUMA allocations. #link to NUMA article. Sample with: MEM_LOAD_L3_MISS_RETIRED.REMOTE_DRAM_PS

tma_serializing_operation  [BvIO;PortsUtil;TopdownL3;tma_L3_group;tma_core_bound_group;tma_issueSO]  (scale: 100%)
  expr:      PARTIAL_RAT_STALLS.SCOREBOARD / tma_info_thread_clks
  threshold: tma_serializing_operation > 0.1 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2)
  desc:      This metric represents the fraction of cycles the CPU issue-pipeline was stalled due to serializing operations. Instructions like CPUID, WRMSR or LFENCE serialize the out-of-order execution, which may limit performance. Sample with: PARTIAL_RAT_STALLS.SCOREBOARD. Related metrics: tma_ms_switches

tma_slow_pause  [TopdownL4;tma_L4_group;tma_serializing_operation_group]  (scale: 100%)
  expr:      40 * ROB_MISC_EVENTS.PAUSE_INST / tma_info_thread_clks
  threshold: tma_slow_pause > 0.05 & (tma_serializing_operation > 0.1 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))
  desc:      This metric represents the fraction of cycles the CPU was stalled due to PAUSE instructions. Sample with: MISC_RETIRED.PAUSE_INST
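Several expressions above fork on #SMT_on: the UOPS_EXECUTED.CORE_* events count per physical core, so with two hardware threads active the count is halved to get a per-core-cycle figure, while without SMT a per-thread event is used directly. A sketch of that pattern, with invented values:

    # Sketch: the "#SMT_on" pattern in tma_ports_utilized_1/2/3m. Values invented.
    smt_on = True
    core_cycles_ge_1 = 6.0e8   # UOPS_EXECUTED.CORE_CYCLES_GE_1 (per physical core)
    core_cycles_ge_2 = 3.0e8   # UOPS_EXECUTED.CORE_CYCLES_GE_2
    exe_1_ports_util = 2.0e8   # EXE_ACTIVITY.1_PORTS_UTIL (per-thread fallback)
    core_clks = 1.0e9          # tma_info_core_core_clks

    numer = ((core_cycles_ge_1 - core_cycles_ge_2) / 2) if smt_on else exe_1_ports_util
    tma_ports_utilized_1 = numer / core_clks
    print(f"{tma_ports_utilized_1:.1%} of core cycles executed exactly one uop")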
Sample with: MISC_RETIRED.PAUSE_INST100%00tma_split_loadsTopdownL4;tma_L4_group;tma_l1_bound_grouptma_info_memory_load_miss_real_latency * LD_BLOCKS.NO_SR / tma_info_thread_clkstma_split_loads > 0.2 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))This metric estimates fraction of cycles handling memory load split accesses - load that cross 64-byte cache line boundaryThis metric estimates fraction of cycles handling memory load split accesses - load that cross 64-byte cache line boundary. Sample with: MEM_INST_RETIRED.SPLIT_LOADS_PS100%02tma_split_storesTopdownL4;tma_L4_group;tma_issueSpSt;tma_store_bound_groupMEM_INST_RETIRED.SPLIT_STORES / tma_info_core_core_clkstma_split_stores > 0.2 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))This metric represents rate of split store accessesThis metric represents rate of split store accesses.  Consider aligning your data to the 64-byte cache line granularity. Sample with: MEM_INST_RETIRED.SPLIT_STORES_PS. Related metrics: tma_port_4100%00tma_sq_fullBvMS;MemoryBW;Offcore;TopdownL4;tma_L4_group;tma_issueBW;tma_l3_bound_group(OFFCORE_REQUESTS_BUFFER.SQ_FULL / 2 if #SMT_on else OFFCORE_REQUESTS_BUFFER.SQ_FULL) / tma_info_core_core_clkstma_sq_full > 0.3 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))This metric measures fraction of cycles where the Super Queue (SQ) was full taking into account all request-types and both hardware SMT threads (Logical Processors)This metric measures fraction of cycles where the Super Queue (SQ) was full taking into account all request-types and both hardware SMT threads (Logical Processors). Related metrics: tma_fb_full, tma_info_bottleneck_cache_memory_bandwidth, tma_info_system_dram_bw_use, tma_mem_bandwidth100%00tma_store_boundMemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_memory_bound_groupEXE_ACTIVITY.BOUND_ON_STORES / tma_info_thread_clkstma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)This metric estimates how often CPU was stalled  due to RFO store memory accesses; RFO store issue a read-for-ownership request before the writeThis metric estimates how often CPU was stalled  due to RFO store memory accesses; RFO store issue a read-for-ownership request before the write. Even though store accesses do not typically stall out-of-order CPUs; there are few cases where stores can lead to actual stalls. This metric will be flagged should RFO stores be a bottleneck. Sample with: MEM_INST_RETIRED.ALL_STORES_PS100%00tma_store_latencyBvML;MemoryLat;Offcore;TopdownL4;tma_L4_group;tma_issueRFO;tma_issueSL;tma_store_bound_group(L2_RQSTS.RFO_HIT * 11 * (1 - MEM_INST_RETIRED.LOCK_LOADS / MEM_INST_RETIRED.ALL_STORES) + (1 - MEM_INST_RETIRED.LOCK_LOADS / MEM_INST_RETIRED.ALL_STORES) * min(CPU_CLK_UNHALTED.THREAD, OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_RFO)) / tma_info_thread_clkstma_store_latency > 0.1 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))This metric estimates fraction of cycles the CPU spent handling L1D store missesThis metric estimates fraction of cycles the CPU spent handling L1D store misses. Store accesses usually less impact out-of-order core performance; however; holding resources for longer time can lead into undesired implications (e.g. contention on L1D fill-buffer entries - see FB_Full). 
Related metrics: tma_fb_full, tma_lock_latency100%02tma_store_stlb_missMemoryTLB;TopdownL5;tma_L5_group;tma_dtlb_store_groupDTLB_STORE_MISSES.WALK_ACTIVE / tma_info_core_core_clkstma_store_stlb_miss > 0.05 & (tma_dtlb_store > 0.05 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)))This metric estimates the fraction of cycles where the STLB was missed by store accesses, performing a hardware page walk100%00tma_unknown_branchesBigFootprint;BvBC;FetchLat;TopdownL4;tma_L4_group;tma_branch_resteers_group9 * BACLEARS.ANY / tma_info_thread_clkstma_unknown_branches > 0.05 & (tma_branch_resteers > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15))This metric represents fraction of cycles the CPU was stalled due to new branch address clearsThis metric represents fraction of cycles the CPU was stalled due to new branch address clears. These are fetched branches the Branch Prediction Unit was unable to recognize (e.g. first time the branch is fetched or hitting BTB capacity limit) hence called Unknown Branches. Sample with: BACLEARS.ANY100%00tma_x87_useCompute;TopdownL4;tma_L4_group;tma_fp_arith_grouptma_retiring * UOPS_EXECUTED.X87 / UOPS_EXECUTED.THREADtma_x87_use > 0.1 & (tma_fp_arith > 0.2 & tma_light_operations > 0.6)This metric serves as an approximation of legacy x87 usageThis metric serves as an approximation of legacy x87 usage. It accounts for instructions beyond X87 FP arithmetic operations; hence may be used as a thermometer to avoid X87 high usage and preferably upgrade to modern ISA. See Tip under Tuning Hint100%00uncore_frequencyUNC_CHA_CLOCKTICKS / (source_count(UNC_CHA_CLOCKTICKS) * #num_packages) / 1e9 / duration_timeUncore operating frequency in GHz1GHz00upi_data_receive_bwUNC_UPI_RxL_FLITS.ALL_DATA * 7.111111111111111 / 1e6 / duration_timeIntel(R) Ultra Path Interconnect (UPI) data receive bandwidth (MB/sec)1MB/s00upi_data_transmit_bwUNC_UPI_TxL_FLITS.ALL_DATA * 7.111111111111111 / 1e6 / duration_timeIntel(R) Ultra Path Interconnect (UPI) data transmit bandwidth (MB/sec)1MB/s00IoBWLLC_MISSES.PCIE_READUNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART0 + UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART1 + UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART2 + UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART3PCI Express bandwidth reading at IIO. Derived from unc_iio_data_req_of_cpu.mem_read.part0Counts every read request for 4 bytes of data made by IIO Part0 to a unit on the main die (generally memory). In the general case, Part0 refers to a standard PCIe card of any size (x16,x8,x4) that is plugged directly into one of the PCIe slots. Part0 could also refer to any device plugged into the first slot of a PCIe riser card or to a device attached to the IIO unit which starts its use of the bus using lane 0 of the 16 lanes supported by the bus4Bytes00LLC_MISSES.PCIE_WRITEUNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART0 + UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART1 + UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART2 + UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART3PCI Express bandwidth writing at IIO. Derived from unc_iio_data_req_of_cpu.mem_write.part0Counts every write request of 4 bytes of data made by IIO Part0 to a unit on the main die (generally memory). In the general case, Part0 refers to a standard PCIe card of any size (x16,x8,x4) that is plugged directly into one of the PCIe slots. 
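Both UPI bandwidth rows use the same constant, 7.111... = 64/9 bytes of payload per data flit. A minimal sketch of that conversion, with an assumed function name and inputs taken from the expressions above:

    # Sketch: UPI data bandwidth in MB/s from a flit count over an
    # interval of duration_s seconds. 64/9 bytes per ALL_DATA flit.
    def upi_bw_mb_per_s(flits_all_data: float, duration_s: float) -> float:
        return flits_all_data * (64 / 9) / 1e6 / duration_s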
UNC_M_PMM_BANDWIDTH.TOTAL
  Expression: UNC_M_PMM_RPQ_INSERTS + UNC_M_PMM_WPQ_INSERTS; Scale: 6.103515625e-5 MB/sec
  Description: Intel Optane DC persistent memory bandwidth total (MB/sec); derived from unc_m_pmm_rpq_inserts.

UNC_M_PMM_READ_LATENCY
  Expression: UNC_M_PMM_RPQ_OCCUPANCY.ALL / UNC_M_PMM_RPQ_INSERTS / UNC_M_CLOCKTICKS; Scale: 6e9 ns
  Description: Intel Optane DC persistent memory read latency (ns); derived from unc_m_pmm_rpq_occupancy.all.

power_channel_ppd
  Expression: UNC_M_POWER_CHANNEL_PPD / UNC_M_CLOCKTICKS * 100
  Description: Cycles where DRAM ranks are in power down (CKE) mode. Counts cycles when all the ranks in the channel are in PPD (PreCharge Power Down) mode. If IBT (Input Buffer Terminators)=off is enabled, this event counts the cycles in PPD mode; if IBT=off is not enabled, it counts the number of cycles when being in PPD mode could have been taken advantage of.

power_self_refresh
  Expression: UNC_M_POWER_SELF_REFRESH / UNC_M_CLOCKTICKS * 100
  Description: Cycles memory is in self-refresh power mode. Counts the number of cycles when the iMC (memory controller) is in self-refresh and has a clock. This happens in some ACPI CPU package C-states for the sleep levels. For example, the PCU (Power Control Unit) may ask the iMC to enter self-refresh even though some of the cores are still processing; one use of this is for Intel(R) Dynamic Power Technology. Self-refresh is required during package C3 and C6, but there is no clock in the iMC at this time, so it is not possible to count these cases.

IPC: INST_RETIRED.ANY / cycles - Instructions Per Cycle (per Logical Processor).
CPI: 1 / IPC - Cycles Per Instruction (per Logical Processor).
CLKS: cycles - Per-Logical-Processor actual clocks when the Logical Processor is active.
IpMispredict: INST_RETIRED.ANY / BR_MISP_RETIRED.ALL_BRANCHES - Instructions per non-speculative Branch Misprediction (JEClear).
IpBranch: INST_RETIRED.ANY / BR_INST_RETIRED.ALL_BRANCHES - Instructions per Branch (lower number means higher occurrence rate).
Instructions: INST_RETIRED.ANY - Total number of retired instructions.
L3_Cache_Fill_BW: 64 * LONGEST_LAT_CACHE.MISS / 1e9 - Average per-core data fill bandwidth to the L3 cache [GB/sec].
CPU_Utilization: CPU_CLK_UNHALTED.REF_TSC / msr@tsc@ - Average CPU utilization.
Average_Frequency: cycles / CPU_CLK_UNHALTED.REF_TSC * msr@tsc@ / 1e9 - Measured average frequency for unhalted processors [GHz].
Turbo_Utilization: cycles / CPU_CLK_UNHALTED.REF_TSC - Average frequency utilization relative to the nominal frequency.
Kernel_Utilization: cycles:k / cycles - Fraction of cycles spent in Operating System (OS) kernel mode.

dtlb_2nd_level_2mb_large_page_load_mpi
  Expression: DTLB_LOAD_MISSES.WALK_COMPLETED_2M_4M / INST_RETIRED.ANY; Unit: per instruction
  Description: Ratio of completed page walks (for 2 megabyte page sizes) caused by demand data loads to the total number of completed instructions. This implies a miss in the Data Translation Lookaside Buffer (DTLB) and further levels of TLB.

dtlb_2nd_level_load_mpi
  Expression: DTLB_LOAD_MISSES.WALK_COMPLETED / INST_RETIRED.ANY; Unit: per instruction
  Description: Ratio of completed page walks (for all page sizes) caused by demand data loads to the total number of completed instructions. This implies a miss in the DTLB and further levels of TLB.
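The basic per-thread ratios above are plain divisions of event counts. A minimal sketch, with an assumed counts dict keyed by the event names used in the table:

    # Sketch: basic ratio metrics from raw event counts of one run.
    def basic_ratios(c: dict) -> dict:
        ipc = c["INST_RETIRED.ANY"] / c["cycles"]
        return {
            "IPC": ipc,
            "CPI": 1.0 / ipc,
            "IpBranch": c["INST_RETIRED.ANY"] / c["BR_INST_RETIRED.ALL_BRANCHES"],
            "IpMispredict": c["INST_RETIRED.ANY"] / c["BR_MISP_RETIRED.ALL_BRANCHES"],
            # unhalted cycles at nominal rate vs. cycles actually run
            "Turbo_Utilization": c["cycles"] / c["CPU_CLK_UNHALTED.REF_TSC"],
            # kernel-mode share of all unhalted cycles
            "Kernel_Utilization": c["cycles:k"] / c["cycles"],
        }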
dtlb_2nd_level_store_mpi
  Expression: DTLB_STORE_MISSES.WALK_COMPLETED / INST_RETIRED.ANY; Unit: per instruction
  Description: Ratio of completed page walks (for all page sizes) caused by demand data stores to the total number of completed instructions. This implies a miss in the DTLB and further levels of TLB.

iio_bandwidth_read
  Expression: UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.ALL_PARTS * 4 / 1e6 / duration_time; Unit: MB/s
  Description: Bandwidth observed by the integrated I/O traffic controller (IIO) of IO reads that are initiated by end device controllers that are requesting memory from the CPU.

iio_bandwidth_write
  Expression: UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.ALL_PARTS * 4 / 1e6 / duration_time; Unit: MB/s
  Description: Bandwidth observed by the IIO of IO writes that are initiated by end device controllers that are writing memory to the CPU.

io_bandwidth_read
  Expression: UNC_CHA_TOR_INSERTS.IO_PCIRDCUR * 64 / 1e6 / duration_time; Unit: MB/s
  Description: Bandwidth of IO reads that are initiated by end device controllers that are requesting memory from the CPU.

io_bandwidth_read_local
  Expression: UNC_CHA_TOR_INSERTS.IO_PCIRDCUR_LOCAL * 64 / 1e6 / duration_time; Unit: MB/s
  Description: Bandwidth of IO reads that are initiated by end device controllers that are requesting memory from the local CPU socket.

io_bandwidth_read_remote
  Expression: UNC_CHA_TOR_INSERTS.IO_PCIRDCUR_REMOTE * 64 / 1e6 / duration_time; Unit: MB/s
  Description: Bandwidth of IO reads that are initiated by end device controllers that are requesting memory from a remote CPU socket.

io_bandwidth_write
  Expression: (UNC_CHA_TOR_INSERTS.IO_ITOM + UNC_CHA_TOR_INSERTS.IO_ITOMCACHENEAR) * 64 / 1e6 / duration_time; Unit: MB/s
  Description: Bandwidth of IO writes that are initiated by end device controllers that are writing memory to the CPU.

io_bandwidth_write_local
  Expression: (UNC_CHA_TOR_INSERTS.IO_ITOM_LOCAL + UNC_CHA_TOR_INSERTS.IO_ITOMCACHENEAR_LOCAL) * 64 / 1e6 / duration_time; Unit: MB/s
  Description: Bandwidth of IO writes that are initiated by end device controllers that are writing memory to the local CPU socket.

io_bandwidth_write_remote
  Expression: (UNC_CHA_TOR_INSERTS.IO_ITOM_REMOTE + UNC_CHA_TOR_INSERTS.IO_ITOMCACHENEAR_REMOTE) * 64 / 1e6 / duration_time; Unit: MB/s
  Description: Bandwidth of IO writes that are initiated by end device controllers that are writing memory to a remote CPU socket.

io_percent_of_inbound_full_writes_that_miss_l3
  Expression: UNC_CHA_TOR_INSERTS.IO_MISS_ITOM / UNC_CHA_TOR_INSERTS.IO_ITOM; Unit: percent
  Description: Percentage of inbound full-cacheline writes initiated by end device controllers that miss the L3 cache.

io_percent_of_inbound_partial_writes_that_miss_l3
  Expression: (UNC_CHA_TOR_INSERTS.IO_MISS_ITOMCACHENEAR + UNC_CHA_TOR_INSERTS.IO_MISS_RFO) / (UNC_CHA_TOR_INSERTS.IO_ITOMCACHENEAR + UNC_CHA_TOR_INSERTS.IO_RFO); Unit: percent
  Description: Percentage of inbound partial-cacheline writes initiated by end device controllers that miss the L3 cache.

io_percent_of_inbound_reads_that_miss_l3
  Expression: UNC_CHA_TOR_INSERTS.IO_MISS_PCIRDCUR / UNC_CHA_TOR_INSERTS.IO_PCIRDCUR; Unit: percent
  Description: Percentage of inbound reads initiated by end device controllers that miss the L3 cache.

itlb_2nd_level_large_page_mpi
  Expression: ITLB_MISSES.WALK_COMPLETED_2M_4M / INST_RETIRED.ANY; Unit: per instruction
  Description: Ratio of completed page walks (for 2 megabyte and 4 megabyte page sizes) caused by a code fetch to the total number of completed instructions. This implies a miss in the Instruction Translation Lookaside Buffer (ITLB) and further levels of TLB.
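The io_bandwidth_* rows all convert 64-byte TOR inserts into MB/s. A minimal sketch of the read and write variants, with assumed function and parameter names:

    # Sketch: inbound IO bandwidth per the io_bandwidth_* rows above.
    # Each TOR insert moves one 64-byte cache line.
    def io_bandwidth_mb_s(pcirdcur: float, itom: float,
                          itom_cachenear: float, duration_s: float):
        read_mb_s = pcirdcur * 64 / 1e6 / duration_s
        # full-line (ItoM) plus partial-line (ItoMCacheNear) writes
        write_mb_s = (itom + itom_cachenear) * 64 / 1e6 / duration_s
        return read_mb_s, write_mb_s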
itlb_2nd_level_mpi
  Expression: ITLB_MISSES.WALK_COMPLETED / INST_RETIRED.ANY; Unit: per instruction
  Description: Ratio of completed page walks (for all page sizes) caused by a code fetch to the total number of completed instructions. This implies a miss in the ITLB (Instruction TLB) and further levels of TLB.

llc_code_read_mpi_demand_plus_prefetch
  Expression: UNC_CHA_TOR_INSERTS.IA_MISS_CRD / INST_RETIRED.ANY; Unit: per instruction
  Description: Ratio of code read requests missing the last level core cache (includes demand with prefetches) to the total number of completed instructions.

llc_data_read_mpi_demand_plus_prefetch
  Expression: (UNC_CHA_TOR_INSERTS.IA_MISS_LLCPREFDATA + UNC_CHA_TOR_INSERTS.IA_MISS_DRD + UNC_CHA_TOR_INSERTS.IA_MISS_DRD_PREF) / INST_RETIRED.ANY; Unit: per instruction
  Description: Ratio of data read requests missing the last level core cache (includes demand with prefetches) to the total number of completed instructions.

llc_demand_data_read_miss_latency
  Expression: 1e9 * (UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD / UNC_CHA_TOR_INSERTS.IA_MISS_DRD) / (UNC_CHA_CLOCKTICKS / (source_count(UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD) * #num_packages)) * duration_time; Unit: ns
  Description: Average latency of a last level cache (LLC) demand data read miss (read memory access) in nanoseconds.

llc_demand_data_read_miss_latency_for_local_requests
  Expression: 1e9 * (UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_LOCAL / UNC_CHA_TOR_INSERTS.IA_MISS_DRD_LOCAL) / (UNC_CHA_CLOCKTICKS / (source_count(UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_LOCAL) * #num_packages)) * duration_time; Unit: ns
  Description: Average latency of an LLC demand data read miss addressed to local memory, in nanoseconds.

llc_demand_data_read_miss_latency_for_remote_requests
  Expression: 1e9 * (UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_REMOTE / UNC_CHA_TOR_INSERTS.IA_MISS_DRD_REMOTE) / (UNC_CHA_CLOCKTICKS / (source_count(UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_REMOTE) * #num_packages)) * duration_time; Unit: ns
  Description: Average latency of an LLC demand data read miss addressed to remote memory, in nanoseconds.

llc_demand_data_read_miss_to_dram_latency
  Expression: 1e9 * (UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_DDR / UNC_CHA_TOR_INSERTS.IA_MISS_DRD_DDR) / (UNC_CHA_CLOCKTICKS / (source_count(UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_DDR) * #num_packages)) * duration_time; Unit: ns
  Description: Average latency of an LLC demand data read miss addressed to DRAM, in nanoseconds.

llc_demand_data_read_miss_to_pmem_latency
  Expression: 1e9 * (UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_PMM / UNC_CHA_TOR_INSERTS.IA_MISS_DRD_PMM) / (UNC_CHA_CLOCKTICKS / (source_count(UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_PMM) * #num_packages)) * duration_time; Unit: ns
  Description: Average latency of an LLC demand data read miss addressed to Intel(R) Optane(TM) Persistent Memory (PMEM), in nanoseconds.

memory_extra_write_bw_due_to_directory_updates
  Expression: (UNC_CHA_DIR_UPDATE.HA + UNC_CHA_DIR_UPDATE.TOR + UNC_M2M_DIRECTORY_UPDATE.ANY) * 64 / 1e6 / duration_time; Unit: MB/s
  Description: Memory write bandwidth (MB/sec) caused by directory updates; includes DDR and Intel(R) Optane(TM) Persistent Memory (PMEM).

numa_reads_addressed_to_local_dram
  Expression: (UNC_CHA_TOR_INSERTS.IA_MISS_DRD_LOCAL + UNC_CHA_TOR_INSERTS.IA_MISS_DRD_PREF_LOCAL) / (UNC_CHA_TOR_INSERTS.IA_MISS_DRD_LOCAL + UNC_CHA_TOR_INSERTS.IA_MISS_DRD_PREF_LOCAL + UNC_CHA_TOR_INSERTS.IA_MISS_DRD_REMOTE + UNC_CHA_TOR_INSERTS.IA_MISS_DRD_PREF_REMOTE); Unit: percent
  Description: Memory reads that miss the last level cache (LLC) addressed to local DRAM, as a percentage of total memory read accesses; does not include LLC prefetches.
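The llc_demand_data_read_miss_latency family follows one pattern: average TOR occupancy per insert, measured in CHA clocks, divided by the per-package CHA clock rate. A minimal sketch under those assumptions (function and parameter names are illustrative):

    # Sketch: average LLC demand-data-read miss latency in ns, per the
    # llc_demand_data_read_miss_latency formula above.
    def llc_drd_miss_latency_ns(occupancy, inserts, cha_clockticks,
                                n_cha_sources, n_packages, duration_s):
        # CHA clocks accumulated per package over the interval
        cha_clk_per_pkg = cha_clockticks / (n_cha_sources * n_packages)
        # occupancy/inserts = average residency in CHA clocks;
        # scale to seconds via the per-package clock rate, then to ns
        return 1e9 * (occupancy / inserts) / cha_clk_per_pkg * duration_s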
numa_reads_addressed_to_remote_dram
  Expression: (UNC_CHA_TOR_INSERTS.IA_MISS_DRD_REMOTE + UNC_CHA_TOR_INSERTS.IA_MISS_DRD_PREF_REMOTE) / (UNC_CHA_TOR_INSERTS.IA_MISS_DRD_LOCAL + UNC_CHA_TOR_INSERTS.IA_MISS_DRD_PREF_LOCAL + UNC_CHA_TOR_INSERTS.IA_MISS_DRD_REMOTE + UNC_CHA_TOR_INSERTS.IA_MISS_DRD_PREF_REMOTE); Unit: percent
  Description: Memory reads that miss the last level cache (LLC) addressed to remote DRAM, as a percentage of total memory read accesses; does not include LLC prefetches.

tma_alu_op_utilization [TopdownL5; tma_L5_group; tma_ports_utilized_3m_group]
  Expression: (UOPS_DISPATCHED.PORT_0 + UOPS_DISPATCHED.PORT_1 + UOPS_DISPATCHED.PORT_5_11 + UOPS_DISPATCHED.PORT_6) / (5 * tma_info_core_core_clks)
  Threshold: tma_alu_op_utilization > 0.4; Unit: percent
  Description: Core fraction of cycles the CPU dispatched uops on execution ports for ALU operations.

tma_amx_busy [BvCB; Compute; HPC; Server; TopdownL3; tma_L3_group; tma_core_bound_group]
  Expression: EXE.AMX_BUSY / tma_info_core_core_clks
  Threshold: tma_amx_busy > 0.5 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2); Unit: percent
  Description: Estimates the fraction of cycles where the Advanced Matrix eXtensions (AMX) execution engine was busy with tile (arithmetic) operations.

tma_assists [BvIO; TopdownL4; tma_L4_group; tma_microcode_sequencer_group]
  Expression: 78 * ASSISTS.ANY / tma_info_thread_slots
  Threshold: tma_assists > 0.1 & (tma_microcode_sequencer > 0.05 & tma_heavy_operations > 0.1); Unit: percent
  Description: Estimates the fraction of slots the CPU retired uops delivered by the Microcode Sequencer as a result of Assists. Assists are long sequences of uops that are required in certain corner cases for operations that cannot be handled natively by the execution pipeline. For example, when working with very small floating-point values (so-called Denormals), the FP units are not set up to perform these operations natively; instead, a sequence of instructions to perform the computation on the Denormals is injected into the pipeline. Since these microcode sequences might be dozens of uops long, Assists can be extremely deleterious to performance, and they can be avoided in many cases. Sample with: ASSISTS.ANY.

tma_avx_assists [HPC; TopdownL5; tma_L5_group; tma_assists_group]
  Expression: 63 * ASSISTS.SSE_AVX_MIX / tma_info_thread_slots
  Threshold: tma_avx_assists > 0.1; Unit: percent
  Description: Estimates the fraction of slots the CPU retired uops as a result of handling SSE-to-AVX* or AVX*-to-SSE transition Assists.

tma_backend_bound [BvOB; Default; TmaL1; TopdownL1; tma_L1_group; default group: TopdownL1]
  Expression: topdown\-be\-bound / (topdown\-fe\-bound + topdown\-bad\-spec + topdown\-retiring + topdown\-be\-bound) + 0 * tma_info_thread_slots
  Threshold: tma_backend_bound > 0.2; Unit: percent
  Description: This category represents the fraction of slots where no uops are being delivered due to a lack of required resources for accepting new uops in the Backend. The Backend is the portion of the processor core where the out-of-order scheduler dispatches ready uops into their respective execution units, and once completed these uops get retired according to program order. For example, stalls due to data-cache misses or stalls due to the divider unit being overloaded are both categorized under Backend Bound. Backend Bound is further divided into two main categories: Memory Bound and Core Bound. Sample with: TOPDOWN.BACKEND_BOUND_SLOTS.
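The two numa_reads_* rows are complementary shares of the same four counters. A minimal sketch (assumed names; LLC prefetches excluded, as the descriptions note):

    # Sketch: share of LLC-miss data reads served from local DRAM, per
    # the numa_reads_addressed_to_* rows above.
    def numa_local_read_fraction(local, pref_local,
                                 remote, pref_remote) -> float:
        total = local + pref_local + remote + pref_remote
        return (local + pref_local) / total  # remote share is 1 - this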
tma_branch_mispredicts [BadSpec; BrMispredicts; BvMP; Default; TmaL2; TopdownL2; tma_L2_group; tma_bad_speculation_group; tma_issueBM; default group: TopdownL2]
  Expression: topdown\-br\-mispredict / (topdown\-fe\-bound + topdown\-bad\-spec + topdown\-retiring + topdown\-be\-bound) + 0 * tma_info_thread_slots
  Threshold: tma_branch_mispredicts > 0.1 & tma_bad_speculation > 0.15; Unit: percent
  Description: Fraction of slots the CPU has wasted due to Branch Misprediction. These slots are either wasted by uops fetched from an incorrectly speculated program path, or stalls when the out-of-order part of the machine needs to recover its state from a speculative path. Sample with: TOPDOWN.BR_MISPREDICT_SLOTS. Related metrics: tma_info_bad_spec_branch_misprediction_cost, tma_info_bottleneck_mispredictions, tma_mispredicts_resteers.

tma_c01_wait [C0Wait; TopdownL4; tma_L4_group; tma_serializing_operation_group]
  Expression: CPU_CLK_UNHALTED.C01 / tma_info_thread_clks
  Threshold: tma_c01_wait > 0.05 & (tma_serializing_operation > 0.1 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2)); Unit: percent
  Description: Fraction of cycles the CPU was stalled due to staying in the C0.1 power-performance optimized state (faster wakeup time, smaller power savings).

tma_c02_wait [C0Wait; TopdownL4; tma_L4_group; tma_serializing_operation_group]
  Expression: CPU_CLK_UNHALTED.C02 / tma_info_thread_clks
  Threshold: tma_c02_wait > 0.05 & (tma_serializing_operation > 0.1 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2)); Unit: percent
  Description: Fraction of cycles the CPU was stalled due to staying in the C0.2 power-performance optimized state (slower wakeup time, larger power savings).

tma_clears_resteers [BadSpec; MachineClears; TopdownL4; tma_L4_group; tma_branch_resteers_group; tma_issueMC]
  Expression: (1 - tma_branch_mispredicts / tma_bad_speculation) * INT_MISC.CLEAR_RESTEER_CYCLES / tma_info_thread_clks
  Threshold: tma_clears_resteers > 0.05 & (tma_branch_resteers > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15)); Unit: percent
  Description: Fraction of cycles the CPU was stalled due to Branch Resteers as a result of Machine Clears. Sample with: INT_MISC.CLEAR_RESTEER_CYCLES. Related metrics: tma_l1_bound, tma_machine_clears, tma_microcode_sequencer, tma_ms_switches.

tma_contested_accesses [BvMS; DataSharing; Offcore; Snoop; TopdownL4; tma_L4_group; tma_issueSyncxn; tma_l3_bound_group]
  Expression: (76.6 * tma_info_system_core_frequency * (MEM_LOAD_L3_HIT_RETIRED.XSNP_FWD * (OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HITM / (OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HITM + OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HIT_WITH_FWD))) + 74.6 * tma_info_system_core_frequency * MEM_LOAD_L3_HIT_RETIRED.XSNP_MISS) * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS / 2) / tma_info_thread_clks
  Threshold: tma_contested_accesses > 0.05 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)); Unit: percent
  Description: Estimates the fraction of cycles the memory subsystem was handling synchronizations due to contested accesses. Contested accesses occur when data written by one Logical Processor is read by another Logical Processor on a different Physical Core. Examples of contested accesses include synchronizations such as locks, true data sharing such as modified locked variables, and false sharing. Sample with: MEM_LOAD_L3_HIT_RETIRED.XSNP_FWD; MEM_LOAD_L3_HIT_RETIRED.XSNP_MISS. Related metrics: tma_data_sharing, tma_false_sharing, tma_machine_clears, tma_remote_cache.

tma_core_bound [Backend; Compute; Default; TmaL2; TopdownL2; tma_L2_group; tma_backend_bound_group; default group: TopdownL2]
  Expression: max(0, tma_backend_bound - tma_memory_bound)
  Threshold: tma_core_bound > 0.1 & tma_backend_bound > 0.2; Unit: percent
  Description: Fraction of slots where Core non-memory issues were a bottleneck. Shortage in hardware compute resources and dependencies in the software's instructions are both categorized under Core Bound. Hence it may indicate that the machine ran out of an out-of-order resource, that certain execution units are overloaded, or that dependencies in the program's data- or instruction-flow are limiting the performance (e.g. FP-chained long-latency arithmetic operations).

tma_data_sharing [BvMS; Offcore; Snoop; TopdownL4; tma_L4_group; tma_issueSyncxn; tma_l3_bound_group]
  Expression: 74.6 * tma_info_system_core_frequency * (MEM_LOAD_L3_HIT_RETIRED.XSNP_NO_FWD + MEM_LOAD_L3_HIT_RETIRED.XSNP_FWD * (1 - OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HITM / (OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HITM + OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HIT_WITH_FWD))) * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS / 2) / tma_info_thread_clks
  Threshold: tma_data_sharing > 0.05 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)); Unit: percent
  Description: Estimates the fraction of cycles the memory subsystem was handling synchronizations due to data-sharing accesses. Data shared by multiple Logical Processors (even just read-shared) may cause increased access latency due to cache coherency. Excessive data sharing can drastically harm multithreaded performance. Sample with: MEM_LOAD_L3_HIT_RETIRED.XSNP_NO_FWD. Related metrics: tma_contested_accesses, tma_false_sharing, tma_machine_clears, tma_remote_cache.

tma_divider [BvCB; TopdownL3; tma_L3_group; tma_core_bound_group]
  Expression: ARITH.DIV_ACTIVE / tma_info_thread_clks
  Threshold: tma_divider > 0.2 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2); Unit: percent
  Description: Fraction of cycles where the Divider unit was active. Divide and square-root instructions are performed by the Divider unit and can take considerably longer latency than integer or floating-point addition, subtraction, or multiplication. Sample with: ARITH.DIVIDER_ACTIVE.

tma_dram_bound [MemoryBound; TmaL3mem; TopdownL3; tma_L3_group; tma_memory_bound_group]
  Expression: MEMORY_ACTIVITY.STALLS_L3_MISS / tma_info_thread_clks
  Threshold: tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2); Unit: percent
  Description: Estimates how often the CPU was stalled on accesses to external memory (DRAM) by loads. Better caching can improve the latency and increase performance. Sample with: MEM_LOAD_RETIRED.L3_MISS_PS.

tma_dtlb_load [BvMT; MemoryTLB; TopdownL4; tma_L4_group; tma_issueTLB; tma_l1_bound_group]
  Expression: min(7 * cpu@DTLB_LOAD_MISSES.STLB_HIT\,cmask\=1@ + DTLB_LOAD_MISSES.WALK_ACTIVE, max(CYCLE_ACTIVITY.CYCLES_MEM_ANY - MEMORY_ACTIVITY.CYCLES_L1D_MISS, 0)) / tma_info_thread_clks
  Threshold: tma_dtlb_load > 0.1 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)); Unit: percent
  Description: Roughly estimates the fraction of cycles where the Data TLB (DTLB) was missed by load accesses. TLBs (Translation Look-aside Buffers) are processor caches for recently used entries out of the page tables that map virtual to physical addresses for the operating system. This metric approximates the potential delay of demand loads missing the first-level data TLB (assuming a worst-case scenario with back-to-back misses to different pages). It includes hitting in the second-level TLB (STLB) as well as performing a hardware page walk on an STLB miss. Sample with: MEM_INST_RETIRED.STLB_MISS_LOADS_PS. Related metrics: tma_dtlb_store, tma_info_bottleneck_memory_data_tlbs, tma_info_bottleneck_memory_synchronization.

tma_dtlb_store [BvMT; MemoryTLB; TopdownL4; tma_L4_group; tma_issueTLB; tma_store_bound_group]
  Expression: (7 * cpu@DTLB_STORE_MISSES.STLB_HIT\,cmask\=1@ + DTLB_STORE_MISSES.WALK_ACTIVE) / tma_info_core_core_clks
  Threshold: tma_dtlb_store > 0.05 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)); Unit: percent
  Description: Roughly estimates the fraction of cycles spent handling first-level data TLB store misses. As with ordinary data caching, focus on improving data locality and reducing the working-set size to reduce DTLB overhead. Additionally, consider using profile-guided optimization (PGO) to collocate frequently-used data on the same page. Try using larger page sizes for large amounts of frequently-used data. Sample with: MEM_INST_RETIRED.STLB_MISS_STORES_PS. Related metrics: tma_dtlb_load, tma_info_bottleneck_memory_data_tlbs, tma_info_bottleneck_memory_synchronization.

tma_false_sharing [BvMS; DataSharing; Offcore; Snoop; TopdownL4; tma_L4_group; tma_issueSyncxn; tma_store_bound_group]
  Expression: 81 * tma_info_system_core_frequency * OCR.DEMAND_RFO.L3_HIT.SNOOP_HITM / tma_info_thread_clks
  Threshold: tma_false_sharing > 0.05 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)); Unit: percent
  Description: Roughly estimates how often the CPU was handling synchronizations due to False Sharing. False Sharing is a multithreading hiccup where multiple Logical Processors contend on different data elements mapped into the same cache line. Sample with: OCR.DEMAND_RFO.L3_HIT.SNOOP_HITM. Related metrics: tma_contested_accesses, tma_data_sharing, tma_machine_clears, tma_remote_cache.

tma_fb_full [BvMS; MemoryBW; TopdownL4; tma_L4_group; tma_issueBW; tma_issueSL; tma_issueSmSt; tma_l1_bound_group]
  Expression: L1D_PEND_MISS.FB_FULL / tma_info_thread_clks
  Threshold: tma_fb_full > 0.3; Unit: percent
  Description: A rough estimation of how often L1D Fill Buffer unavailability limited additional L1D-miss memory access requests from proceeding. The higher the metric value, the deeper the memory-hierarchy level the misses are satisfied from (metric values > 1 are valid). Often it hints at approaching bandwidth limits (to L2 cache, L3 cache, or external memory). Related metrics: tma_info_bottleneck_cache_memory_bandwidth, tma_info_system_dram_bw_use, tma_mem_bandwidth, tma_sq_full, tma_store_latency, tma_streaming_stores.
tma_fetch_bandwidth [Default; FetchBW; Frontend; TmaL2; TopdownL2; tma_L2_group; tma_frontend_bound_group; tma_issueFB; default group: TopdownL2]
  Expression: max(0, tma_frontend_bound - tma_fetch_latency)
  Threshold: tma_fetch_bandwidth > 0.2; Unit: percent
  Description: Fraction of slots the CPU was stalled due to Frontend bandwidth issues. For example, inefficiencies at the instruction decoders, or restrictions for caching in the DSB (decoded uops cache), are categorized under Fetch Bandwidth. In such cases, the Frontend typically delivers a suboptimal amount of uops to the Backend. Sample with: FRONTEND_RETIRED.LATENCY_GE_2_BUBBLES_GE_1_PS; FRONTEND_RETIRED.LATENCY_GE_1_PS; FRONTEND_RETIRED.LATENCY_GE_2_PS. Related metrics: tma_dsb_switches, tma_info_botlnk_l2_dsb_bandwidth, tma_info_botlnk_l2_dsb_misses, tma_info_frontend_dsb_coverage, tma_info_inst_mix_iptb, tma_lcp.

tma_fetch_latency [Default; Frontend; TmaL2; TopdownL2; tma_L2_group; tma_frontend_bound_group; default group: TopdownL2]
  Expression: topdown\-fetch\-lat / (topdown\-fe\-bound + topdown\-bad\-spec + topdown\-retiring + topdown\-be\-bound) - INT_MISC.UOP_DROPPING / tma_info_thread_slots
  Threshold: tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15; Unit: percent
  Description: Fraction of slots the CPU was stalled due to Frontend latency issues. For example, instruction-cache misses, iTLB misses, or fetch stalls after a branch misprediction are categorized under Frontend Latency. In such cases, the Frontend eventually delivers no uops for some period. Sample with: FRONTEND_RETIRED.LATENCY_GE_16_PS; FRONTEND_RETIRED.LATENCY_GE_8_PS.

tma_fp_assists [HPC; TopdownL5; tma_L5_group; tma_assists_group]
  Expression: 30 * ASSISTS.FP / tma_info_thread_slots
  Threshold: tma_fp_assists > 0.1; Unit: percent
  Description: Roughly estimates the fraction of slots the CPU retired uops as a result of handling Floating Point (FP) Assists. FP Assists may apply when working with very small floating-point values (so-called Denormals).

tma_fp_scalar [Compute; Flops; TopdownL4; tma_L4_group; tma_fp_arith_group; tma_issue2P]
  Expression: (FP_ARITH_INST_RETIRED.SCALAR + FP_ARITH_INST_RETIRED2.SCALAR) / (tma_retiring * tma_info_thread_slots)
  Threshold: tma_fp_scalar > 0.1 & (tma_fp_arith > 0.2 & tma_light_operations > 0.6); Unit: percent
  Description: Approximates the fraction of arithmetic floating-point (FP) scalar uops the CPU has retired. May overcount due to FMA double counting. Related metrics: tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_int_vector_128b, tma_int_vector_256b, tma_port_0, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2.

tma_fp_vector [Compute; Flops; TopdownL4; tma_L4_group; tma_fp_arith_group; tma_issue2P]
  Expression: (FP_ARITH_INST_RETIRED.VECTOR + FP_ARITH_INST_RETIRED2.VECTOR) / (tma_retiring * tma_info_thread_slots)
  Threshold: tma_fp_vector > 0.1 & (tma_fp_arith > 0.2 & tma_light_operations > 0.6); Unit: percent
  Description: Approximates the fraction of arithmetic floating-point (FP) vector uops the CPU has retired, aggregated across all vector widths. May overcount due to FMA double counting. Related metrics: tma_fp_scalar, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_int_vector_128b, tma_int_vector_256b, tma_port_0, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2.

tma_fp_vector_128b [Compute; Flops; TopdownL5; tma_L5_group; tma_fp_vector_group; tma_issue2P]
  Expression: (FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED2.128B_PACKED_HALF) / (tma_retiring * tma_info_thread_slots)
  Threshold: tma_fp_vector_128b > 0.1 & (tma_fp_vector > 0.1 & (tma_fp_arith > 0.2 & tma_light_operations > 0.6)); Unit: percent
  Description: Approximates the fraction of arithmetic FP vector uops the CPU has retired for 128-bit wide vectors. May overcount due to FMA double counting. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_256b, tma_fp_vector_512b, tma_int_vector_128b, tma_int_vector_256b, tma_port_0, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2.

tma_fp_vector_256b [Compute; Flops; TopdownL5; tma_L5_group; tma_fp_vector_group; tma_issue2P]
  Expression: (FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE + FP_ARITH_INST_RETIRED2.256B_PACKED_HALF) / (tma_retiring * tma_info_thread_slots)
  Threshold: tma_fp_vector_256b > 0.1 & (tma_fp_vector > 0.1 & (tma_fp_arith > 0.2 & tma_light_operations > 0.6)); Unit: percent
  Description: Approximates the fraction of arithmetic FP vector uops the CPU has retired for 256-bit wide vectors. May overcount due to FMA double counting. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_512b, tma_int_vector_128b, tma_int_vector_256b, tma_port_0, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2.

tma_fp_vector_512b [Compute; Flops; TopdownL5; tma_L5_group; tma_fp_vector_group; tma_issue2P]
  Expression: (FP_ARITH_INST_RETIRED.512B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE + FP_ARITH_INST_RETIRED2.512B_PACKED_HALF) / (tma_retiring * tma_info_thread_slots)
  Threshold: tma_fp_vector_512b > 0.1 & (tma_fp_vector > 0.1 & (tma_fp_arith > 0.2 & tma_light_operations > 0.6)); Unit: percent
  Description: Approximates the fraction of arithmetic FP vector uops the CPU has retired for 512-bit wide vectors. May overcount due to FMA double counting. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_int_vector_128b, tma_int_vector_256b, tma_port_0, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2.

tma_frontend_bound [BvFB; BvIO; Default; PGO; TmaL1; TopdownL1; tma_L1_group; default group: TopdownL1]
  Expression: topdown\-fe\-bound / (topdown\-fe\-bound + topdown\-bad\-spec + topdown\-retiring + topdown\-be\-bound) - INT_MISC.UOP_DROPPING / tma_info_thread_slots
  Threshold: tma_frontend_bound > 0.15; Unit: percent
  Description: This category represents the fraction of slots where the processor's Frontend undersupplies its Backend. The Frontend denotes the first part of the processor core, responsible for fetching operations that are executed later on by the Backend part. Within the Frontend, a branch predictor predicts the next address to fetch; cache lines are fetched from the memory subsystem, parsed into instructions, and lastly decoded into micro-operations (uops). Ideally the Frontend can issue Pipeline_Width uops every cycle to the Backend. Frontend Bound denotes unutilized issue slots when there is no Backend stall, i.e. bubbles where the Frontend delivered no uops while the Backend could have accepted them. For example, stalls due to instruction-cache misses would be categorized under Frontend Bound. Sample with: FRONTEND_RETIRED.LATENCY_GE_4_PS.

tma_fused_instructions [Branches; BvBO; Pipeline; TopdownL3; tma_L3_group; tma_light_operations_group]
  Expression: tma_light_operations * INST_RETIRED.MACRO_FUSED / (tma_retiring * tma_info_thread_slots)
  Threshold: tma_fused_instructions > 0.1 & tma_light_operations > 0.6; Unit: percent
  Description: Fraction of slots where the CPU was retiring fused instructions - where one uop can represent multiple contiguous instructions. CMP+JCC or DEC+JCC are common examples of legacy fusions. ([MTL] Note new MOV+OP and Load+OP fusions appear under Other_Light_Ops in MTL!)

tma_heavy_operations [Default; Retire; TmaL2; TopdownL2; tma_L2_group; tma_retiring_group; default group: TopdownL2]
  Expression: topdown\-heavy\-ops / (topdown\-fe\-bound + topdown\-bad\-spec + topdown\-retiring + topdown\-be\-bound) + 0 * tma_info_thread_slots
  Threshold: tma_heavy_operations > 0.1; Unit: percent
  Description: Fraction of slots where the CPU was retiring heavy-weight operations - instructions that require two or more uops, or micro-coded sequences. This highly correlates with the uop length of these instructions/sequences. ([ICL+] Note this may overcount due to approximation using indirect events; [ADL+].)

tma_icache_misses [BigFootprint; BvBC; FetchLat; IcMiss; TopdownL3; tma_L3_group; tma_fetch_latency_group]
  Expression: ICACHE_DATA.STALLS / tma_info_thread_clks
  Threshold: tma_icache_misses > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15); Unit: percent
  Description: Fraction of cycles the CPU was stalled due to instruction cache misses. Sample with: FRONTEND_RETIRED.L2_MISS_PS; FRONTEND_RETIRED.L1I_MISS_PS.
tma_info_bad_spec_ipmisp_cond_ntaken [Bad; BrMispredicts]: INST_RETIRED.ANY / BR_MISP_RETIRED.COND_NTAKEN; Threshold: < 200 - Instructions per retired mispredict for conditional non-taken branches (lower number means higher occurrence rate).
tma_info_bad_spec_ipmisp_cond_taken [Bad; BrMispredicts]: INST_RETIRED.ANY / BR_MISP_RETIRED.COND_TAKEN; Threshold: < 200 - Instructions per retired mispredict for conditional taken branches (lower number means higher occurrence rate).
tma_info_bad_spec_ipmisp_indirect [Bad; BrMispredicts]: INST_RETIRED.ANY / BR_MISP_RETIRED.INDIRECT; Threshold: < 1e3 - Instructions per retired mispredict for indirect CALL or JMP branches (lower number means higher occurrence rate).
tma_info_bad_spec_ipmisp_ret [Bad; BrMispredicts]: INST_RETIRED.ANY / BR_MISP_RETIRED.RET; Threshold: < 500 - Instructions per retired mispredict for return branches (lower number means higher occurrence rate).

tma_info_botlnk_l2_dsb_misses [DSBmiss; Fed; tma_issueFB]
  Expression: 100 * (tma_fetch_latency * tma_dsb_switches / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches) + tma_fetch_bandwidth * tma_mite / (tma_dsb + tma_mite))
  Threshold: tma_info_botlnk_l2_dsb_misses > 10
  Description: Total pipeline cost of DSB (uop cache) misses - a subset of the Instruction_Fetch_BW bottleneck. Related metrics: tma_dsb_switches, tma_fetch_bandwidth, tma_info_botlnk_l2_dsb_bandwidth, tma_info_frontend_dsb_coverage, tma_info_inst_mix_iptb, tma_lcp.

tma_info_bottleneck_cache_memory_bandwidth [BvMB; Mem; MemoryBW; Offcore; tma_issueBW]
  Expression: 100 * (tma_memory_bound * (tma_dram_bound / (tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_dram_bound + tma_store_bound)) * (tma_mem_bandwidth / (tma_mem_bandwidth + tma_mem_latency)) + tma_memory_bound * (tma_l3_bound / (tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_dram_bound + tma_store_bound)) * (tma_sq_full / (tma_contested_accesses + tma_data_sharing + tma_l3_hit_latency + tma_sq_full)) + tma_memory_bound * (tma_l1_bound / (tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_dram_bound + tma_store_bound)) * (tma_fb_full / (tma_dtlb_load + tma_store_fwd_blk + tma_l1_hit_latency + tma_lock_latency + tma_split_loads + tma_fb_full)))
  Threshold: tma_info_bottleneck_cache_memory_bandwidth > 20
  Description: Total pipeline cost of external memory- or cache-bandwidth related bottlenecks. Related metrics: tma_fb_full, tma_info_system_dram_bw_use, tma_mem_bandwidth, tma_sq_full.

tma_info_bottleneck_cache_memory_latency [BvML; Mem; MemoryLat; Offcore; tma_issueLat]
  Expression: 100 * (tma_memory_bound * (tma_dram_bound / (tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_dram_bound + tma_store_bound)) * (tma_mem_latency / (tma_mem_bandwidth + tma_mem_latency)) + tma_memory_bound * (tma_l3_bound / (tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_dram_bound + tma_store_bound)) * (tma_l3_hit_latency / (tma_contested_accesses + tma_data_sharing + tma_l3_hit_latency + tma_sq_full)) + tma_memory_bound * tma_l2_bound / (tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_dram_bound + tma_store_bound) + tma_memory_bound * (tma_store_bound / (tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_dram_bound + tma_store_bound)) * (tma_store_latency / (tma_store_latency + tma_false_sharing + tma_split_stores + tma_streaming_stores + tma_dtlb_store)) + tma_memory_bound * (tma_l1_bound / (tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_dram_bound + tma_store_bound)) * (tma_l1_hit_latency / (tma_dtlb_load + tma_store_fwd_blk + tma_l1_hit_latency + tma_lock_latency + tma_split_loads + tma_fb_full)))
  Threshold: tma_info_bottleneck_cache_memory_latency > 20
  Description: Total pipeline cost of external memory- or cache-latency related bottlenecks. Related metrics: tma_l3_hit_latency, tma_mem_latency.

tma_info_bottleneck_compute_bound_est [BvCB; Cor; tma_issueComp]
  Expression: 100 * (tma_core_bound * tma_divider / (tma_amx_busy + tma_divider + tma_ports_utilization + tma_serializing_operation) + tma_core_bound * tma_amx_busy / (tma_amx_busy + tma_divider + tma_ports_utilization + tma_serializing_operation) + tma_core_bound * (tma_ports_utilization / (tma_amx_busy + tma_divider + tma_ports_utilization + tma_serializing_operation)) * (tma_ports_utilized_3m / (tma_ports_utilized_0 + tma_ports_utilized_1 + tma_ports_utilized_2 + tma_ports_utilized_3m)))
  Threshold: tma_info_bottleneck_compute_bound_est > 20
  Description: Total pipeline cost when the execution is compute-bound - an estimation. Covers Core Bound when ILP is high, as well as when long-latency execution units are busy.

tma_info_bottleneck_instruction_fetch_bw [BvFB; Fed; FetchBW; Frontend]
  Expression: 100 * (tma_frontend_bound - (1 - 10 * tma_microcode_sequencer * tma_other_mispredicts / tma_branch_mispredicts) * tma_fetch_latency * tma_mispredicts_resteers / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches) - (1 - INST_RETIRED.REP_ITERATION / cpu@UOPS_RETIRED.MS\,cmask\=1@) * (tma_fetch_latency * (tma_ms_switches + tma_branch_resteers * (tma_clears_resteers + tma_mispredicts_resteers * tma_other_mispredicts / tma_branch_mispredicts) / (tma_clears_resteers + tma_mispredicts_resteers + tma_unknown_branches)) / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches))) - tma_info_bottleneck_big_code
  Threshold: tma_info_bottleneck_instruction_fetch_bw > 20
  Description: Total pipeline cost of instruction-fetch-bandwidth related bottlenecks (when the front-end could not sustain operations delivery to the back-end).

tma_info_bottleneck_irregular_overhead [Bad; BvIO; Cor; Ret; tma_issueMS]
  Expression: 100 * ((1 - INST_RETIRED.REP_ITERATION / cpu@UOPS_RETIRED.MS\,cmask\=1@) * (tma_fetch_latency * (tma_ms_switches + tma_branch_resteers * (tma_clears_resteers + tma_mispredicts_resteers * tma_other_mispredicts / tma_branch_mispredicts) / (tma_clears_resteers + tma_mispredicts_resteers + tma_unknown_branches)) / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches)) + 10 * tma_microcode_sequencer * tma_other_mispredicts / tma_branch_mispredicts * tma_branch_mispredicts + tma_machine_clears * tma_other_nukes / tma_other_nukes + tma_core_bound * (tma_serializing_operation + cpu@RS.EMPTY\,umask\=1@ / tma_info_thread_clks * tma_ports_utilized_0) / (tma_amx_busy + tma_divider + tma_ports_utilization + tma_serializing_operation) + tma_microcode_sequencer / (tma_few_uops_instructions + tma_microcode_sequencer) * (tma_assists / tma_microcode_sequencer) * tma_heavy_operations)
  Threshold: tma_info_bottleneck_irregular_overhead > 10
  Description: Total pipeline cost of irregular execution (e.g. FP-assists in HPC, wait time with work imbalance in multithreaded workloads, overhead in system services or virtualized environments). Related metrics: tma_microcode_sequencer, tma_ms_switches.

tma_info_bottleneck_memory_data_tlbs [BvMT; Mem; MemoryTLB; Offcore; tma_issueTLB]
  Expression: 100 * (tma_memory_bound * (tma_l1_bound / max(tma_memory_bound, tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_dram_bound + tma_store_bound)) * (tma_dtlb_load / max(tma_l1_bound, tma_dtlb_load + tma_store_fwd_blk + tma_l1_hit_latency + tma_lock_latency + tma_split_loads + tma_fb_full)) + tma_memory_bound * (tma_store_bound / (tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_dram_bound + tma_store_bound)) * (tma_dtlb_store / (tma_store_latency + tma_false_sharing + tma_split_stores + tma_streaming_stores + tma_dtlb_store)))
  Threshold: tma_info_bottleneck_memory_data_tlbs > 20
  Description: Total pipeline cost of Memory Address Translation related bottlenecks (data-side TLBs). Related metrics: tma_dtlb_load, tma_dtlb_store, tma_info_bottleneck_memory_synchronization.

tma_info_bottleneck_memory_synchronization [BvMS; Mem; Offcore; tma_issueTLB]
  Expression: 100 * (tma_memory_bound * (tma_dram_bound / (tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_dram_bound + tma_store_bound) * (tma_mem_latency / (tma_mem_bandwidth + tma_mem_latency)) * tma_remote_cache / (tma_local_mem + tma_remote_mem + tma_remote_cache) + tma_l3_bound / (tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_dram_bound + tma_store_bound) * (tma_contested_accesses + tma_data_sharing) / (tma_contested_accesses + tma_data_sharing + tma_l3_hit_latency + tma_sq_full) + tma_store_bound / (tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_dram_bound + tma_store_bound) * tma_false_sharing / (tma_store_latency + tma_false_sharing + tma_split_stores + tma_streaming_stores + tma_dtlb_store - tma_store_latency)) + tma_machine_clears * (1 - tma_other_nukes / tma_other_nukes))
  Threshold: tma_info_bottleneck_memory_synchronization > 10
  Description: Total pipeline cost of Memory Synchronization related bottlenecks (data transfers and coherency updates across processors). Related metrics: tma_dtlb_load, tma_dtlb_store, tma_info_bottleneck_memory_data_tlbs.

tma_info_branches_cond_nt [Bad; Branches; CodeGen; PGO]: BR_INST_RETIRED.COND_NTAKEN / BR_INST_RETIRED.ALL_BRANCHES - Fraction of branches that are non-taken conditionals.
tma_info_branches_cond_tk [Bad; Branches; CodeGen; PGO]: BR_INST_RETIRED.COND_TAKEN / BR_INST_RETIRED.ALL_BRANCHES - Fraction of branches that are taken conditionals.
tma_info_branches_jump [Bad; Branches]: (BR_INST_RETIRED.NEAR_TAKEN - BR_INST_RETIRED.COND_TAKEN - 2 * BR_INST_RETIRED.NEAR_CALL) / BR_INST_RETIRED.ALL_BRANCHES - Fraction of branches that are unconditional (direct or indirect) jumps.

tma_info_core_core_clks [SMT]: (CPU_CLK_UNHALTED.DISTRIBUTED if #SMT_on else tma_info_thread_clks) - Core actual clocks when any Logical Processor is active on the Physical Core.

tma_info_core_flopc [Flops; Ret]
  Expression: (FP_ARITH_INST_RETIRED.SCALAR + FP_ARITH_INST_RETIRED2.SCALAR_HALF + 2 * (FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED2.COMPLEX_SCALAR_HALF) + 4 * FP_ARITH_INST_RETIRED.4_FLOPS + 8 * (FP_ARITH_INST_RETIRED2.128B_PACKED_HALF + FP_ARITH_INST_RETIRED.8_FLOPS) + 16 * (FP_ARITH_INST_RETIRED2.256B_PACKED_HALF + FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE) + 32 * FP_ARITH_INST_RETIRED2.512B_PACKED_HALF) / tma_info_core_core_clks
  Description: Floating Point Operations per Cycle.

tma_info_core_fp_arith_utilization [Cor; Flops; HPC]
  Expression: (FP_ARITH_DISPATCHED.PORT_0 + FP_ARITH_DISPATCHED.PORT_1 + FP_ARITH_DISPATCHED.PORT_5) / (2 * tma_info_core_core_clks)
  Description: Actual per-core usage of the floating-point non-x87 execution units (regardless of precision or vector width). Values > 1 are possible due to ([BDW+] Fused Multiply-Add (FMA) counting - common; [ADL+] use of all of ADD/MUL/FMA in scalar or 128/256-bit vectors - less common).

tma_info_frontend_dsb_coverage [DSB; Fed; FetchBW; tma_issueFB]
  Expression: IDQ.DSB_UOPS / UOPS_ISSUED.ANY
  Threshold: tma_info_frontend_dsb_coverage < 0.7 & tma_info_thread_ipc / 6 > 0.35
  Description: Fraction of uops delivered by the DSB (a.k.a. Decoded ICache, or Uop Cache). Related metrics: tma_dsb_switches, tma_fetch_bandwidth, tma_info_botlnk_l2_dsb_bandwidth, tma_info_botlnk_l2_dsb_misses, tma_info_inst_mix_iptb, tma_lcp.
tma_info_frontend_dsb_switch_cost [DSBmiss]: DSB2MITE_SWITCHES.PENALTY_CYCLES / cpu@DSB2MITE_SWITCHES.PENALTY_CYCLES\,cmask\=1\,edge@ - Average number of cycles of a switch from the DSB fetch unit to the MITE fetch unit; see the DSB_Switches tree node for details.
tma_info_frontend_icache_miss_latency [Fed; FetchLat; IcMiss]: ICACHE_DATA.STALLS / cpu@ICACHE_DATA.STALLS\,cmask\=1\,edge@ - Average latency for L1 instruction cache misses.
tma_info_frontend_unknown_branch_cost [Fed]: INT_MISC.UNKNOWN_BRANCH_CYCLES / cpu@INT_MISC.UNKNOWN_BRANCH_CYCLES\,cmask\=1\,edge@ - Average number of cycles the front-end was delayed due to an Unknown Branch detection; see the Unknown_Branches node.

tma_info_inst_mix_iparith [Flops; InsType]
  Expression: INST_RETIRED.ANY / (FP_ARITH_INST_RETIRED.SCALAR + FP_ARITH_INST_RETIRED2.SCALAR + (FP_ARITH_INST_RETIRED.VECTOR + FP_ARITH_INST_RETIRED2.VECTOR))
  Threshold: tma_info_inst_mix_iparith < 10
  Description: Instructions per FP arithmetic instruction (lower number means higher occurrence rate). Values < 1 are possible due to intentional FMA double counting. Approximated prior to BDW.

tma_info_inst_mix_iparith_avx128 [Flops; FpVector; InsType]: INST_RETIRED.ANY / (FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED2.128B_PACKED_HALF); Threshold: < 10 - Instructions per FP arithmetic AVX/SSE 128-bit instruction (lower number means higher occurrence rate). Values < 1 are possible due to intentional FMA double counting.
tma_info_inst_mix_iparith_avx256 [Flops; FpVector; InsType]: INST_RETIRED.ANY / (FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE + FP_ARITH_INST_RETIRED2.256B_PACKED_HALF); Threshold: < 10 - Instructions per FP arithmetic AVX* 256-bit instruction (lower number means higher occurrence rate). Values < 1 are possible due to intentional FMA double counting.
tma_info_inst_mix_iparith_avx512 [Flops; FpVector; InsType]: INST_RETIRED.ANY / (FP_ARITH_INST_RETIRED.512B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE + FP_ARITH_INST_RETIRED2.512B_PACKED_HALF); Threshold: < 10 - Instructions per FP arithmetic AVX 512-bit instruction (lower number means higher occurrence rate). Values < 1 are possible due to intentional FMA double counting.
tma_info_inst_mix_iparith_scalar_hp [Flops; FpScalar; InsType; Server]: INST_RETIRED.ANY / FP_ARITH_INST_RETIRED2.SCALAR; Threshold: < 10 - Instructions per FP arithmetic scalar half-precision instruction (lower number means higher occurrence rate). Values < 1 are possible due to intentional FMA double counting.

tma_info_inst_mix_ipflop [Flops; InsType]
  Expression: INST_RETIRED.ANY / (FP_ARITH_INST_RETIRED.SCALAR + FP_ARITH_INST_RETIRED2.SCALAR_HALF + 2 * (FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED2.COMPLEX_SCALAR_HALF) + 4 * FP_ARITH_INST_RETIRED.4_FLOPS + 8 * (FP_ARITH_INST_RETIRED2.128B_PACKED_HALF + FP_ARITH_INST_RETIRED.8_FLOPS) + 16 * (FP_ARITH_INST_RETIRED2.256B_PACKED_HALF + FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE) + 32 * FP_ARITH_INST_RETIRED2.512B_PACKED_HALF)
  Threshold: tma_info_inst_mix_ipflop < 10
  Description: Instructions per Floating Point (FP) operation (lower number means higher occurrence rate).

tma_info_inst_mix_ippause [Flops; FpVector; InsType]: tma_info_inst_mix_instructions / CPU_CLK_UNHALTED.PAUSE_INST - Instructions per PAUSE (lower number means higher occurrence rate).

tma_info_inst_mix_iptb [Branches; Fed; FetchBW; Frontend; PGO; tma_issueFB]
  Expression: INST_RETIRED.ANY / BR_INST_RETIRED.NEAR_TAKEN
  Threshold: tma_info_inst_mix_iptb < 13
  Description: Instructions per taken branch. Related metrics: tma_dsb_switches, tma_fetch_bandwidth, tma_info_botlnk_l2_dsb_bandwidth, tma_info_botlnk_l2_dsb_misses, tma_info_frontend_dsb_coverage, tma_lcp.

tma_info_memory_l2mpki_rfo [CacheMisses; Offcore]: 1e3 * L2_RQSTS.RFO_MISS / INST_RETIRED.ANY - Offcore requests (L2 cache miss) per kilo instruction for demand RFOs.
tma_info_memory_latency_data_l2_mlp [Memory_BW; Offcore]: OFFCORE_REQUESTS_OUTSTANDING.DATA_RD / OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DATA_RD - Average parallel L2-cache-miss data reads.
tma_info_memory_latency_load_l2_mlp [Memory_BW; Offcore]: OFFCORE_REQUESTS_OUTSTANDING.DEMAND_DATA_RD / cpu@OFFCORE_REQUESTS_OUTSTANDING.DEMAND_DATA_RD\,cmask\=1@ - Average parallel L2-cache-miss demand loads.
tma_info_memory_latency_load_l3_miss_latency [Memory_Lat; Offcore]: OFFCORE_REQUESTS_OUTSTANDING.L3_MISS_DEMAND_DATA_RD / OFFCORE_REQUESTS.L3_MISS_DEMAND_DATA_RD - Average latency for L3-cache-miss demand loads.
tma_info_memory_load_miss_real_latency [Mem; MemoryBound; MemoryLat]: L1D_PEND_MISS.PENDING / MEM_LOAD_COMPLETED.L1_MISS_ANY - Actual average latency for L1 data-cache-miss demand load operations (in core cycles).
tma_info_memory_mix_bus_lock_pki [Mem]: 1e3 * SQ_MISC.BUS_LOCK / INST_RETIRED.ANY - "Bus lock" per kilo instruction.
tma_info_memory_mix_offcore_mwrite_any_pki [Offcore]: 1e3 * OCR.MODIFIED_WRITE.ANY_RESPONSE / tma_info_inst_mix_instructions - Off-core accesses per kilo instruction for modified write requests.
tma_info_memory_mix_offcore_read_any_pki [CacheHits; Offcore]: 1e3 * OCR.READS_TO_CORE.ANY_RESPONSE / tma_info_inst_mix_instructions - Off-core accesses per kilo instruction for reads-to-core requests (speculative; including in-core HW prefetches).
tma_info_memory_mix_offcore_read_l3m_pki [Offcore]: 1e3 * OCR.READS_TO_CORE.L3_MISS / tma_info_inst_mix_instructions - L3 cache misses per kilo instruction for reads-to-core requests (speculative; including in-core HW prefetches).

tma_info_memory_soc_r2c_dram_bw [HPC; Mem; MemoryBW; SoC]: 64 * OCR.READS_TO_CORE.DRAM / 1e9 / duration_time - Average DRAM bandwidth for Reads-to-Core (R2C), covering memory attached to the local and remote sockets. See R2C_Offcore_BW.
tma_info_memory_soc_r2c_l3m_bw [HPC; Mem; MemoryBW; SoC]: 64 * OCR.READS_TO_CORE.L3_MISS / 1e9 / duration_time - Average L3-cache-miss bandwidth for Reads-to-Core (R2C). This covers reads going to DRAM or other off-chip memory tiers. See R2C_Offcore_BW.
tma_info_memory_soc_r2c_offcore_bw (HPC;Mem;MemoryBW;SoC)
  expr: 64 * OCR.READS_TO_CORE.ANY_RESPONSE / 1e9 / duration_time
  desc: Average off-core access BW for Reads-to-Core (R2C). R2C accounts for demand or prefetch load/RFO/code accesses that fill data into the Core caches

tma_info_memory_tlb_page_walks_utilization (Mem;MemoryTLB)
  expr: (ITLB_MISSES.WALK_PENDING + DTLB_LOAD_MISSES.WALK_PENDING + DTLB_STORE_MISSES.WALK_PENDING) / (4 * tma_info_core_core_clks)
  flag: tma_info_memory_tlb_page_walks_utilization > 0.5
  desc: Utilization of the core's Page Walker(s) serving STLB misses triggered by instruction/Load/Store accesses

tma_info_pipeline_fetch_mite (Fed;FetchBW)
  expr: IDQ.MITE_UOPS / IDQ.MITE_CYCLES_ANY
  desc: Average number of uops fetched from MITE per cycle

tma_info_pipeline_ipassist (MicroSeq;Pipeline;Ret;Retire)
  expr: INST_RETIRED.ANY / ASSISTS.ANY
  flag: tma_info_pipeline_ipassist < 100e3
  desc: Instructions per microcode Assist invocation; see the Assists tree node for details (lower number means higher occurrence rate)

tma_info_pipeline_retire (Pipeline;Ret)
  expr: tma_retiring * tma_info_thread_slots / cpu@UOPS_RETIRED.SLOTS\,cmask\=1@
  desc: Average number of uops retired in cycles where at least one uop has retired

tma_info_pipeline_strings_cycles (MicroSeq;Pipeline;Ret)
  expr: INST_RETIRED.REP_ITERATION / cpu@UOPS_RETIRED.SLOTS\,cmask\=1@
  flag: tma_info_pipeline_strings_cycles > 0.1
  desc: Estimated fraction of retirement-cycles dealing with repeat instructions

tma_info_system_c0_wait (C0Wait)
  expr: CPU_CLK_UNHALTED.C0_WAIT / tma_info_thread_clks
  flag: tma_info_system_c0_wait > 0.05
  desc: Fraction of cycles the processor is waiting yet unhalted; covering the legacy PAUSE instruction as well as the C0.1 / C0.2 power-performance optimized states

tma_info_system_gflops (Cor;Flops;HPC)
  expr: (FP_ARITH_INST_RETIRED.SCALAR + FP_ARITH_INST_RETIRED2.SCALAR_HALF + 2 * (FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED2.COMPLEX_SCALAR_HALF) + 4 * FP_ARITH_INST_RETIRED.4_FLOPS + 8 * (FP_ARITH_INST_RETIRED2.128B_PACKED_HALF + FP_ARITH_INST_RETIRED.8_FLOPS) + 16 * (FP_ARITH_INST_RETIRED2.256B_PACKED_HALF + FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE) + 32 * FP_ARITH_INST_RETIRED2.512B_PACKED_HALF) / 1e9 / duration_time
  desc: Giga Floating Point Operations Per Second. Aggregate across all supported options of: FP precisions, scalar and vector instructions, vector-width

tma_info_system_io_read_bw (IoBW;MemOffcore;Server;SoC)
  expr: UNC_CHA_TOR_INSERTS.IO_PCIRDCUR * 64 / 1e9 / duration_time
  desc: Average IO (network or disk) bandwidth use for reads [GB / sec]: bandwidth of IO reads initiated by end device controllers that are requesting memory from the CPU

tma_info_system_io_write_bw (IoBW;MemOffcore;Server;SoC)
  expr: (UNC_CHA_TOR_INSERTS.IO_ITOM + UNC_CHA_TOR_INSERTS.IO_ITOMCACHENEAR) * 64 / 1e9 / duration_time
  desc: Average IO (network or disk) bandwidth use for writes [GB / sec]: bandwidth of IO writes initiated by end device controllers that are writing memory to the CPU

tma_info_system_mem_dram_read_latency (MemOffcore;MemoryLat;Server;SoC)
  expr: 1e9 * (UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_DDR / UNC_CHA_TOR_INSERTS.IA_MISS_DRD_DDR) / uncore_cha_0@event\=0x1@
  desc: Average latency of a data read request to external DRAM memory [in nanoseconds]. Accounts for demand loads and L1/L2 data-read prefetches

tma_info_system_mem_pmm_read_latency (MemOffcore;MemoryLat;Server;SoC)
  expr: (1e9 * (UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_PMM / UNC_CHA_TOR_INSERTS.IA_MISS_DRD_PMM) / uncore_cha_0@event\=0x1@ if #has_pmem > 0 else 0)
  desc: Average latency of a data read request to external 3D X-Point memory [in nanoseconds]. Accounts for demand loads and L1/L2 data-read prefetches

tma_info_system_mem_read_latency (Mem;MemoryLat;SoC)
  expr: 1e9 * (UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD / UNC_CHA_TOR_INSERTS.IA_MISS_DRD) / (tma_info_system_socket_clks / duration_time)
  desc: Average latency of a data read request to external memory (in nanoseconds). Accounts for demand loads and L1/L2 prefetches. ([RKL+] memory-controller only)

tma_info_system_smt_2t_utilization (SMT)
  expr: (1 - CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_DISTRIBUTED if #SMT_on else 0)
  desc: Fraction of cycles where both hardware Logical Processors were active

tma_info_system_socket_clks (SoC)
  expr: uncore_cha_0@event\=0x1@
  desc: Socket actual clocks when any core is active on that socket

tma_info_system_upi_data_transmit_bw (Server;SoC)
  expr: UNC_UPI_TxL_FLITS.ALL_DATA * 64 / 9 / 1e6
  desc: Cross-socket Ultra Path Interconnect (UPI) data transmit bandwidth for data only [MB / sec]

tma_info_thread_slots (TmaL1;tma_L1_group)
  expr: TOPDOWN.SLOTS
  desc: Total issue-pipeline slots (per-Physical Core till ICL; per-Logical Processor ICL onward)

tma_info_thread_slots_utilization (SMT;TmaL1;tma_L1_group)
  expr: (tma_info_thread_slots / (TOPDOWN.SLOTS / 2) if #SMT_on else 1)
  desc: Fraction of Physical Core issue-slots utilized by this Logical Processor

tma_info_thread_uoppi (Pipeline;Ret;Retire)
  expr: tma_retiring * tma_info_thread_slots / INST_RETIRED.ANY
  flag: tma_info_thread_uoppi > 1.05
  desc: Uops Per Instruction

tma_info_thread_uptb (Branches;Fed;FetchBW)
  expr: tma_retiring * tma_info_thread_slots / BR_INST_RETIRED.NEAR_TAKEN
  flag: tma_info_thread_uptb < 9
  desc: Uops per taken branch

tma_int_vector_128b (Compute;IntVector;Pipeline;TopdownL4;tma_L4_group;tma_int_operations_group;tma_issue2P) scale: 100%
  expr: (INT_VEC_RETIRED.ADD_128 + INT_VEC_RETIRED.VNNI_128) / (tma_retiring * tma_info_thread_slots)
  flag: tma_int_vector_128b > 0.1 & (tma_int_operations > 0.1 & tma_light_operations > 0.6)
  desc: This metric represents the 128-bit vector Integer ADD/SUB/SAD or VNNI (Vector Neural Network Instructions) uops fraction the CPU has retired. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_int_vector_256b, tma_port_0, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2

tma_int_vector_256b (Compute;IntVector;Pipeline;TopdownL4;tma_L4_group;tma_int_operations_group;tma_issue2P) scale: 100%
  expr: (INT_VEC_RETIRED.ADD_256 + INT_VEC_RETIRED.MUL_256 + INT_VEC_RETIRED.VNNI_256) / (tma_retiring * tma_info_thread_slots)
  flag: tma_int_vector_256b > 0.1 & (tma_int_operations > 0.1 & tma_light_operations > 0.6)
  desc: This metric represents the 256-bit vector Integer ADD/SUB/SAD/MUL or VNNI (Vector Neural Network Instructions) uops fraction the CPU has retired. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_int_vector_128b, tma_port_0, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2
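The R2C bandwidth expressions above all follow the same shape: an OCR.READS_TO_CORE.* event counts 64-byte cache lines, so bandwidth in GB/s is 64 * lines / 1e9 / seconds. A small hedged sketch (illustrative values, not perf's implementation):

    # BW [GB/s] = 64 bytes per cache line * lines / 1e9 / elapsed seconds
    def r2c_bw_gbps(lines: int, seconds: float) -> float:
        return 64 * lines / 1e9 / seconds

    # e.g. 500M reads-to-core lines over a 2-second window -> 16.0 GB/s
    print(r2c_bw_gbps(lines=500_000_000, seconds=2.0))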
tma_l1_bound (CacheHits;MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_issueL1;tma_issueMC;tma_memory_bound_group) scale: 100%
  expr: max((EXE_ACTIVITY.BOUND_ON_LOADS - MEMORY_ACTIVITY.STALLS_L1D_MISS) / tma_info_thread_clks, 0)
  flag: tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)
  desc: This metric estimates how often the CPU was stalled without loads missing the L1 data cache. The L1 data cache typically has the shortest latency. However; in certain cases like loads blocked on older stores; a load might suffer high latency even though it is being satisfied by the L1. Another example is loads that miss in the TLB. These cases are characterized by execution-unit stalls while some non-completed demand load lives in the machine without that demand load missing the L1 cache. Sample with: MEM_LOAD_RETIRED.L1_HIT_PS;MEM_LOAD_RETIRED.FB_HIT_PS. Related metrics: tma_clears_resteers, tma_machine_clears, tma_microcode_sequencer, tma_ms_switches, tma_ports_utilized_1

tma_l1_hit_latency (BvML;MemoryLat;TopdownL4;tma_L4_group;tma_l1_bound_group) scale: 100%
  expr: min(2 * (MEM_INST_RETIRED.ALL_LOADS - MEM_LOAD_RETIRED.FB_HIT - MEM_LOAD_RETIRED.L1_MISS) * 20 / 100, max(CYCLE_ACTIVITY.CYCLES_MEM_ANY - MEMORY_ACTIVITY.CYCLES_L1D_MISS, 0)) / tma_info_thread_clks
  flag: tma_l1_hit_latency > 0.1 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))
  desc: This metric roughly estimates the fraction of cycles with demand load accesses that hit the L1 cache. The short latency of the L1 data cache may be exposed in pointer-chasing memory access patterns, for example. Sample with: MEM_LOAD_RETIRED.L1_HIT

tma_l2_bound (BvML;CacheHits;MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_memory_bound_group) scale: 100%
  expr: (MEMORY_ACTIVITY.STALLS_L1D_MISS - MEMORY_ACTIVITY.STALLS_L2_MISS) / tma_info_thread_clks
  flag: tma_l2_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)
  desc: This metric estimates how often the CPU was stalled due to L2 cache accesses by loads. Avoiding cache misses (i.e. L1 misses/L2 hits) can improve the latency and increase performance. Sample with: MEM_LOAD_RETIRED.L2_HIT_PS

tma_l3_bound (CacheHits;MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_memory_bound_group) scale: 100%
  expr: (MEMORY_ACTIVITY.STALLS_L2_MISS - MEMORY_ACTIVITY.STALLS_L3_MISS) / tma_info_thread_clks
  flag: tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)
  desc: This metric estimates how often the CPU was stalled due to load accesses to the L3 cache or contention with a sibling Core. Avoiding cache misses (i.e. L2 misses/L3 hits) can improve the latency and increase performance. Sample with: MEM_LOAD_RETIRED.L3_HIT_PS

tma_l3_hit_latency (BvML;MemoryLat;TopdownL4;tma_L4_group;tma_issueLat;tma_l3_bound_group) scale: 100%
  expr: 32.6 * tma_info_system_core_frequency * (MEM_LOAD_RETIRED.L3_HIT * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS / 2)) / tma_info_thread_clks
  flag: tma_l3_hit_latency > 0.1 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))
  desc: This metric estimates the fraction of cycles with demand load accesses that hit the L3 cache under unloaded scenarios (possibly L3 latency limited). Avoiding private cache misses (i.e. L2 misses/L3 hits) will improve the latency; reduce contention with sibling physical cores and increase performance. Note the value of this node may overlap with its siblings. Sample with: MEM_LOAD_RETIRED.L3_HIT_PS. Related metrics: tma_info_bottleneck_cache_memory_latency, tma_mem_latency

tma_light_operations (Default;Retire;TmaL2;TopdownL2;tma_L2_group;tma_retiring_group) scale: 100% [Default; TopdownL2]
  expr: max(0, tma_retiring - tma_heavy_operations)
  flag: tma_light_operations > 0.6
  desc: This metric represents the fraction of slots where the CPU was retiring light-weight operations -- instructions that require no more than one uop (micro-operation). This correlates with the total number of instructions used by the program. A uops-per-instruction ratio (see the UopPI metric) of 1 or less should be expected for decently optimized code running on Intel Core/Xeon products. While this often indicates efficient X86 instructions were executed; a high value does not necessarily mean better performance cannot be achieved. ([ICL+] Note this may undercount due to approximation using indirect events; [ADL+].) Sample with: INST_RETIRED.PREC_DIST

tma_load_op_utilization (TopdownL5;tma_L5_group;tma_ports_utilized_3m_group) scale: 100%
  expr: UOPS_DISPATCHED.PORT_2_3_10 / (3 * tma_info_core_core_clks)
  flag: tma_load_op_utilization > 0.6
  desc: This metric represents the Core fraction of cycles the CPU dispatched uops on execution ports for Load operations. Sample with: UOPS_DISPATCHED.PORT_2_3_10

tma_local_mem (Server;TopdownL5;tma_L5_group;tma_mem_latency_group) scale: 100%
  expr: 72 * tma_info_system_core_frequency * MEM_LOAD_L3_MISS_RETIRED.LOCAL_DRAM * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS / 2) / tma_info_thread_clks
  flag: tma_local_mem > 0.1 & (tma_mem_latency > 0.1 & (tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)))
  desc: This metric estimates the fraction of cycles while the memory subsystem was handling loads from local memory. Caching will improve the latency and increase performance. Sample with: MEM_LOAD_L3_MISS_RETIRED.LOCAL_DRAM
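The L2/L3_Bound expressions above attribute stall cycles to a cache level by subtracting "stalled while a miss to the next level was in flight" counters. A hedged Python sketch with made-up counter values:

    # Illustrative counters; the decomposition is (stalls at level N miss)
    # minus (stalls at level N+1 miss), normalized by thread clocks.
    c = {
        "MEMORY_ACTIVITY.STALLS_L1D_MISS": 400e6,
        "MEMORY_ACTIVITY.STALLS_L2_MISS": 250e6,
        "MEMORY_ACTIVITY.STALLS_L3_MISS": 100e6,
        "clks": 1e9,  # tma_info_thread_clks
    }
    l2_bound = (c["MEMORY_ACTIVITY.STALLS_L1D_MISS"] - c["MEMORY_ACTIVITY.STALLS_L2_MISS"]) / c["clks"]
    l3_bound = (c["MEMORY_ACTIVITY.STALLS_L2_MISS"] - c["MEMORY_ACTIVITY.STALLS_L3_MISS"]) / c["clks"]
    print(f"L2_Bound = {l2_bound:.0%}, L3_Bound = {l3_bound:.0%}")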
tma_lock_latency (Offcore;TopdownL4;tma_L4_group;tma_issueRFO;tma_l1_bound_group) scale: 100%
  expr: (16 * max(0, MEM_INST_RETIRED.LOCK_LOADS - L2_RQSTS.ALL_RFO) + MEM_INST_RETIRED.LOCK_LOADS / MEM_INST_RETIRED.ALL_STORES * (10 * L2_RQSTS.RFO_HIT + min(CPU_CLK_UNHALTED.THREAD, OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_RFO))) / tma_info_thread_clks
  flag: tma_lock_latency > 0.2 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))
  desc: This metric represents the fraction of cycles the CPU spent handling cache misses due to lock operations. Due to the microarchitecture handling of locks; they are classified as L1_Bound regardless of what memory source satisfied them. Sample with: MEM_INST_RETIRED.LOCK_LOADS. Related metrics: tma_store_latency

tma_machine_clears (BadSpec;BvMS;Default;MachineClears;TmaL2;TopdownL2;tma_L2_group;tma_bad_speculation_group;tma_issueMC;tma_issueSyncxn) scale: 100% [Default; TopdownL2]
  expr: max(0, tma_bad_speculation - tma_branch_mispredicts)
  flag: tma_machine_clears > 0.1 & tma_bad_speculation > 0.15
  desc: This metric represents the fraction of slots the CPU has wasted due to Machine Clears. These slots are either wasted by uops fetched prior to the clear; or stalls the out-of-order portion of the machine needs to recover its state after the clear. For example; this can happen due to memory ordering Nukes (e.g. Memory Disambiguation) or Self-Modifying-Code (SMC) nukes. Sample with: MACHINE_CLEARS.COUNT. Related metrics: tma_clears_resteers, tma_contested_accesses, tma_data_sharing, tma_false_sharing, tma_l1_bound, tma_microcode_sequencer, tma_ms_switches, tma_remote_cache

tma_mba_stalls (MemoryBW;Offcore;Server;TopdownL5;tma_L5_group;tma_mem_bandwidth_group) scale: 100%
  expr: INT_MISC.MBA_STALLS / tma_info_thread_clks
  flag: tma_mba_stalls > 0.1 & (tma_mem_bandwidth > 0.2 & (tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)))
  desc: This metric estimates the fraction of cycles where the core's performance was likely hurt due to the Memory Bandwidth Allocation feature (RDT's memory bandwidth throttling)

tma_memory_bound (Backend;Default;TmaL2;TopdownL2;tma_L2_group;tma_backend_bound_group) scale: 100% [Default; TopdownL2]
  expr: topdown\-mem\-bound / (topdown\-fe\-bound + topdown\-bad\-spec + topdown\-retiring + topdown\-be\-bound) + 0 * tma_info_thread_slots
  flag: tma_memory_bound > 0.2 & tma_backend_bound > 0.2
  desc: This metric represents the fraction of slots where the Memory subsystem within the Backend was a bottleneck. Memory Bound estimates the fraction of slots where the pipeline is likely stalled due to demand load or store instructions. This accounts mainly for (1) non-completed in-flight memory demand loads, which coincide with execution-unit starvation; in addition to (2) cases where stores could impose backpressure on the pipeline when many of them get buffered at the same time (the less common of the two)

tma_memory_fence (TopdownL4;tma_L4_group;tma_serializing_operation_group) scale: 100%
  expr: 13 * MISC2_RETIRED.LFENCE / tma_info_thread_clks
  flag: tma_memory_fence > 0.05 & (tma_serializing_operation > 0.1 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))
  desc: This metric represents the fraction of cycles the CPU was stalled due to LFENCE instructions

tma_memory_operations (Pipeline;TopdownL3;tma_L3_group;tma_light_operations_group) scale: 100%
  expr: tma_light_operations * MEM_UOP_RETIRED.ANY / (tma_retiring * tma_info_thread_slots)
  flag: tma_memory_operations > 0.1 & tma_light_operations > 0.6
  desc: This metric represents the fraction of slots where the CPU was retiring memory operations -- uops for memory load or store accesses

tma_microcode_sequencer (MicroSeq;TopdownL3;tma_L3_group;tma_heavy_operations_group;tma_issueMC;tma_issueMS) scale: 100%
  expr: UOPS_RETIRED.MS / tma_info_thread_slots
  flag: tma_microcode_sequencer > 0.05 & tma_heavy_operations > 0.1
  desc: This metric represents the fraction of slots the CPU was retiring uops fetched by the Microcode Sequencer (MS) unit. The MS is used for CISC instructions not supported by the default decoders (like repeat move strings; or CPUID); or by microcode assists used to address some operation modes (like in Floating Point assists). These cases can often be avoided. Sample with: UOPS_RETIRED.MS. Related metrics: tma_clears_resteers, tma_info_bottleneck_irregular_overhead, tma_l1_bound, tma_machine_clears, tma_ms_switches

tma_mispredicts_resteers (BadSpec;BrMispredicts;BvMP;TopdownL4;tma_L4_group;tma_branch_resteers_group;tma_issueBM) scale: 100%
  expr: tma_branch_mispredicts / tma_bad_speculation * INT_MISC.CLEAR_RESTEER_CYCLES / tma_info_thread_clks
  flag: tma_mispredicts_resteers > 0.05 & (tma_branch_resteers > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15))
  desc: This metric represents the fraction of cycles the CPU was stalled due to Branch Resteers as a result of Branch Misprediction at the execution stage. Sample with: INT_MISC.CLEAR_RESTEER_CYCLES. Related metrics: tma_branch_mispredicts, tma_info_bad_spec_branch_misprediction_cost, tma_info_bottleneck_mispredictions

tma_mite (DSBmiss;FetchBW;TopdownL3;tma_L3_group;tma_fetch_bandwidth_group) scale: 100%
  expr: (IDQ.MITE_CYCLES_ANY - IDQ.MITE_CYCLES_OK) / tma_info_core_core_clks / 2
  flag: tma_mite > 0.1 & tma_fetch_bandwidth > 0.2
  desc: This metric represents the Core fraction of cycles in which the CPU was likely limited by the MITE pipeline (the legacy decode pipeline). This pipeline is used for code that was not pre-cached in the DSB or LSD. For example; inefficiencies due to asymmetric decoders; use of long immediates or LCP can manifest as a MITE fetch bandwidth bottleneck. Sample with: FRONTEND_RETIRED.ANY_DSB_MISS
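The topdown\-*-style expressions above (e.g. tma_memory_bound) normalize one raw counter by the sum of the four level-1 category counters. A hedged sketch of that normalization with illustrative numbers:

    # Each level-2 fraction = its counter / sum of the four level-1 counters.
    td = {"fe_bound": 2e9, "bad_spec": 1e9, "retiring": 4e9, "be_bound": 3e9,
          "mem_bound": 2e9}  # made-up values
    total_slots = td["fe_bound"] + td["bad_spec"] + td["retiring"] + td["be_bound"]
    memory_bound = td["mem_bound"] / total_slots
    print(f"Memory_Bound = {memory_bound:.0%}")  # fraction of all issue slots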
tma_mixing_vectors (TopdownL5;tma_L5_group;tma_issueMV;tma_ports_utilized_0_group) scale: 100%
  expr: 160 * ASSISTS.SSE_AVX_MIX / tma_info_thread_clks
  flag: tma_mixing_vectors > 0.05
  desc: This metric estimates the penalty in terms of percentage of ([SKL+] injected blend uops out of all Uops Issued -- the Count Domain; [ADL+] cycles). Usually a Mixing_Vectors over 5% is worth investigating. Read more on this topic in Appendix B1 of the Optimizations Guide. Related metrics: tma_ms_switches

tma_ms_switches (FetchLat;MicroSeq;TopdownL3;tma_L3_group;tma_fetch_latency_group;tma_issueMC;tma_issueMS;tma_issueMV;tma_issueSO) scale: 100%
  expr: 3 * cpu@UOPS_RETIRED.MS\,cmask\=1\,edge@ / (UOPS_RETIRED.SLOTS / UOPS_ISSUED.ANY) / tma_info_thread_clks
  flag: tma_ms_switches > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15)
  desc: This metric estimates the fraction of cycles when the CPU was stalled due to switches of uop delivery to the Microcode Sequencer (MS). Commonly used instructions are optimized for delivery by the DSB (decoded i-cache) or MITE (legacy instruction decode) pipelines. Certain operations cannot be handled natively by the execution pipeline; and must be performed by microcode (small programs injected into the execution stream). Switching to the MS too often can negatively impact performance. The MS is designated to deliver long uop flows required by CISC instructions like CPUID; or uncommon conditions like Floating Point Assists when dealing with Denormals. Sample with: IDQ.MS_SWITCHES. Related metrics: tma_clears_resteers, tma_info_bottleneck_irregular_overhead, tma_l1_bound, tma_machine_clears, tma_microcode_sequencer, tma_mixing_vectors, tma_serializing_operation

tma_non_fused_branches (Branches;BvBO;Pipeline;TopdownL3;tma_L3_group;tma_light_operations_group) scale: 100%
  expr: tma_light_operations * (BR_INST_RETIRED.ALL_BRANCHES - INST_RETIRED.MACRO_FUSED) / (tma_retiring * tma_info_thread_slots)
  flag: tma_non_fused_branches > 0.1 & tma_light_operations > 0.6
  desc: This metric represents the fraction of slots where the CPU was retiring branch instructions that were not fused. Non-conditional branches like direct JMP or CALL would count here. Can be used to examine fusible conditional jumps that were not fused

tma_nop_instructions (BvBO;Pipeline;TopdownL4;tma_L4_group;tma_other_light_ops_group) scale: 100%
  expr: tma_light_operations * INST_RETIRED.NOP / (tma_retiring * tma_info_thread_slots)
  flag: tma_nop_instructions > 0.1 & (tma_other_light_ops > 0.3 & tma_light_operations > 0.6)
  desc: This metric represents the fraction of slots where the CPU was retiring NOP (no op) instructions. Compilers often use NOPs for certain address alignments - e.g. the start address of a function or loop body. Sample with: INST_RETIRED.NOP

tma_page_faults (TopdownL5;tma_L5_group;tma_assists_group) scale: 100%
  expr: 99 * ASSISTS.PAGE_FAULT / tma_info_thread_slots
  flag: tma_page_faults > 0.05
  desc: This metric roughly estimates the fraction of slots the CPU retired uops as a result of handling Page Faults. A Page Fault may apply on first application access to a memory page. Note that operating-system handling of page faults accounts for the majority of its cost

tma_port_0 (Compute;TopdownL6;tma_L6_group;tma_alu_op_utilization_group;tma_issue2P) scale: 100%
  expr: UOPS_DISPATCHED.PORT_0 / tma_info_core_core_clks
  flag: tma_port_0 > 0.6
  desc: This metric represents the Core fraction of cycles the CPU dispatched uops on execution port 0 ([SNB+] ALU; [HSW+] ALU and 2nd branch). Sample with: UOPS_DISPATCHED.PORT_0. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_int_vector_128b, tma_int_vector_256b, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2

tma_port_1 (TopdownL6;tma_L6_group;tma_alu_op_utilization_group;tma_issue2P) scale: 100%
  expr: UOPS_DISPATCHED.PORT_1 / tma_info_core_core_clks
  flag: tma_port_1 > 0.6
  desc: This metric represents the Core fraction of cycles the CPU dispatched uops on execution port 1 (ALU). Sample with: UOPS_DISPATCHED.PORT_1. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_int_vector_128b, tma_int_vector_256b, tma_port_0, tma_port_5, tma_port_6, tma_ports_utilized_2

tma_port_6 (TopdownL6;tma_L6_group;tma_alu_op_utilization_group;tma_issue2P) scale: 100%
  expr: UOPS_DISPATCHED.PORT_6 / tma_info_core_core_clks
  flag: tma_port_6 > 0.6
  desc: This metric represents the Core fraction of cycles the CPU dispatched uops on execution port 6 ([HSW+] Primary Branch and simple ALU). Sample with: UOPS_DISPATCHED.PORT_6. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_int_vector_128b, tma_int_vector_256b, tma_port_0, tma_port_1, tma_port_5, tma_ports_utilized_2

tma_ports_utilization (PortsUtil;TopdownL3;tma_L3_group;tma_core_bound_group) scale: 100%
  expr: ((tma_ports_utilized_0 * tma_info_thread_clks + (EXE_ACTIVITY.1_PORTS_UTIL + tma_retiring * cpu@EXE_ACTIVITY.2_PORTS_UTIL\,umask\=0xc@)) / tma_info_thread_clks if ARITH.DIV_ACTIVE < CYCLE_ACTIVITY.STALLS_TOTAL - EXE_ACTIVITY.BOUND_ON_LOADS else (EXE_ACTIVITY.1_PORTS_UTIL + tma_retiring * cpu@EXE_ACTIVITY.2_PORTS_UTIL\,umask\=0xc@) / tma_info_thread_clks)
  flag: tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2)
  desc: This metric estimates the fraction of cycles the CPU performance was potentially limited due to Core computation issues (non divider-related). Two distinct categories can be attributed to this metric: (1) heavy data-dependency among contiguous instructions - such cases are often referred to as low Instruction Level Parallelism (ILP); (2) contention on some hardware execution unit other than the Divider. For example; when there are too many multiply operations
tma_ports_utilized_0 (PortsUtil;TopdownL4;tma_L4_group;tma_ports_utilization_group) scale: 100%
  expr: (EXE_ACTIVITY.EXE_BOUND_0_PORTS + max(cpu@RS.EMPTY\,umask\=1@ - RESOURCE_STALLS.SCOREBOARD, 0)) / tma_info_thread_clks * (CYCLE_ACTIVITY.STALLS_TOTAL - EXE_ACTIVITY.BOUND_ON_LOADS) / tma_info_thread_clks
  flag: tma_ports_utilized_0 > 0.2 & (tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))
  desc: This metric represents the fraction of cycles the CPU executed no uops on any execution port (Logical Processor cycles since ICL, Physical Core cycles otherwise). Long-latency instructions like divides may contribute to this metric

tma_ports_utilized_1 (PortsUtil;TopdownL4;tma_L4_group;tma_issueL1;tma_ports_utilization_group) scale: 100%
  expr: EXE_ACTIVITY.1_PORTS_UTIL / tma_info_thread_clks
  flag: tma_ports_utilized_1 > 0.2 & (tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))
  desc: This metric represents the fraction of cycles where the CPU executed a total of 1 uop per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise). This can be due to heavy data-dependency among software instructions; or oversubscribing a particular hardware resource. In some other cases with high 1_Port_Utilized and L1_Bound; this metric can point to an L1 data-cache latency bottleneck that may not necessarily manifest as complete execution starvation (due to the short L1 latency, e.g. walking a linked list) - looking at the assembly can be helpful. Sample with: EXE_ACTIVITY.1_PORTS_UTIL. Related metrics: tma_l1_bound

tma_ports_utilized_2 (PortsUtil;TopdownL4;tma_L4_group;tma_issue2P;tma_ports_utilization_group) scale: 100%
  expr: EXE_ACTIVITY.2_PORTS_UTIL / tma_info_thread_clks
  flag: tma_ports_utilized_2 > 0.15 & (tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))
  desc: This metric represents the fraction of cycles the CPU executed a total of 2 uops per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise). Loop Vectorization - most compilers feature auto-Vectorization options today - reduces pressure on the execution ports as multiple elements are calculated with the same uop. Sample with: EXE_ACTIVITY.2_PORTS_UTIL. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_int_vector_128b, tma_int_vector_256b, tma_port_0, tma_port_1, tma_port_5, tma_port_6

tma_ports_utilized_3m (BvCB;PortsUtil;TopdownL4;tma_L4_group;tma_ports_utilization_group) scale: 100%
  expr: UOPS_EXECUTED.CYCLES_GE_3 / tma_info_thread_clks
  flag: tma_ports_utilized_3m > 0.4 & (tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))
  desc: This metric represents the fraction of cycles the CPU executed a total of 3 or more uops per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise). Sample with: UOPS_EXECUTED.CYCLES_GE_3

tma_remote_cache (Offcore;Server;Snoop;TopdownL5;tma_L5_group;tma_issueSyncxn;tma_mem_latency_group) scale: 100%
  expr: (133 * tma_info_system_core_frequency * MEM_LOAD_L3_MISS_RETIRED.REMOTE_HITM + 133 * tma_info_system_core_frequency * MEM_LOAD_L3_MISS_RETIRED.REMOTE_FWD) * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS / 2) / tma_info_thread_clks
  flag: tma_remote_cache > 0.05 & (tma_mem_latency > 0.1 & (tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)))
  desc: This metric estimates the fraction of cycles while the memory subsystem was handling loads from remote cache in other sockets, including synchronization issues. This is often caused by non-optimal NUMA allocations. #link to NUMA article. Sample with: MEM_LOAD_L3_MISS_RETIRED.REMOTE_HITM_PS;MEM_LOAD_L3_MISS_RETIRED.REMOTE_FWD_PS. Related metrics: tma_contested_accesses, tma_data_sharing, tma_false_sharing, tma_machine_clears

tma_remote_mem (Server;Snoop;TopdownL5;tma_L5_group;tma_mem_latency_group) scale: 100%
  expr: 153 * tma_info_system_core_frequency * MEM_LOAD_L3_MISS_RETIRED.REMOTE_DRAM * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS / 2) / tma_info_thread_clks
  flag: tma_remote_mem > 0.1 & (tma_mem_latency > 0.1 & (tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)))
  desc: This metric estimates the fraction of cycles while the memory subsystem was handling loads from remote memory. This is often caused by non-optimal NUMA allocations. #link to NUMA article. Sample with: MEM_LOAD_L3_MISS_RETIRED.REMOTE_DRAM_PS

tma_retiring (BvUW;Default;TmaL1;TopdownL1;tma_L1_group) scale: 100% [Default; TopdownL1]
  expr: topdown\-retiring / (topdown\-fe\-bound + topdown\-bad\-spec + topdown\-retiring + topdown\-be\-bound) + 0 * tma_info_thread_slots
  flag: tma_retiring > 0.7 | tma_heavy_operations > 0.1
  desc: This category represents the fraction of slots utilized by useful work, i.e. issued uops that eventually get retired. Ideally; all pipeline slots would be attributed to the Retiring category. Retiring of 100% would indicate the maximum Pipeline_Width throughput was achieved. Maximizing Retiring typically increases the Instructions-per-cycle (see IPC metric). Note that a high Retiring value does not necessarily mean there is no room for more performance. For example; Heavy-operations or Microcode Assists are categorized under Retiring. They often indicate suboptimal performance and can often be optimized or avoided. Sample with: UOPS_RETIRED.SLOTS

tma_serializing_operation (BvIO;PortsUtil;TopdownL3;tma_L3_group;tma_core_bound_group;tma_issueSO) scale: 100%
  expr: RESOURCE_STALLS.SCOREBOARD / tma_info_thread_clks + tma_c02_wait
  flag: tma_serializing_operation > 0.1 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2)
  desc: This metric represents the fraction of cycles the CPU issue-pipeline was stalled due to serializing operations. Instructions like CPUID; WRMSR or LFENCE serialize the out-of-order execution, which may limit performance. Sample with: RESOURCE_STALLS.SCOREBOARD. Related metrics: tma_ms_switches
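tma_remote_mem above follows a common latency-cost pattern: a per-event cycle cost (153, scaled by core frequency) times the event count, times a fill-buffer correction factor, normalized by thread clocks. A hedged sketch with illustrative inputs:

    # Made-up inputs; the structure mirrors the expr above.
    core_freq_ratio = 1.0        # tma_info_system_core_frequency term
    remote_dram_loads = 2e6      # MEM_LOAD_L3_MISS_RETIRED.REMOTE_DRAM
    fb_hit, l1_miss = 30e6, 60e6 # MEM_LOAD_RETIRED.FB_HIT / .L1_MISS
    clks = 1e9                   # tma_info_thread_clks

    cost_per_load = 153 * core_freq_ratio
    remote_mem = cost_per_load * remote_dram_loads * (1 + fb_hit / l1_miss / 2) / clks
    print(f"tma_remote_mem ~= {remote_mem:.1%} of cycles")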
tma_shuffles_256b (HPC;Pipeline;TopdownL4;tma_L4_group;tma_other_light_ops_group) scale: 100%
  expr: tma_light_operations * INT_VEC_RETIRED.SHUFFLES / (tma_retiring * tma_info_thread_slots)
  flag: tma_shuffles_256b > 0.1 & (tma_other_light_ops > 0.3 & tma_light_operations > 0.6)
  desc: This metric represents the fraction of slots where the CPU was retiring Shuffle operations of 256-bit vector size (FP or Integer). Shuffles may incur slow cross "vector lane" data transfers

tma_slow_pause (TopdownL4;tma_L4_group;tma_serializing_operation_group) scale: 100%
  expr: CPU_CLK_UNHALTED.PAUSE / tma_info_thread_clks
  flag: tma_slow_pause > 0.05 & (tma_serializing_operation > 0.1 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))
  desc: This metric represents the fraction of cycles the CPU was stalled due to PAUSE instructions. Sample with: CPU_CLK_UNHALTED.PAUSE_INST

tma_split_loads (TopdownL4;tma_L4_group;tma_l1_bound_group) scale: 100%
  expr: tma_info_memory_load_miss_real_latency * LD_BLOCKS.NO_SR / tma_info_thread_clks
  flag: tma_split_loads > 0.2 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))
  desc: This metric estimates the fraction of cycles handling memory load split accesses - loads that cross a 64-byte cache line boundary. Sample with: MEM_INST_RETIRED.SPLIT_LOADS_PS

tma_sq_full (BvMS;MemoryBW;Offcore;TopdownL4;tma_L4_group;tma_issueBW;tma_l3_bound_group) scale: 100%
  expr: (XQ.FULL_CYCLES + L1D_PEND_MISS.L2_STALLS) / tma_info_thread_clks
  flag: tma_sq_full > 0.3 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))
  desc: This metric measures the fraction of cycles where the Super Queue (SQ) was full, taking into account all request types and both hardware SMT threads (Logical Processors). Related metrics: tma_fb_full, tma_info_bottleneck_cache_memory_bandwidth, tma_info_system_dram_bw_use, tma_mem_bandwidth

tma_store_latency (BvML;MemoryLat;Offcore;TopdownL4;tma_L4_group;tma_issueRFO;tma_issueSL;tma_store_bound_group) scale: 100%
  expr: (MEM_STORE_RETIRED.L2_HIT * 10 * (1 - MEM_INST_RETIRED.LOCK_LOADS / MEM_INST_RETIRED.ALL_STORES) + (1 - MEM_INST_RETIRED.LOCK_LOADS / MEM_INST_RETIRED.ALL_STORES) * min(CPU_CLK_UNHALTED.THREAD, OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_RFO)) / tma_info_thread_clks
  flag: tma_store_latency > 0.1 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))
  desc: This metric estimates the fraction of cycles the CPU spent handling L1D store misses. Store accesses usually impact out-of-order core performance less; however; holding resources for a longer time can lead to undesired implications (e.g. contention on L1D fill-buffer entries - see FB_Full). Related metrics: tma_fb_full, tma_lock_latency

tma_store_op_utilization (TopdownL5;tma_L5_group;tma_ports_utilized_3m_group) scale: 100%
  expr: (UOPS_DISPATCHED.PORT_4_9 + UOPS_DISPATCHED.PORT_7_8) / (4 * tma_info_core_core_clks)
  flag: tma_store_op_utilization > 0.6
  desc: This metric represents the Core fraction of cycles the CPU dispatched uops on execution ports for Store operations. Sample with: UOPS_DISPATCHED.PORT_7_8

tma_streaming_stores (MemoryBW;Offcore;TopdownL4;tma_L4_group;tma_issueSmSt;tma_store_bound_group) scale: 100%
  expr: 9 * OCR.STREAMING_WR.ANY_RESPONSE / tma_info_thread_clks
  flag: tma_streaming_stores > 0.2 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))
  desc: This metric estimates how often the CPU was stalled due to Streaming store memory accesses; Streaming stores optimize out a read request required by RFO stores. Even though store accesses do not typically stall out-of-order CPUs; there are a few cases where stores can lead to actual stalls. This metric will be flagged should Streaming stores be a bottleneck. Sample with: OCR.STREAMING_WR.ANY_RESPONSE. Related metrics: tma_fb_full

tma_unknown_branches (BigFootprint;BvBC;FetchLat;TopdownL4;tma_L4_group;tma_branch_resteers_group) scale: 100%
  expr: INT_MISC.UNKNOWN_BRANCH_CYCLES / tma_info_thread_clks
  flag: tma_unknown_branches > 0.05 & (tma_branch_resteers > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15))
  desc: This metric represents the fraction of cycles the CPU was stalled due to new branch address clears. These are fetched branches the Branch Prediction Unit was unable to recognize (e.g. the first time the branch is fetched, or hitting the BTB capacity limit), hence called Unknown Branches. Sample with: FRONTEND_RETIRED.UNKNOWN_BRANCH
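The flag expressions in these records are ordinary boolean formulas over metric names, using '&' for conjunction. A hedged sketch of how such a string could be evaluated (this is an illustration, not perf's evaluator; the metric values are made up):

    # Rewrite '&' to Python's 'and' and evaluate against a dict of values.
    metrics = {"tma_streaming_stores": 0.25, "tma_store_bound": 0.3,
               "tma_memory_bound": 0.4, "tma_backend_bound": 0.5}
    flag_expr = ("tma_streaming_stores > 0.2 & (tma_store_bound > 0.2 & "
                 "(tma_memory_bound > 0.2 & tma_backend_bound > 0.2))")
    flagged = eval(flag_expr.replace("&", " and "), {}, dict(metrics))
    print("flagged:", flagged)  # True for these example values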
tma_mem_bandwidth_group
  desc: Metrics contributing to tma_mem_bandwidth category

C6_Module_Residency (Power) scale: 100%
  expr: cstate_module@c6\-residency@ / TSC
  desc: C6 residency percent per module

cpu_utilization scale: 100%
  expr: tma_info_system_cpu_utilization
  desc: Percentage of time spent in the active CPU power state C0

l1_i_code_read_misses_with_prefetches_per_instr (unit: per_instr)
  expr: ICACHE.MISSES / INST_RETIRED.ANY
  desc: Ratio of the number of code read requests missing the L1 instruction cache (includes prefetches) to the total number of completed instructions

l2_mpi (unit: per_instr)
  expr: LONGEST_LAT_CACHE.REFERENCE / INST_RETIRED.ANY
  desc: Ratio of the number of requests missing the L2 cache (includes code+data+rfo with prefetches) to the total number of completed instructions

llc_code_read_mpi_demand_plus_prefetch (unit: per_instr)
  expr: (UNC_CHA_TOR_INSERTS.IA_MISS_CRD + UNC_CHA_TOR_INSERTS.IA_MISS_CRD_PREF) / INST_RETIRED.ANY
  desc: Ratio of the number of code read requests missing the last level core cache (includes demand with prefetches) to the total number of completed instructions

llc_data_read_mpi_demand_plus_prefetch (unit: per_instr)
  expr: (UNC_CHA_TOR_INSERTS.IA_MISS_DRD_OPT + UNC_CHA_TOR_INSERTS.IA_MISS_DRD_OPT_PREF + UNC_CHA_TOR_INSERTS.IA_MISS_LLCPREFDATA) / INST_RETIRED.ANY
  desc: Ratio of the number of data read requests missing the last level core cache (includes demand with prefetches) to the total number of completed instructions

llc_demand_data_read_miss_latency (unit: ns)
  expr: 1e9 * (UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_OPT / UNC_CHA_TOR_INSERTS.IA_MISS_DRD_OPT) / (UNC_CHA_CLOCKTICKS / (source_count(UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_OPT) * #num_packages)) * duration_time
  desc: Average latency of a last level cache (LLC) demand data read miss (read memory access) in nanoseconds

loads_retired_per_instr (unit: per_instr)
  expr: MEM_UOPS_RETIRED.ALL_LOADS / INST_RETIRED.ANY
  desc: Load operations retired per instruction

memory_bandwidth_read (unit: MB/s)
  expr: (UNC_M_CAS_COUNT_SCH0.RD + UNC_M_CAS_COUNT_SCH1.RD) * 64 / 1e6 / duration_time
  desc: DDR memory read bandwidth (MB/sec)

memory_bandwidth_total (unit: MB/s)
  expr: (UNC_M_CAS_COUNT_SCH0.RD + UNC_M_CAS_COUNT_SCH1.RD + UNC_M_CAS_COUNT_SCH0.WR + UNC_M_CAS_COUNT_SCH1.WR) * 64 / 1e6 / duration_time
  desc: DDR memory bandwidth (MB/sec)

memory_bandwidth_write (unit: MB/s)
  expr: (UNC_M_CAS_COUNT_SCH0.WR + UNC_M_CAS_COUNT_SCH1.WR) * 64 / 1e6 / duration_time
  desc: DDR memory write bandwidth (MB/sec)

stores_retired_per_instr (unit: per_instr)
  expr: MEM_UOPS_RETIRED.ALL_STORES / INST_RETIRED.ANY
  desc: Store operations retired per instruction

tma_backend_bound (TopdownL1;tma_L1_group) scale: 100% [TopdownL1]
  expr: TOPDOWN_BE_BOUND.ALL_P / (6 * CPU_CLK_UNHALTED.CORE)
  flag: tma_backend_bound > 0.1
  desc: Counts the total number of issue slots that were not consumed by the backend due to backend stalls. Note that uops must be available for consumption in order for this event to count. If a uop is not available (IQ is empty), this event will not count

tma_bad_speculation (TopdownL1;tma_L1_group) scale: 100% [TopdownL1]
  expr: TOPDOWN_BAD_SPECULATION.ALL_P / (6 * CPU_CLK_UNHALTED.CORE)
  flag: tma_bad_speculation > 0.15
  desc: Counts the total number of issue slots that were not consumed by the backend because allocation is stalled due to a mispredicted jump or a machine clear. Only issue slots wasted due to fast nukes such as memory ordering nukes are counted; other nukes are not accounted for. Counts all issue slots blocked during this recovery window, including relevant microcode flows and while uops are not yet available in the instruction queue (IQ). Also includes the issue slots that were consumed by the backend but were thrown away because they were younger than the mispredict or machine clear

tma_branch_detect (TopdownL3;tma_L3_group;tma_ifetch_latency_group) scale: 100%
  expr: TOPDOWN_FE_BOUND.BRANCH_DETECT / (6 * CPU_CLK_UNHALTED.CORE)
  flag: tma_branch_detect > 0.05 & (tma_ifetch_latency > 0.15 & tma_frontend_bound > 0.2)
  desc: Counts the number of issue slots that were not delivered by the frontend due to BACLEARS, which occur when the Branch Target Buffer (BTB) prediction, or lack thereof, is corrected by a later branch predictor in the frontend. Includes BACLEARS due to all branch types, including conditional and unconditional jumps, returns, and indirect branches

tma_branch_mispredicts (TopdownL2;tma_L2_group;tma_bad_speculation_group) scale: 100% [TopdownL2]
  expr: TOPDOWN_BAD_SPECULATION.MISPREDICT / (6 * CPU_CLK_UNHALTED.CORE)
  flag: tma_branch_mispredicts > 0.05 & tma_bad_speculation > 0.15
  desc: Counts the number of issue slots that were not consumed by the backend due to branch mispredicts

tma_branch_resteer (TopdownL3;tma_L3_group;tma_ifetch_latency_group) scale: 100%
  expr: TOPDOWN_FE_BOUND.BRANCH_RESTEER / (6 * CPU_CLK_UNHALTED.CORE)
  flag: tma_branch_resteer > 0.05 & (tma_ifetch_latency > 0.15 & tma_frontend_bound > 0.2)
  desc: Counts the number of issue slots that were not delivered by the frontend due to BTCLEARS, which occur when the Branch Target Buffer (BTB) predicts a taken branch

tma_cisc (TopdownL3;tma_L3_group;tma_ifetch_bandwidth_group) scale: 100%
  expr: TOPDOWN_FE_BOUND.CISC / (6 * CPU_CLK_UNHALTED.CORE)
  flag: tma_cisc > 0.05 & (tma_ifetch_bandwidth > 0.1 & tma_frontend_bound > 0.2)
  desc: Counts the number of issue slots that were not delivered by the frontend due to the microcode sequencer (MS)

tma_core_bound (TopdownL2;tma_L2_group;tma_backend_bound_group) scale: 100% [TopdownL2]
  expr: TOPDOWN_BE_BOUND.ALLOC_RESTRICTIONS / (6 * CPU_CLK_UNHALTED.CORE)
  flag: tma_core_bound > 0.1 & tma_backend_bound > 0.1
  desc: Counts the number of cycles due to backend-bound stalls that are bounded by core restrictions and not attributed to outstanding loads or stores, or resource limitations

tma_decode (TopdownL3;tma_L3_group;tma_ifetch_bandwidth_group) scale: 100%
  expr: TOPDOWN_FE_BOUND.DECODE / (6 * CPU_CLK_UNHALTED.CORE)
  flag: tma_decode > 0.05 & (tma_ifetch_bandwidth > 0.1 & tma_frontend_bound > 0.2)
  desc: Counts the number of issue slots that were not delivered by the frontend due to decode stalls

tma_fast_nuke (TopdownL3;tma_L3_group;tma_machine_clears_group) scale: 100%
  expr: TOPDOWN_BAD_SPECULATION.FASTNUKE / (6 * CPU_CLK_UNHALTED.CORE)
  flag: tma_fast_nuke > 0.05 & (tma_machine_clears > 0.05 & tma_bad_speculation > 0.15)
  desc: Counts the number of issue slots that were not consumed by the backend due to a machine clear that does not require the use of microcode, classified as a fast nuke, due to memory ordering, memory disambiguation, and memory renaming

tma_frontend_bound (TopdownL1;tma_L1_group) scale: 100% [TopdownL1]
  expr: TOPDOWN_FE_BOUND.ALL_P / (6 * CPU_CLK_UNHALTED.CORE)
  flag: tma_frontend_bound > 0.2
  desc: Counts the number of issue slots that were not consumed by the backend due to frontend stalls

tma_icache_misses (TopdownL3;tma_L3_group;tma_ifetch_latency_group) scale: 100%
  expr: TOPDOWN_FE_BOUND.ICACHE / (6 * CPU_CLK_UNHALTED.CORE)
  flag: tma_icache_misses > 0.05 & (tma_ifetch_latency > 0.15 & tma_frontend_bound > 0.2)
  desc: Counts the number of issue slots that were not delivered by the frontend due to instruction cache misses
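The TOPDOWN_*_BOUND expressions in this second table normalize each counter by 6 * CPU_CLK_UNHALTED.CORE, i.e. total issue slots on a 6-wide allocation pipeline. A hedged sketch with made-up counter values:

    # slots = pipeline width (6) * core clocks; each category is counter/slots
    core_clks = 1e9
    slots = 6 * core_clks
    counters = {"TOPDOWN_FE_BOUND.ALL_P": 1.5e9,
                "TOPDOWN_BE_BOUND.ALL_P": 0.9e9,
                "TOPDOWN_BAD_SPECULATION.ALL_P": 0.6e9,
                "TOPDOWN_RETIRING.ALL_P": 3.0e9}
    for name, val in counters.items():
        print(f"{name:32s} {val / slots:.0%} of slots")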
tma_ifetch_bandwidth (TopdownL2;tma_L2_group;tma_frontend_bound_group) scale: 100% [TopdownL2]
  expr: TOPDOWN_FE_BOUND.FRONTEND_BANDWIDTH / (6 * CPU_CLK_UNHALTED.CORE)
  flag: tma_ifetch_bandwidth > 0.1 & tma_frontend_bound > 0.2
  desc: Counts the number of issue slots that were not delivered by the frontend due to frontend bandwidth restrictions caused by decode, predecode, cisc, and other limitations

tma_ifetch_latency (TopdownL2;tma_L2_group;tma_frontend_bound_group) scale: 100% [TopdownL2]
  expr: TOPDOWN_FE_BOUND.FRONTEND_LATENCY / (6 * CPU_CLK_UNHALTED.CORE)
  flag: tma_ifetch_latency > 0.15 & tma_frontend_bound > 0.2
  desc: Counts the number of issue slots that were not delivered by the frontend due to frontend latency restrictions caused by icache misses, itlb misses, branch detection, and resteer limitations

tma_info_arith_inst_mix_ipflop (Flops)
  expr: INST_RETIRED.ANY / FP_FLOPS_RETIRED.ALL
  desc: Instructions per Floating Point (FP) Operation

tma_info_arith_inst_mix_ipfparith_avx128 (Flops)
  expr: INST_RETIRED.ANY / (FP_INST_RETIRED.128B_DP + FP_INST_RETIRED.128B_SP)
  desc: Instructions per FP Arithmetic AVX/SSE 128-bit instruction

tma_info_arith_inst_mix_ipfparith_scalar_dp (Flops)
  expr: INST_RETIRED.ANY / FP_INST_RETIRED.64B_DP
  desc: Instructions per FP Arithmetic Scalar Double-Precision instruction

tma_info_arith_inst_mix_ipfparith_scalar_sp (Flops)
  expr: INST_RETIRED.ANY / FP_INST_RETIRED.32B_SP
  desc: Instructions per FP Arithmetic Scalar Single-Precision instruction

tma_info_bottleneck_%_dtlb_miss_bound_cycles
  expr: tma_info_bottleneck_dtlb_miss_bound_cycles
  desc: Percentage of time that retirement is stalled due to a first level data TLB miss

tma_info_bottleneck_%_ifetch_miss_bound_cycles (Ifetch)
  expr: tma_info_bottleneck_ifetch_miss_bound_cycles
  desc: Percentage of time that allocation and retirement is stalled by the Frontend Cluster due to an Ifetch Miss, either an Icache or ITLB Miss. See Info.Ifetch_Bound

tma_info_bottleneck_%_load_miss_bound_cycles (Load_Store_Miss)
  expr: tma_info_bottleneck_load_miss_bound_cycles
  desc: Percentage of time that retirement is stalled due to an L1 miss. See Info.Load_Miss_Bound

tma_info_bottleneck_%_mem_exec_bound_cycles (Mem_Exec)
  expr: tma_info_bottleneck_mem_exec_bound_cycles
  desc: Percentage of time that retirement is stalled by the Memory Cluster due to a pipeline stall. See Info.Mem_Exec_Bound

tma_info_bottleneck_dtlb_miss_bound_cycles (Cycles) scale: 100%
  expr: 100 * (LD_HEAD.DTLB_MISS_AT_RET + LD_HEAD.PGWALK_AT_RET) / CPU_CLK_UNHALTED.CORE
  desc: Percentage of time that retirement is stalled due to a first level data TLB miss

tma_info_bottleneck_ifetch_miss_bound_cycles (Cycles;Ifetch) scale: 100%
  expr: 100 * MEM_BOUND_STALLS_IFETCH.ALL / CPU_CLK_UNHALTED.CORE
  desc: Percentage of time that allocation and retirement is stalled by the Frontend Cluster due to an Ifetch Miss, either an Icache or ITLB Miss. See Info.Ifetch_Bound

tma_info_bottleneck_load_miss_bound_cycles (Cycles;Load_Store_Miss) scale: 100%
  expr: 100 * MEM_BOUND_STALLS_LOAD.ALL / CPU_CLK_UNHALTED.CORE
  desc: Percentage of time that retirement is stalled due to an L1 miss. See Info.Load_Miss_Bound

tma_info_bottleneck_mem_exec_bound_cycles (Cycles;Mem_Exec) scale: 100%
  expr: 100 * LD_HEAD.ANY_AT_RET / CPU_CLK_UNHALTED.CORE
  desc: Percentage of time that retirement is stalled by the Memory Cluster due to a pipeline stall. See Info.Mem_Exec_Bound

tma_info_br_inst_mix_ipcall
  expr: INST_RETIRED.ANY / BR_INST_RETIRED.NEAR_CALL
  desc: Instructions per (near) call (lower number means higher occurrence rate)

tma_info_buffer_stalls_%_load_buffer_stall_cycles
  expr: tma_info_buffer_stalls_load_buffer_stall_cycles
  desc: Percentage of time that allocation is stalled due to load buffer full

tma_info_buffer_stalls_%_mem_rsv_stall_cycles
  expr: tma_info_buffer_stalls_mem_rsv_stall_cycles
  desc: Percentage of time that allocation is stalled due to memory reservation stations full

tma_info_buffer_stalls_%_store_buffer_stall_cycles
  expr: tma_info_buffer_stalls_store_buffer_stall_cycles
  desc: Percentage of time that allocation is stalled due to store buffer full

tma_info_buffer_stalls_load_buffer_stall_cycles scale: 100%
  expr: 100 * MEM_SCHEDULER_BLOCK.LD_BUF / CPU_CLK_UNHALTED.CORE
  desc: Percentage of time that allocation is stalled due to load buffer full

tma_info_buffer_stalls_mem_rsv_stall_cycles scale: 100%
  expr: 100 * MEM_SCHEDULER_BLOCK.RSV / CPU_CLK_UNHALTED.CORE
  desc: Percentage of time that allocation is stalled due to memory reservation stations full

tma_info_buffer_stalls_store_buffer_stall_cycles scale: 100%
  expr: 100 * MEM_SCHEDULER_BLOCK.ST_BUF / CPU_CLK_UNHALTED.CORE
  desc: Percentage of time that allocation is stalled due to store buffer full

tma_info_core_flopc (Flops)
  expr: FP_FLOPS_RETIRED.ALL / CPU_CLK_UNHALTED.CORE
  desc: Floating Point Operations Per Cycle

tma_info_core_upi
  expr: TOPDOWN_RETIRING.ALL_P / INST_RETIRED.ANY
  desc: Uops Per Instruction

tma_info_ifetch_miss_bound_%_ifetchmissbound_with_l2hit
  expr: tma_info_ifetch_miss_bound_ifetchmissbound_with_l2hit
  desc: Percentage of ifetch miss bound stalls, where the ifetch miss hits in the L2

tma_info_ifetch_miss_bound_%_ifetchmissbound_with_l3hit
  expr: tma_info_ifetch_miss_bound_ifetchmissbound_with_l3hit
  desc: Percentage of ifetch miss bound stalls, where the ifetch miss hits in the L3

tma_info_ifetch_miss_bound_%_ifetchmissbound_with_l3miss
  expr: 100 * MEM_BOUND_STALLS_IFETCH.LLC_MISS / MEM_BOUND_STALLS_IFETCH.ALL
  desc: Percentage of ifetch miss bound stalls, where the ifetch miss subsequently misses in the L3

tma_info_ifetch_miss_bound_ifetchmissbound_with_l2hit scale: 100%
  expr: 100 * MEM_BOUND_STALLS_IFETCH.L2_HIT / MEM_BOUND_STALLS_IFETCH.ALL
  desc: Percentage of ifetch miss bound stalls, where the ifetch miss hits in the L2

tma_info_ifetch_miss_bound_ifetchmissbound_with_l3hit scale: 100%
  expr: 100 * MEM_BOUND_STALLS_IFETCH.LLC_HIT / MEM_BOUND_STALLS_IFETCH.ALL
  desc: Percentage of ifetch miss bound stalls, where the ifetch miss hits in the L3

tma_info_load_miss_bound_%_loadmissbound_with_l2hit (load_store_bound)
  expr: tma_info_load_miss_bound_loadmissbound_with_l2hit
  desc: Percentage of memory bound stalls where retirement is stalled due to an L1 miss that hit the L2

tma_info_load_miss_bound_%_loadmissbound_with_l3hit (load_store_bound)
  expr: tma_info_load_miss_bound_loadmissbound_with_l3hit
  desc: Percentage of memory bound stalls where retirement is stalled due to an L1 miss that hit the L3

tma_info_load_miss_bound_%_loadmissbound_with_l3miss (load_store_bound)
  expr: 100 * MEM_BOUND_STALLS_LOAD.LLC_MISS / MEM_BOUND_STALLS_LOAD.ALL
  desc: Percentage of memory bound stalls where retirement is stalled due to an L1 miss that subsequently misses the L3

tma_info_load_miss_bound_loadmissbound_with_l2hit (load_store_bound) scale: 100%
  expr: 100 * MEM_BOUND_STALLS_LOAD.L2_HIT / MEM_BOUND_STALLS_LOAD.ALL
  desc: Percentage of memory bound stalls where retirement is stalled due to an L1 miss that hit the L2
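The Load_Miss_Bound breakdown above splits MEM_BOUND_STALLS_LOAD.ALL into L2-hit, L3-hit, and L3-miss buckets, each expressed as a percentage of the total, so the three buckets sum to roughly 100%. A hedged sketch with illustrative counts:

    # Each bucket = 100 * bucket_counter / MEM_BOUND_STALLS_LOAD.ALL
    all_stalls = 200e6  # made-up MEM_BOUND_STALLS_LOAD.ALL
    buckets = {"l2hit": 120e6, "l3hit": 50e6, "l3miss": 30e6}
    for name, stalls in buckets.items():
        print(f"loadmissbound_with_{name}: {100 * stalls / all_stalls:.0f}%")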
tma_info_load_miss_bound_loadmissbound_with_l3hit (load_store_bound) scale: 100%
  expr: 100 * MEM_BOUND_STALLS_LOAD.LLC_HIT / MEM_BOUND_STALLS_LOAD.ALL
  desc: Percentage of memory bound stalls where retirement is stalled due to an L1 miss that hit the L3

tma_info_load_store_bound_load_bound (load_store_bound)
  expr: 100 * (LD_HEAD.L1_BOUND_AT_RET + MEM_BOUND_STALLS_LOAD.ALL) / CPU_CLK_UNHALTED.CORE
  desc: Counts the number of cycles that the oldest load of the load buffer is stalled at retirement

tma_info_mem_exec_blocks_%_loads_with_adressaliasing
  expr: tma_info_mem_exec_blocks_loads_with_adressaliasing
  desc: Percentage of total non-speculative loads with an address aliasing block

tma_info_mem_exec_blocks_%_loads_with_storefwdblk
  expr: tma_info_mem_exec_blocks_loads_with_storefwdblk
  desc: Percentage of total non-speculative loads with a store forward or unknown store address block

tma_info_mem_exec_blocks_loads_with_adressaliasing scale: 100%
  expr: 100 * LD_BLOCKS.ADDRESS_ALIAS / MEM_UOPS_RETIRED.ALL_LOADS
  desc: Percentage of total non-speculative loads with an address aliasing block

tma_info_mem_exec_blocks_loads_with_storefwdblk scale: 100%
  expr: 100 * LD_BLOCKS.DATA_UNKNOWN / MEM_UOPS_RETIRED.ALL_LOADS
  desc: Percentage of total non-speculative loads with a store forward or unknown store address block

tma_info_mem_exec_bound_%_loadhead_with_l1miss
  expr: tma_info_mem_exec_bound_loadhead_with_l1miss
  desc: Percentage of Memory Execution Bound due to a first level data cache miss

tma_info_mem_exec_bound_%_loadhead_with_otherpipelineblks
  expr: tma_info_mem_exec_bound_loadhead_with_otherpipelineblks
  desc: Percentage of Memory Execution Bound due to other block cases, such as pipeline conflicts, fences, etc

tma_info_mem_exec_bound_%_loadhead_with_pagewalk
  expr: tma_info_mem_exec_bound_loadhead_with_pagewalk
  desc: Percentage of Memory Execution Bound due to a pagewalk

tma_info_mem_exec_bound_%_loadhead_with_stlbhit
  expr: tma_info_mem_exec_bound_loadhead_with_stlbhit
  desc: Percentage of Memory Execution Bound due to a second level TLB miss

tma_info_mem_exec_bound_%_loadhead_with_storefwding
  expr: tma_info_mem_exec_bound_loadhead_with_storefwding
  desc: Percentage of Memory Execution Bound due to a store forward address match

tma_info_mem_exec_bound_loadhead_with_l1miss scale: 100%
  expr: 100 * LD_HEAD.L1_MISS_AT_RET / LD_HEAD.ANY_AT_RET
  desc: Percentage of Memory Execution Bound due to a first level data cache miss

tma_info_mem_exec_bound_loadhead_with_otherpipelineblks scale: 100%
  expr: 100 * LD_HEAD.OTHER_AT_RET / LD_HEAD.ANY_AT_RET
  desc: Percentage of Memory Execution Bound due to other block cases, such as pipeline conflicts, fences, etc

tma_info_mem_exec_bound_loadhead_with_pagewalk scale: 100%
  expr: 100 * LD_HEAD.PGWALK_AT_RET / LD_HEAD.ANY_AT_RET
  desc: Percentage of Memory Execution Bound due to a pagewalk

tma_info_mem_exec_bound_loadhead_with_stlbhit scale: 100%
  expr: 100 * LD_HEAD.DTLB_MISS_AT_RET / LD_HEAD.ANY_AT_RET
  desc: Percentage of Memory Execution Bound due to a second level TLB miss

tma_info_mem_exec_bound_loadhead_with_storefwding scale: 100%
  expr: 100 * LD_HEAD.ST_ADDR_AT_RET / LD_HEAD.ANY_AT_RET
  desc: Percentage of Memory Execution Bound due to a store forward address match

tma_info_mem_mix_memload_ratio
  expr: 1e3 * MEM_UOPS_RETIRED.ALL_LOADS / TOPDOWN_RETIRING.ALL_P
  desc: Ratio of mem load uops to all uops

tma_info_serialization _%_tpause_cycles
  expr: tma_info_serialization_tpause_cycles
  desc: Percentage of time that the core is stalled due to a TPAUSE or UMWAIT instruction

tma_info_serialization_tpause_cycles scale: 100%
  expr: 100 * SERIALIZATION.C01_MS_SCB / (6 * CPU_CLK_UNHALTED.CORE)
  desc: Percentage of time that the core is stalled due to a TPAUSE or UMWAIT instruction

tma_info_system_gflops (Flops)
  expr: FP_FLOPS_RETIRED.ALL / (duration_time * 1e9)
  desc: Giga Floating Point Operations Per Second. Aggregate across all supported options of: FP precisions, scalar and vector instructions, vector-width

tma_info_uop_mix_fpdiv_uop_ratio
  expr: 100 * UOPS_RETIRED.FPDIV / TOPDOWN_RETIRING.ALL_P
  desc: Percentage of all uops which are FPDiv uops

tma_info_uop_mix_idiv_uop_ratio
  expr: 100 * UOPS_RETIRED.IDIV / TOPDOWN_RETIRING.ALL_P
  desc: Percentage of all uops which are IDiv uops

tma_info_uop_mix_microcode_uop_ratio
  expr: 100 * UOPS_RETIRED.MS / TOPDOWN_RETIRING.ALL_P
  desc: Percentage of all uops which are microcode ops

tma_info_uop_mix_x87_uop_ratio
  expr: 100 * UOPS_RETIRED.X87 / TOPDOWN_RETIRING.ALL_P
  desc: Percentage of all uops which are x87 uops

tma_itlb_misses (TopdownL3;tma_L3_group;tma_ifetch_latency_group) scale: 100%
  expr: TOPDOWN_FE_BOUND.ITLB_MISS / (6 * CPU_CLK_UNHALTED.CORE)
  flag: tma_itlb_misses > 0.05 & (tma_ifetch_latency > 0.15 & tma_frontend_bound > 0.2)
  desc: Counts the number of issue slots that were not delivered by the frontend due to Instruction Table Lookaside Buffer (ITLB) misses

tma_machine_clears (TopdownL2;tma_L2_group;tma_bad_speculation_group) scale: 100% [TopdownL2]
  expr: TOPDOWN_BAD_SPECULATION.MACHINE_CLEARS / (6 * CPU_CLK_UNHALTED.CORE)
  flag: tma_machine_clears > 0.05 & tma_bad_speculation > 0.15
  desc: Counts the total number of issue slots that were not consumed by the backend because allocation is stalled due to a machine clear (nuke) of any kind, including memory ordering and memory disambiguation

tma_mem_scheduler (TopdownL3;tma_L3_group;tma_resource_bound_group) scale: 100%
  expr: TOPDOWN_BE_BOUND.MEM_SCHEDULER / (6 * CPU_CLK_UNHALTED.CORE)
  flag: tma_mem_scheduler > 0.1 & (tma_resource_bound > 0.2 & tma_backend_bound > 0.1)
  desc: Counts the number of issue slots that were not consumed by the backend due to memory reservation stalls in which a scheduler is not able to accept uops

tma_non_mem_scheduler (TopdownL3;tma_L3_group;tma_resource_bound_group) scale: 100%
  expr: TOPDOWN_BE_BOUND.NON_MEM_SCHEDULER / (6 * CPU_CLK_UNHALTED.CORE)
  flag: tma_non_mem_scheduler > 0.1 & (tma_resource_bound > 0.2 & tma_backend_bound > 0.1)
  desc: Counts the number of issue slots that were not consumed by the backend due to IEC or FPC RAT stalls, which can be due to FIQ or IEC reservation stalls in which the integer, floating point or SIMD scheduler is not able to accept uops

tma_nuke (TopdownL3;tma_L3_group;tma_machine_clears_group) scale: 100%
  expr: TOPDOWN_BAD_SPECULATION.NUKE / (6 * CPU_CLK_UNHALTED.CORE)
  flag: tma_nuke > 0.05 & (tma_machine_clears > 0.05 & tma_bad_speculation > 0.15)
  desc: Counts the number of issue slots that were not consumed by the backend due to a machine clear that requires the use of microcode (slow nuke)

tma_other_fb (TopdownL3;tma_L3_group;tma_ifetch_bandwidth_group) scale: 100%
  expr: TOPDOWN_FE_BOUND.OTHER / (6 * CPU_CLK_UNHALTED.CORE)
  flag: tma_other_fb > 0.05 & (tma_ifetch_bandwidth > 0.1 & tma_frontend_bound > 0.2)
  desc: Counts the number of issue slots that were not delivered by the frontend due to other common frontend stalls not categorized

tma_predecode (TopdownL3;tma_L3_group;tma_ifetch_bandwidth_group) scale: 100%
  expr: TOPDOWN_FE_BOUND.PREDECODE / (6 * CPU_CLK_UNHALTED.CORE)
  flag: tma_predecode > 0.05 & (tma_ifetch_bandwidth > 0.1 & tma_frontend_bound > 0.2)
  desc: Counts the number of issue slots that were not delivered by the frontend due to wrong predecodes

tma_register (TopdownL3;tma_L3_group;tma_resource_bound_group) scale: 100%
  expr: TOPDOWN_BE_BOUND.REGISTER / (6 * CPU_CLK_UNHALTED.CORE)
  flag: tma_register > 0.1 & (tma_resource_bound > 0.2 & tma_backend_bound > 0.1)
  desc: Counts the number of issue slots that were not consumed by the backend due to the physical register file being unable to accept an entry (marble stalls)
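The uop-mix ratios above all divide a retired-uop subset by all retired uops (TOPDOWN_RETIRING.ALL_P) and scale to percent. A hedged sketch with made-up counts:

    # ratio = 100 * UOPS_RETIRED.<subset> / TOPDOWN_RETIRING.ALL_P
    all_uops = 5e9
    subsets = {"fpdiv": 12e6, "idiv": 4e6, "microcode": 150e6, "x87": 0}
    for name, uops in subsets.items():
        print(f"tma_info_uop_mix_{name}_uop_ratio = {100 * uops / all_uops:.2f}%")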
tma_reorder_buffer
  groups: TopdownL3;tma_L3_group;tma_resource_bound_group
  expr: TOPDOWN_BE_BOUND.REORDER_BUFFER / (6 * CPU_CLK_UNHALTED.CORE)
  threshold: tma_reorder_buffer > 0.1 & (tma_resource_bound > 0.2 & tma_backend_bound > 0.1)
  desc: Counts the number of issue slots that were not consumed by the backend due to the reorder buffer being full (ROB stalls)
  scale: 100%

tma_retiring
  groups: TopdownL1;tma_L1_group
  expr: TOPDOWN_RETIRING.ALL_P / (6 * CPU_CLK_UNHALTED.CORE)
  threshold: tma_retiring > 0.75
  desc: Counts the number of issue slots that result in retirement slots
  scale: 100%

tma_serialization
  groups: TopdownL3;tma_L3_group;tma_resource_bound_group
  expr: TOPDOWN_BE_BOUND.SERIALIZATION / (6 * CPU_CLK_UNHALTED.CORE)
  threshold: tma_serialization > 0.1 & (tma_resource_bound > 0.2 & tma_backend_bound > 0.1)
  desc: Counts the number of issue slots that were not consumed by the backend due to scoreboards from the instruction queue (IQ), jump execution unit (JEU), or microcode sequencer (MS)
  scale: 100%

tma_divider
  groups: BvCB;TopdownL3;tma_L3_group;tma_core_bound_group
  expr: 10 * ARITH.DIVIDER_UOPS / tma_info_core_core_clks
  threshold: tma_divider > 0.2 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2)
  desc: This metric represents fraction of cycles where the Divider unit was active. Divide and square root instructions are performed by the Divider unit and can take considerably longer latency than integer or Floating Point addition, subtraction, or multiplication. Sample with: ARITH.DIVIDER_UOPS
  scale: 100%

tma_dram_bound
  groups: MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_memory_bound_group
  expr: (1 - MEM_LOAD_UOPS_RETIRED.L3_HIT / (MEM_LOAD_UOPS_RETIRED.L3_HIT + 7 * MEM_LOAD_UOPS_RETIRED.L3_MISS)) * CYCLE_ACTIVITY.STALLS_L2_PENDING / tma_info_thread_clks
  threshold: tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)
  desc: This metric estimates how often the CPU was stalled on accesses to external memory (DRAM) by loads. Better caching can improve the latency and increase performance. Sample with: MEM_LOAD_UOPS_RETIRED.L3_MISS_PS
  scale: 100%

tma_dtlb_load
  groups: BvMT;MemoryTLB;TopdownL4;tma_L4_group;tma_issueTLB;tma_l1_bound_group
  expr: (8 * DTLB_LOAD_MISSES.STLB_HIT + DTLB_LOAD_MISSES.WALK_DURATION) / tma_info_thread_clks
  threshold: tma_dtlb_load > 0.1 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))
  desc: This metric roughly estimates the fraction of cycles where the Data TLB (DTLB) was missed by load accesses. TLBs (Translation Look-aside Buffers) are processor caches for recently used entries out of the Page Tables that are used to map virtual- to physical-addresses by the operating system. This metric approximates the potential delay of demand loads missing the first-level data TLB (assuming worst case scenario with back to back misses to different pages). This includes hitting in the second-level TLB (STLB) as well as performing a hardware page walk on an STLB miss. Sample with: MEM_UOPS_RETIRED.STLB_MISS_LOADS_PS. Related metrics: tma_dtlb_store
  scale: 100%
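The tma_dram_bound expression above splits CYCLE_ACTIVITY.STALLS_L2_PENDING between L3 and DRAM by weighting each L3 miss as roughly 7 L3 hits. A sketch of that apportioning, with hypothetical counts (the 7x weight is taken directly from the formula):

    # Hypothetical counts; real values come from perf counters.
    l3_hit, l3_miss = 800_000, 100_000     # MEM_LOAD_UOPS_RETIRED.L3_HIT / .L3_MISS
    stalls_l2_pending = 400_000            # CYCLE_ACTIVITY.STALLS_L2_PENDING
    clks = 2_000_000                       # tma_info_thread_clks

    miss_share = 1 - l3_hit / (l3_hit + 7 * l3_miss)   # DRAM's share of L2-pending stalls
    tma_dram_bound = miss_share * stalls_l2_pending / clks
    tma_l3_bound = (1 - miss_share) * stalls_l2_pending / clks
    print(f"dram_bound={tma_dram_bound:.3f}  l3_bound={tma_l3_bound:.3f}")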
tma_dtlb_store
  groups: BvMT;MemoryTLB;TopdownL4;tma_L4_group;tma_issueTLB;tma_store_bound_group
  expr: (8 * DTLB_STORE_MISSES.STLB_HIT + DTLB_STORE_MISSES.WALK_DURATION) / tma_info_thread_clks
  threshold: tma_dtlb_store > 0.05 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))
  desc: This metric roughly estimates the fraction of cycles spent handling first-level data TLB store misses. As with ordinary data caching, focus on improving data locality and reducing working-set size to reduce DTLB overhead. Additionally, consider using profile-guided optimization (PGO) to collocate frequently-used data on the same page. Try using larger page sizes for large amounts of frequently-used data. Sample with: MEM_UOPS_RETIRED.STLB_MISS_STORES_PS. Related metrics: tma_dtlb_load
  scale: 100%

tma_false_sharing
  groups: BvMS;DataSharing;Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_store_bound_group
  expr: 60 * OFFCORE_RESPONSE.DEMAND_RFO.L3_HIT.HITM_OTHER_CORE / tma_info_thread_clks
  threshold: tma_false_sharing > 0.05 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))
  desc: This metric roughly estimates how often CPU was handling synchronizations due to False Sharing. False Sharing is a multithreading hiccup, where multiple Logical Processors contend on different data-elements mapped into the same cache line. Sample with: MEM_LOAD_L3_HIT_RETIRED.XSNP_HITM_PS;OFFCORE_RESPONSE.DEMAND_RFO.L3_HIT.SNOOP_HITM. Related metrics: tma_contested_accesses, tma_data_sharing, tma_machine_clears, tma_remote_cache
  scale: 100%

tma_fb_full
  groups: BvMS;MemoryBW;TopdownL4;tma_L4_group;tma_issueBW;tma_issueSL;tma_issueSmSt;tma_l1_bound_group
  expr: tma_info_memory_load_miss_real_latency * cpu@L1D_PEND_MISS.REQUEST_FB_FULL\,cmask\=1@ / tma_info_thread_clks
  threshold: tma_fb_full > 0.3
  desc: This metric does a *rough estimation* of how often L1D Fill Buffer unavailability limited additional L1D miss memory access requests to proceed. The higher the metric value, the deeper the memory hierarchy level the misses are satisfied from (metric values >1 are valid). Often it hints on approaching bandwidth limits (to L2 cache, L3 cache or external memory). Related metrics: tma_info_system_dram_bw_use, tma_mem_bandwidth, tma_sq_full, tma_store_latency, tma_streaming_stores
  scale: 100%

tma_fetch_latency
  groups: Frontend;TmaL2;TopdownL2;tma_L2_group;tma_frontend_bound_group
  expr: 4 * min(CPU_CLK_UNHALTED.THREAD, IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE) / tma_info_thread_slots
  threshold: tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15
  desc: This metric represents fraction of slots the CPU was stalled due to Frontend latency issues. For example, instruction-cache misses, iTLB misses or fetch stalls after a branch misprediction are categorized under Frontend Latency. In such cases, the Frontend eventually delivers no uops for some period. Sample with: RS_EVENTS.EMPTY_END
  scale: 100%
tma_info_core_ilp
  groups: Backend;Cor;Pipeline;PortsUtil
  expr: (UOPS_EXECUTED.CORE / 2 / (cpu@UOPS_EXECUTED.CORE\,cmask\=1@ / 2 if #SMT_on else cpu@UOPS_EXECUTED.CORE\,cmask\=1@) if #SMT_on else UOPS_EXECUTED.CORE / (cpu@UOPS_EXECUTED.CORE\,cmask\=1@ / 2 if #SMT_on else cpu@UOPS_EXECUTED.CORE\,cmask\=1@))
  desc: Instruction-Level-Parallelism (average number of uops executed when there is execution) per thread (logical-processor)

tma_info_memory_tlb_page_walks_utilization
  groups: Mem;MemoryTLB
  expr: (ITLB_MISSES.WALK_DURATION + DTLB_LOAD_MISSES.WALK_DURATION + DTLB_STORE_MISSES.WALK_DURATION) / tma_info_core_core_clks
  threshold: tma_info_memory_tlb_page_walks_utilization > 0.5
  desc: Utilization of the core's Page Walker(s) serving STLB misses triggered by instruction/Load/Store accesses

tma_itlb_misses
  groups: BigFootprint;BvBC;FetchLat;MemoryTLB;TopdownL3;tma_L3_group;tma_fetch_latency_group
  expr: (14 * ITLB_MISSES.STLB_HIT + ITLB_MISSES.WALK_DURATION) / tma_info_thread_clks
  threshold: tma_itlb_misses > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15)
  desc: This metric represents fraction of cycles the CPU was stalled due to Instruction TLB (ITLB) misses. Sample with: ITLB_MISSES.WALK_COMPLETED
  scale: 100%

tma_l1_bound
  groups: CacheHits;MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_issueL1;tma_issueMC;tma_memory_bound_group
  expr: max((min(CPU_CLK_UNHALTED.THREAD, CYCLE_ACTIVITY.STALLS_LDM_PENDING) - CYCLE_ACTIVITY.STALLS_L1D_PENDING) / tma_info_thread_clks, 0)
  threshold: tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)
  desc: This metric estimates how often the CPU was stalled without loads missing the L1 data cache. The L1 data cache typically has the shortest latency. However, in certain cases like loads blocked on older stores, a load might suffer due to high latency even though it is being satisfied by the L1. Another example is loads that miss in the TLB. These cases are characterized by execution unit stalls, while some non-completed demand load lives in the machine without having that demand load missing the L1 cache. Sample with: MEM_LOAD_UOPS_RETIRED.L1_HIT_PS;MEM_LOAD_UOPS_RETIRED.HIT_LFB_PS. Related metrics: tma_clears_resteers, tma_machine_clears, tma_microcode_sequencer, tma_ms_switches, tma_ports_utilized_1
  scale: 100%

tma_l2_bound
  groups: BvML;CacheHits;MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_memory_bound_group
  expr: (CYCLE_ACTIVITY.STALLS_L1D_PENDING - CYCLE_ACTIVITY.STALLS_L2_PENDING) / tma_info_thread_clks
  threshold: tma_l2_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)
  desc: This metric estimates how often the CPU was stalled due to L2 cache accesses by loads. Avoiding cache misses (i.e. L1 misses/L2 hits) can improve the latency and increase performance. Sample with: MEM_LOAD_UOPS_RETIRED.L2_HIT_PS
  scale: 100%

tma_l3_bound
  groups: CacheHits;MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_memory_bound_group
  expr: MEM_LOAD_UOPS_RETIRED.L3_HIT / (MEM_LOAD_UOPS_RETIRED.L3_HIT + 7 * MEM_LOAD_UOPS_RETIRED.L3_MISS) * CYCLE_ACTIVITY.STALLS_L2_PENDING / tma_info_thread_clks
  threshold: tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)
  desc: This metric estimates how often the CPU was stalled due to load accesses to L3 cache or contended with a sibling Core. Avoiding cache misses (i.e. L2 misses/L3 hits) can improve the latency and increase performance. Sample with: MEM_LOAD_UOPS_RETIRED.L3_HIT_PS
  scale: 100%
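The tma_info_core_ilp expression above looks involved, but with the #SMT_on conditionals resolved it is uops executed divided by cycles in which at least one uop executed (cmask=1), with both sides halved per logical processor under SMT. A sketch of that evaluation, with hypothetical counts:

    # Hypothetical counts. cmask=1 counts cycles in which at least one uop executed.
    uops_executed_core = 3_000_000    # UOPS_EXECUTED.CORE
    cycles_ge_1 = 1_200_000           # UOPS_EXECUTED.CORE with cmask=1
    smt_on = True

    # Per the expression, numerator and denominator are both halved per
    # logical processor when SMT is enabled, keeping ILP a per-thread average.
    if smt_on:
        ilp = (uops_executed_core / 2) / (cycles_ge_1 / 2)
    else:
        ilp = uops_executed_core / cycles_ge_1
    print(f"ILP = {ilp:.2f} uops per execution cycle")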
tma_mem_bandwidth
  groups: BvMS;MemoryBW;Offcore;TopdownL4;tma_L4_group;tma_dram_bound_group;tma_issueBW
  expr: min(CPU_CLK_UNHALTED.THREAD, cpu@OFFCORE_REQUESTS_OUTSTANDING.ALL_DATA_RD\,cmask\=6@) / tma_info_thread_clks
  threshold: tma_mem_bandwidth > 0.2 & (tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))
  desc: This metric estimates fraction of cycles where the core's performance was likely hurt due to approaching bandwidth limits of external memory - DRAM ([SPR-HBM] and/or HBM). The underlying heuristic assumes that a similar off-core traffic is generated by all IA cores. This metric does not aggregate non-data-read requests by this logical processor; requests from other IA Logical Processors/Physical Cores/sockets, or other non-IA devices like GPU, hence the maximum external memory bandwidth limits may or may not be approached when this metric is flagged (see Uncore counters for that). Related metrics: tma_fb_full, tma_info_system_dram_bw_use, tma_sq_full
  scale: 100%

tma_memory_bound
  groups: Backend;TmaL2;TopdownL2;tma_L2_group;tma_backend_bound_group
  expr: ((min(CPU_CLK_UNHALTED.THREAD, CYCLE_ACTIVITY.STALLS_LDM_PENDING) + RESOURCE_STALLS.SB) / (min(CPU_CLK_UNHALTED.THREAD, CYCLE_ACTIVITY.CYCLES_NO_EXECUTE) + (cpu@UOPS_EXECUTED.CORE\,cmask\=1@ - (cpu@UOPS_EXECUTED.CORE\,cmask\=3@ if tma_info_thread_ipc > 1.8 else cpu@UOPS_EXECUTED.CORE\,cmask\=2@)) / 2 - (RS_EVENTS.EMPTY_CYCLES if tma_fetch_latency > 0.1 else 0) + RESOURCE_STALLS.SB) if #SMT_on else min(CPU_CLK_UNHALTED.THREAD, CYCLE_ACTIVITY.CYCLES_NO_EXECUTE) + cpu@UOPS_EXECUTED.CORE\,cmask\=1@ - (cpu@UOPS_EXECUTED.CORE\,cmask\=3@ if tma_info_thread_ipc > 1.8 else cpu@UOPS_EXECUTED.CORE\,cmask\=2@) - (RS_EVENTS.EMPTY_CYCLES if tma_fetch_latency > 0.1 else 0) + RESOURCE_STALLS.SB) * tma_backend_bound
  threshold: tma_memory_bound > 0.2 & tma_backend_bound > 0.2
  desc: This metric represents fraction of slots the Memory subsystem within the Backend was a bottleneck. Memory Bound estimates fraction of slots where pipeline is likely stalled due to demand load or store instructions. This accounts mainly for (1) non-completed in-flight memory demand loads which coincides with execution units starvation, in addition to (2) cases where stores could impose backpressure on the pipeline when many of them get buffered at the same time (less common out of the two)
  scale: 100%
tma_ports_utilization
  groups: PortsUtil;TopdownL3;tma_L3_group;tma_core_bound_group
  expr: (min(CPU_CLK_UNHALTED.THREAD, CYCLE_ACTIVITY.CYCLES_NO_EXECUTE) + (cpu@UOPS_EXECUTED.CORE\,cmask\=1@ - (cpu@UOPS_EXECUTED.CORE\,cmask\=3@ if tma_info_thread_ipc > 1.8 else cpu@UOPS_EXECUTED.CORE\,cmask\=2@)) / 2 - (RS_EVENTS.EMPTY_CYCLES if tma_fetch_latency > 0.1 else 0) + RESOURCE_STALLS.SB if #SMT_on else min(CPU_CLK_UNHALTED.THREAD, CYCLE_ACTIVITY.CYCLES_NO_EXECUTE) + cpu@UOPS_EXECUTED.CORE\,cmask\=1@ - (cpu@UOPS_EXECUTED.CORE\,cmask\=3@ if tma_info_thread_ipc > 1.8 else cpu@UOPS_EXECUTED.CORE\,cmask\=2@) - (RS_EVENTS.EMPTY_CYCLES if tma_fetch_latency > 0.1 else 0) + RESOURCE_STALLS.SB - RESOURCE_STALLS.SB - min(CPU_CLK_UNHALTED.THREAD, CYCLE_ACTIVITY.STALLS_LDM_PENDING)) / tma_info_thread_clks
  threshold: tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2)
  desc: This metric estimates fraction of cycles the CPU performance was potentially limited due to Core computation issues (non divider-related). Two distinct categories can be attributed into this metric: (1) heavy data-dependency among contiguous instructions would manifest in this metric - such cases are often referred to as low Instruction Level Parallelism (ILP); (2) contention on some hardware execution unit other than Divider, for example, when there are too many multiply operations
  scale: 100%

tma_ports_utilized_0
  groups: PortsUtil;TopdownL4;tma_L4_group;tma_ports_utilization_group
  expr: (cpu@UOPS_EXECUTED.CORE\,inv\,cmask\=1@ / 2 if #SMT_on else (min(CPU_CLK_UNHALTED.THREAD, CYCLE_ACTIVITY.CYCLES_NO_EXECUTE) - (RS_EVENTS.EMPTY_CYCLES if tma_fetch_latency > 0.1 else 0)) / tma_info_core_core_clks)
  threshold: tma_ports_utilized_0 > 0.2 & (tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))
  desc: This metric represents fraction of cycles CPU executed no uops on any execution port (Logical Processor cycles since ICL, Physical Core cycles otherwise). Long-latency instructions like divides may contribute to this metric
  scale: 100%

tma_ports_utilized_1
  groups: PortsUtil;TopdownL4;tma_L4_group;tma_issueL1;tma_ports_utilization_group
  expr: ((cpu@UOPS_EXECUTED.CORE\,cmask\=1@ - cpu@UOPS_EXECUTED.CORE\,cmask\=2@) / 2 if #SMT_on else (cpu@UOPS_EXECUTED.CORE\,cmask\=1@ - cpu@UOPS_EXECUTED.CORE\,cmask\=2@) / tma_info_core_core_clks)
  threshold: tma_ports_utilized_1 > 0.2 & (tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))
  desc: This metric represents fraction of cycles where the CPU executed a total of 1 uop per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise). This can be due to heavy data-dependency among software instructions, or oversubscribing a particular hardware resource. In some other cases with high 1_Port_Utilized and L1_Bound, this metric can point to an L1 data-cache latency bottleneck that may not necessarily manifest with complete execution starvation (due to the short L1 latency, e.g. walking a linked list) - looking at the assembly can be helpful. Related metrics: tma_l1_bound
  scale: 100%
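The ports_utilized_N expressions above bucket cycles by uop throughput using cmask differences: cmask=k counts cycles with at least k uops executed, so subtracting cmask=k+1 isolates cycles with exactly k. A sketch of that bucketing, with hypothetical counts and ignoring the SMT halving:

    # Hypothetical cycle counts; UOPS_EXECUTED.CORE with cmask=1,2,3.
    cycles_ge = {1: 1_500_000, 2: 900_000, 3: 400_000}
    core_clks = 2_000_000

    exactly_1 = cycles_ge[1] - cycles_ge[2]   # cycles with exactly 1 uop executed
    exactly_2 = cycles_ge[2] - cycles_ge[3]   # cycles with exactly 2 uops executed
    three_or_more = cycles_ge[3]              # cycles with 3 or more uops executed
    for label, c in (("1 uop", exactly_1), ("2 uops", exactly_2),
                     ("3+ uops", three_or_more)):
        print(f"{label}: {c / core_clks:.2f} of cycles")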
tma_ports_utilized_2
  groups: PortsUtil;TopdownL4;tma_L4_group;tma_issue2P;tma_ports_utilization_group
  expr: ((cpu@UOPS_EXECUTED.CORE\,cmask\=2@ - cpu@UOPS_EXECUTED.CORE\,cmask\=3@) / 2 if #SMT_on else (cpu@UOPS_EXECUTED.CORE\,cmask\=2@ - cpu@UOPS_EXECUTED.CORE\,cmask\=3@) / tma_info_core_core_clks)
  threshold: tma_ports_utilized_2 > 0.15 & (tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))
  desc: This metric represents fraction of cycles CPU executed a total of 2 uops per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise). Loop Vectorization - most compilers feature auto-Vectorization options today - reduces pressure on the execution ports as multiple elements are calculated with the same uop. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_port_0, tma_port_1, tma_port_5, tma_port_6
  scale: 100%

tma_ports_utilized_3m
  groups: BvCB;PortsUtil;TopdownL4;tma_L4_group;tma_ports_utilization_group
  expr: (cpu@UOPS_EXECUTED.CORE\,cmask\=3@ / 2 if #SMT_on else cpu@UOPS_EXECUTED.CORE\,cmask\=3@) / tma_info_core_core_clks
  threshold: tma_ports_utilized_3m > 0.4 & (tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))
  desc: This metric represents fraction of cycles CPU executed a total of 3 or more uops per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise)
  scale: 100%

io_bandwidth_write
  expr: cbox@UNC_C_TOR_INSERTS.OPCODE\,filter_opc\=0x1c8\,filter_tid\=0x3e@ * 64 / 1e6 / duration_time
  desc: Bandwidth of IO writes that are initiated by end device controllers that are writing memory to the CPU
  scale: 1MB/s

percent_uops_delivered_from_loop_stream_detector
  expr: (UOPS_ISSUED.ANY - IDQ.MITE_UOPS - IDQ.MS_UOPS - IDQ.DSB_UOPS) / UOPS_ISSUED.ANY
  desc: Uops delivered from loop stream detector (LSD) as a percent of total uops delivered to the Instruction Decode Queue
  scale: 100%

tma_4k_aliasing
  groups: TopdownL4;tma_L4_group;tma_l1_bound_group
  expr: LD_BLOCKS_PARTIAL.ADDRESS_ALIAS / tma_info_thread_clks
  threshold: tma_4k_aliasing > 0.2 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))
  desc: This metric estimates how often memory load accesses were aliased by preceding stores (in program order) with a 4K address offset. A false match is possible, which incurs a few cycles of load re-issue. However, the short re-issue duration is often hidden by the out-of-order core and HW optimizations, hence a user may safely ignore a high value of this metric unless it manages to propagate up into parent nodes of the hierarchy (e.g. to L1_Bound)
  scale: 100%
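The percent_uops_delivered_from_loop_stream_detector record above credits the LSD with whatever was issued but not attributed to the other three IDQ sources. A sketch of that subtraction, with hypothetical counts:

    # Hypothetical IDQ source counts; real values come from perf counters.
    uops_issued = 5_000_000    # UOPS_ISSUED.ANY
    mite_uops = 1_000_000      # IDQ.MITE_UOPS (legacy decode)
    ms_uops = 200_000          # IDQ.MS_UOPS (microcode sequencer)
    dsb_uops = 2_800_000       # IDQ.DSB_UOPS (uop cache)

    # Whatever was issued but not attributed to MITE, MS, or DSB is credited to the LSD.
    lsd_fraction = (uops_issued - mite_uops - ms_uops - dsb_uops) / uops_issued
    print(f"LSD-delivered uops: {100 * lsd_fraction:.1f}%")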
tma_alu_op_utilization
  groups: TopdownL5;tma_L5_group;tma_ports_utilized_3m_group
  expr: (UOPS_DISPATCHED.PORT_0 + UOPS_DISPATCHED.PORT_1 + UOPS_DISPATCHED.PORT_5 + UOPS_DISPATCHED.PORT_6) / (4 * tma_info_core_core_clks)
  threshold: tma_alu_op_utilization > 0.4
  desc: This metric represents Core fraction of cycles CPU dispatched uops on execution ports for ALU operations
  scale: 100%

tma_assists
  groups: BvIO;TopdownL4;tma_L4_group;tma_microcode_sequencer_group
  expr: 34 * ASSISTS.ANY / tma_info_thread_slots
  threshold: tma_assists > 0.1 & (tma_microcode_sequencer > 0.05 & tma_heavy_operations > 0.1)
  desc: This metric estimates fraction of slots the CPU retired uops delivered by the Microcode_Sequencer as a result of Assists. Assists are long sequences of uops that are required in certain corner-cases for operations that cannot be handled natively by the execution pipeline. For example, when working with very small floating point values (so-called Denormals), the FP units are not set up to perform these operations natively. Instead, a sequence of instructions to perform the computation on the Denormals is injected into the pipeline. Since these microcode sequences might be dozens of uops long, Assists can be extremely deleterious to performance and they can be avoided in many cases. Sample with: ASSISTS.ANY
  scale: 100%

tma_backend_bound
  groups: BvOB;Default;TmaL1;TopdownL1;tma_L1_group
  expr: topdown\-be\-bound / (topdown\-fe\-bound + topdown\-bad\-spec + topdown\-retiring + topdown\-be\-bound) + 5 * INT_MISC.CLEARS_COUNT / tma_info_thread_slots
  threshold: tma_backend_bound > 0.2
  desc: This category represents fraction of slots where no uops are being delivered due to a lack of required resources for accepting new uops in the Backend. Backend is the portion of the processor core where the out-of-order scheduler dispatches ready uops into their respective execution units, and once completed these uops get retired according to program order. For example, stalls due to data-cache misses or stalls due to the divider unit being overloaded are both categorized under Backend Bound. Backend Bound is further divided into two main categories: Memory Bound and Core Bound. Sample with: TOPDOWN.BACKEND_BOUND_SLOTS
  scale: 100%

tma_branch_instructions
  groups: Branches;BvBO;Pipeline;TopdownL3;tma_L3_group;tma_light_operations_group
  expr: tma_light_operations * BR_INST_RETIRED.ALL_BRANCHES / (tma_retiring * tma_info_thread_slots)
  threshold: tma_branch_instructions > 0.1 & tma_light_operations > 0.6
  desc: This metric represents fraction of slots where the CPU was retiring branch instructions
  scale: 100%

tma_branch_mispredicts
  groups: BadSpec;BrMispredicts;BvMP;TmaL2;TopdownL2;tma_L2_group;tma_bad_speculation_group;tma_issueBM
  expr: BR_MISP_RETIRED.ALL_BRANCHES / (BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT) * tma_bad_speculation
  threshold: tma_branch_mispredicts > 0.1 & tma_bad_speculation > 0.15
  desc: This metric represents fraction of slots the CPU has wasted due to Branch Misprediction. These slots are either wasted by uops fetched from an incorrectly speculated program path, or stalls when the out-of-order part of the machine needs to recover its state from a speculative path. Sample with: BR_MISP_RETIRED.ALL_BRANCHES. Related metrics: tma_info_bad_spec_branch_misprediction_cost, tma_info_bottleneck_mispredictions, tma_mispredicts_resteers
  scale: 100%
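The tma_backend_bound expression above normalizes the topdown\-be\-bound counter against the sum of all four fixed topdown components, then adds a per-clear slot penalty. A rough sketch, assuming (as an approximation only) that tma_info_thread_slots equals that same four-way sum:

    # Hypothetical values for the four topdown slot components plus the
    # clears adjustment; real values come from the perf topdown counters.
    td = {"fe_bound": 2.0e6, "bad_spec": 0.5e6, "retiring": 4.0e6, "be_bound": 3.5e6}
    clears_count = 10_000          # INT_MISC.CLEARS_COUNT
    slots = sum(td.values())       # stand-in for tma_info_thread_slots

    backend_bound = td["be_bound"] / slots + 5 * clears_count / slots
    print(f"tma_backend_bound = {backend_bound:.3f} of slots")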
tma_contested_accesses
  groups: BvMS;DataSharing;Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_l3_bound_group
  expr: (29 * tma_info_system_core_frequency * MEM_LOAD_L3_HIT_RETIRED.XSNP_HITM + 23.5 * tma_info_system_core_frequency * MEM_LOAD_L3_HIT_RETIRED.XSNP_MISS) * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS / 2) / tma_info_thread_clks
  threshold: tma_contested_accesses > 0.05 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))
  desc: This metric estimates fraction of cycles while the memory subsystem was handling synchronizations due to contested accesses. Contested accesses occur when data written by one Logical Processor are read by another Logical Processor on a different Physical Core. Examples of contested accesses include synchronizations such as locks, true data sharing such as modified locked variables, and false sharing. Sample with: MEM_LOAD_L3_HIT_RETIRED.XSNP_HITM_PS;MEM_LOAD_L3_HIT_RETIRED.XSNP_MISS_PS. Related metrics: tma_data_sharing, tma_false_sharing, tma_machine_clears, tma_remote_cache
  scale: 100%

tma_data_sharing
  groups: BvMS;Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_l3_bound_group
  expr: 23.5 * tma_info_system_core_frequency * MEM_LOAD_L3_HIT_RETIRED.XSNP_HIT * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS / 2) / tma_info_thread_clks
  threshold: tma_data_sharing > 0.05 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))
  desc: This metric estimates fraction of cycles while the memory subsystem was handling synchronizations due to data-sharing accesses. Data shared by multiple Logical Processors (even just read shared) may cause increased access latency due to cache coherency. Excessive data sharing can drastically harm multithreaded performance. Sample with: MEM_LOAD_L3_HIT_RETIRED.XSNP_HIT_PS. Related metrics: tma_contested_accesses, tma_false_sharing, tma_machine_clears, tma_remote_cache
  scale: 100%

tma_dram_bound
  groups: MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_memory_bound_group
  expr: CYCLE_ACTIVITY.STALLS_L3_MISS / tma_info_thread_clks + (CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS) / tma_info_thread_clks - tma_l2_bound
  threshold: tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)
  desc: This metric estimates how often the CPU was stalled on accesses to external memory (DRAM) by loads. Better caching can improve the latency and increase performance. Sample with: MEM_LOAD_RETIRED.L3_MISS_PS
  scale: 100%

tma_dtlb_load
  groups: BvMT;MemoryTLB;TopdownL4;tma_L4_group;tma_issueTLB;tma_l1_bound_group
  expr: min(7 * cpu@DTLB_LOAD_MISSES.STLB_HIT\,cmask\=1@ + DTLB_LOAD_MISSES.WALK_ACTIVE, max(CYCLE_ACTIVITY.CYCLES_MEM_ANY - CYCLE_ACTIVITY.CYCLES_L1D_MISS, 0)) / tma_info_thread_clks
  threshold: tma_dtlb_load > 0.1 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))
  desc: This metric roughly estimates the fraction of cycles where the Data TLB (DTLB) was missed by load accesses. TLBs (Translation Look-aside Buffers) are processor caches for recently used entries out of the Page Tables that are used to map virtual- to physical-addresses by the operating system. This metric approximates the potential delay of demand loads missing the first-level data TLB (assuming worst case scenario with back to back misses to different pages). This includes hitting in the second-level TLB (STLB) as well as performing a hardware page walk on an STLB miss. Sample with: MEM_INST_RETIRED.STLB_MISS_LOADS_PS. Related metrics: tma_dtlb_store, tma_info_bottleneck_memory_data_tlbs, tma_info_bottleneck_memory_synchronization
  scale: 100%
tma_false_sharing
  groups: BvMS;DataSharing;Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_store_bound_group
  expr: 32.5 * tma_info_system_core_frequency * OCR.DEMAND_RFO.L3_HIT.SNOOP_HITM / tma_info_thread_clks
  threshold: tma_false_sharing > 0.05 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))
  desc: This metric roughly estimates how often CPU was handling synchronizations due to False Sharing. False Sharing is a multithreading hiccup, where multiple Logical Processors contend on different data-elements mapped into the same cache line. Sample with: OCR.DEMAND_RFO.L3_HIT.SNOOP_HITM. Related metrics: tma_contested_accesses, tma_data_sharing, tma_machine_clears, tma_remote_cache
  scale: 100%

tma_fetch_latency
  groups: Frontend;TmaL2;TopdownL2;tma_L2_group;tma_frontend_bound_group
  expr: (5 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE - INT_MISC.UOP_DROPPING) / tma_info_thread_slots
  threshold: tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15
  desc: This metric represents fraction of slots the CPU was stalled due to Frontend latency issues. For example, instruction-cache misses, iTLB misses or fetch stalls after a branch misprediction are categorized under Frontend Latency. In such cases, the Frontend eventually delivers no uops for some period. Sample with: FRONTEND_RETIRED.LATENCY_GE_16_PS;FRONTEND_RETIRED.LATENCY_GE_8_PS
  scale: 100%

tma_few_uops_instructions
  groups: TopdownL3;tma_L3_group;tma_heavy_operations_group;tma_issueD0
  expr: tma_heavy_operations - tma_microcode_sequencer
  threshold: tma_few_uops_instructions > 0.05 & tma_heavy_operations > 0.1
  desc: This metric represents fraction of slots where the CPU was retiring instructions that are decoded into two or up to ([SNB+] four; [ADL+] five) uops. This highly-correlates with the number of uops in such instructions. Related metrics: tma_decoder0_alone
  scale: 100%

tma_fp_assists
  groups: HPC;TopdownL5;tma_L5_group;tma_assists_group
  expr: 34 * ASSISTS.FP / tma_info_thread_slots
  threshold: tma_fp_assists > 0.1
  desc: This metric roughly estimates fraction of slots the CPU retired uops as a result of handling Floating Point (FP) Assists. FP Assist may apply when working with very small floating point values (so-called Denormals)
  scale: 100%
tma_fp_scalar
  groups: Compute;Flops;TopdownL4;tma_L4_group;tma_fp_arith_group;tma_issue2P
  expr: FP_ARITH_INST_RETIRED.SCALAR / (tma_retiring * tma_info_thread_slots)
  threshold: tma_fp_scalar > 0.1 & (tma_fp_arith > 0.2 & tma_light_operations > 0.6)
  desc: This metric approximates arithmetic floating-point (FP) scalar uops fraction the CPU has retired. May overcount due to FMA double counting. Related metrics: tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_port_0, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2
  scale: 100%

tma_fp_vector
  groups: Compute;Flops;TopdownL4;tma_L4_group;tma_fp_arith_group;tma_issue2P
  expr: FP_ARITH_INST_RETIRED.VECTOR / (tma_retiring * tma_info_thread_slots)
  threshold: tma_fp_vector > 0.1 & (tma_fp_arith > 0.2 & tma_light_operations > 0.6)
  desc: This metric approximates arithmetic floating-point (FP) vector uops fraction the CPU has retired aggregated across all vector widths. May overcount due to FMA double counting. Related metrics: tma_fp_scalar, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_port_0, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2
  scale: 100%

tma_fp_vector_128b
  groups: Compute;Flops;TopdownL5;tma_L5_group;tma_fp_vector_group;tma_issue2P
  expr: (FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE) / (tma_retiring * tma_info_thread_slots)
  threshold: tma_fp_vector_128b > 0.1 & (tma_fp_vector > 0.1 & (tma_fp_arith > 0.2 & tma_light_operations > 0.6))
  desc: This metric approximates arithmetic FP vector uops fraction the CPU has retired for 128-bit wide vectors. May overcount due to FMA double counting. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_256b, tma_fp_vector_512b, tma_port_0, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2
  scale: 100%

tma_fp_vector_256b
  groups: Compute;Flops;TopdownL5;tma_L5_group;tma_fp_vector_group;tma_issue2P
  expr: (FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE) / (tma_retiring * tma_info_thread_slots)
  threshold: tma_fp_vector_256b > 0.1 & (tma_fp_vector > 0.1 & (tma_fp_arith > 0.2 & tma_light_operations > 0.6))
  desc: This metric approximates arithmetic FP vector uops fraction the CPU has retired for 256-bit wide vectors. May overcount due to FMA double counting. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_512b, tma_port_0, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2
  scale: 100%

tma_fp_vector_512b
  groups: Compute;Flops;TopdownL5;tma_L5_group;tma_fp_vector_group;tma_issue2P
  expr: (FP_ARITH_INST_RETIRED.512B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE) / (tma_retiring * tma_info_thread_slots)
  threshold: tma_fp_vector_512b > 0.1 & (tma_fp_vector > 0.1 & (tma_fp_arith > 0.2 & tma_light_operations > 0.6))
  desc: This metric approximates arithmetic FP vector uops fraction the CPU has retired for 512-bit wide vectors. May overcount due to FMA double counting. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_port_0, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2
  scale: 100%
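The tma_fp_* records above all divide an FP_ARITH_INST_RETIRED class by the retired-slot total (tma_retiring * tma_info_thread_slots). A sketch of that breakdown, with hypothetical counts (retiring_slots stands in for the denominator):

    # Hypothetical retired-uop counts per FP class; real values come from
    # the FP_ARITH_INST_RETIRED sub-events.
    retiring_slots = 10_000_000    # stand-in for tma_retiring * tma_info_thread_slots
    fp = {
        "scalar": 1_000_000,       # FP_ARITH_INST_RETIRED.SCALAR
        "128b": 400_000,           # .128B_PACKED_DOUBLE + .128B_PACKED_SINGLE
        "256b": 300_000,           # .256B_PACKED_DOUBLE + .256B_PACKED_SINGLE
        "512b": 100_000,           # .512B_PACKED_DOUBLE + .512B_PACKED_SINGLE
    }
    for cls, uops in fp.items():
        print(f"tma_fp_{cls}: {uops / retiring_slots:.3f} of retired slots")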
tma_heavy_operations
  groups: Retire;TmaL2;TopdownL2;tma_L2_group;tma_retiring_group
  expr: tma_microcode_sequencer + tma_retiring * (UOPS_DECODED.DEC0 - cpu@UOPS_DECODED.DEC0\,cmask\=1@) / IDQ.MITE_UOPS
  threshold: tma_heavy_operations > 0.1
  desc: This metric represents fraction of slots where the CPU was retiring heavy-weight operations -- instructions that require two or more uops or micro-coded sequences. This highly-correlates with the uop length of these instructions/sequences. ([ICL+] Note this may overcount due to approximation using indirect events; [ADL+].)
  scale: 100%

tma_info_bad_spec_branch_misprediction_cost
  groups: Bad;BrMispredicts;tma_issueBM
  expr: tma_info_bottleneck_mispredictions * tma_info_thread_slots / BR_MISP_RETIRED.ALL_BRANCHES / 100
  desc: Branch Misprediction Cost: Fraction of TMA slots wasted per non-speculative branch misprediction (retired JEClear). Related metrics: tma_branch_mispredicts, tma_info_bottleneck_mispredictions, tma_mispredicts_resteers

tma_info_botlnk_core_bound_likely
  groups: Cor;Metric;SMT
  expr: tma_info_botlnk_l0_core_bound_likely
  threshold: tma_info_botlnk_core_bound_likely > 0.5
  desc: Probability of Core Bound bottleneck hidden by SMT-profiling artifacts

tma_info_botlnk_dsb_misses
  groups: DSBmiss;Fed;Scaled_Slots;tma_issueFB
  expr: 100 * (tma_fetch_latency * tma_dsb_switches / (tma_icache_misses + tma_itlb_misses + tma_branch_resteers + tma_ms_switches + tma_lcp + tma_dsb_switches) + tma_fetch_bandwidth * tma_mite / (tma_mite + tma_dsb + tma_lsd))
  threshold: tma_info_botlnk_dsb_misses > 10
  desc: Total pipeline cost of DSB (uop cache) misses - subset of the Instruction_Fetch_BW Bottleneck

tma_info_botlnk_ic_misses
  groups: Fed;FetchLat;IcMiss;Scaled_Slots;tma_issueFL
  expr: 100 * (tma_fetch_latency * tma_icache_misses / (tma_icache_misses + tma_itlb_misses + tma_branch_resteers + tma_ms_switches + tma_lcp + tma_dsb_switches))
  threshold: tma_info_botlnk_ic_misses > 5
  desc: Total pipeline cost of Instruction Cache misses - subset of the Big_Code Bottleneck

tma_info_botlnk_l2_dsb_misses
  groups: DSBmiss;Fed;tma_issueFB
  expr: 100 * (tma_fetch_latency * tma_dsb_switches / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches) + tma_fetch_bandwidth * tma_mite / (tma_dsb + tma_lsd + tma_mite))
  threshold: tma_info_botlnk_l2_dsb_misses > 10
  desc: Total pipeline cost of DSB (uop cache) misses - subset of the Instruction_Fetch_BW Bottleneck. Related metrics: tma_dsb_switches, tma_fetch_bandwidth, tma_info_botlnk_l2_dsb_bandwidth, tma_info_frontend_dsb_coverage, tma_info_inst_mix_iptb, tma_lcp

tma_info_botlnk_l2_ic_misses
  groups: Fed;FetchLat;IcMiss;tma_issueFL
  expr: 100 * (tma_fetch_latency * tma_icache_misses / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches))
  threshold: tma_info_botlnk_l2_ic_misses > 5
  desc: Total pipeline cost of Instruction Cache misses - subset of the Big_Code Bottleneck
tma_info_bottleneck_cache_memory_bandwidth
  groups: BvMB;Mem;MemoryBW;Offcore;tma_issueBW
  expr: 100 * (tma_memory_bound * (tma_dram_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_mem_bandwidth / (tma_mem_bandwidth + tma_mem_latency)) + tma_memory_bound * (tma_l3_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_sq_full / (tma_contested_accesses + tma_data_sharing + tma_l3_hit_latency + tma_sq_full)) + tma_memory_bound * (tma_l1_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_fb_full / (tma_4k_aliasing + tma_dtlb_load + tma_fb_full + tma_l1_hit_latency + tma_lock_latency + tma_split_loads + tma_store_fwd_blk)))
  threshold: tma_info_bottleneck_cache_memory_bandwidth > 20
  desc: Total pipeline cost of external Memory- or Cache-Bandwidth related bottlenecks. Related metrics: tma_fb_full, tma_info_system_dram_bw_use, tma_mem_bandwidth, tma_sq_full

tma_info_bottleneck_cache_memory_latency
  groups: BvML;Mem;MemoryLat;Offcore;tma_issueLat
  expr: 100 * (tma_memory_bound * (tma_dram_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_mem_latency / (tma_mem_bandwidth + tma_mem_latency)) + tma_memory_bound * (tma_l3_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_l3_hit_latency / (tma_contested_accesses + tma_data_sharing + tma_l3_hit_latency + tma_sq_full)) + tma_memory_bound * tma_l2_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound) + tma_memory_bound * (tma_store_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_store_latency / (tma_dtlb_store + tma_false_sharing + tma_split_stores + tma_store_latency + tma_streaming_stores)) + tma_memory_bound * (tma_l1_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_l1_hit_latency / (tma_4k_aliasing + tma_dtlb_load + tma_fb_full + tma_l1_hit_latency + tma_lock_latency + tma_split_loads + tma_store_fwd_blk)))
  threshold: tma_info_bottleneck_cache_memory_latency > 20
  desc: Total pipeline cost of external Memory- or Cache-Latency related bottlenecks. Related metrics: tma_l3_hit_latency, tma_mem_latency

tma_info_bottleneck_memory_data_tlbs
  groups: BvMT;Mem;MemoryTLB;Offcore;tma_issueTLB
  expr: 100 * (tma_memory_bound * (tma_l1_bound / max(tma_memory_bound, tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_dtlb_load / max(tma_l1_bound, tma_4k_aliasing + tma_dtlb_load + tma_fb_full + tma_l1_hit_latency + tma_lock_latency + tma_split_loads + tma_store_fwd_blk)) + tma_memory_bound * (tma_store_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_dtlb_store / (tma_dtlb_store + tma_false_sharing + tma_split_stores + tma_store_latency + tma_streaming_stores)))
  threshold: tma_info_bottleneck_memory_data_tlbs > 20
  desc: Total pipeline cost of Memory Address Translation related bottlenecks (data-side TLBs). Related metrics: tma_dtlb_load, tma_dtlb_store, tma_info_bottleneck_memory_synchronization
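These tma_info_bottleneck_* expressions all follow one roll-up pattern: a leaf node's cost is its parent's weight multiplied by the leaf's share among its siblings, chained down the TMA tree and scaled to 100. A simplified sketch of one term of that chain, using hypothetical node weights rather than the full formula:

    # Sketch of the roll-up pattern used by the tma_info_bottleneck_* formulas.
    # All numbers are hypothetical node weights (fractions of slots).
    memory_bound = 0.35                      # L2-level parent weight
    l3_bound_share = 0.40                    # tma_l3_bound's share among L3-level siblings
    l3_siblings = {"contested": 0.02, "data_sharing": 0.03,
                   "l3_hit_latency": 0.05, "sq_full": 0.10}

    sq_full_cost = 100 * (memory_bound * l3_bound_share *
                          l3_siblings["sq_full"] / sum(l3_siblings.values()))
    print(f"SQ-full contribution to the bandwidth bottleneck: {sq_full_cost:.1f}")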
tma_info_core_flopc
  groups: Flops;Ret
  expr: (FP_ARITH_INST_RETIRED.SCALAR + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + 4 * FP_ARITH_INST_RETIRED.4_FLOPS + 8 * FP_ARITH_INST_RETIRED.8_FLOPS + 16 * FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE) / tma_info_core_core_clks
  desc: Floating Point Operations Per Cycle

tma_info_frontend_dsb_coverage
  groups: DSB;Fed;FetchBW;tma_issueFB
  expr: IDQ.DSB_UOPS / UOPS_ISSUED.ANY
  threshold: tma_info_frontend_dsb_coverage < 0.7 & tma_info_thread_ipc / 5 > 0.35
  desc: Fraction of Uops delivered by the DSB (aka Decoded ICache, or Uop Cache). Related metrics: tma_dsb_switches, tma_fetch_bandwidth, tma_info_botlnk_l2_dsb_bandwidth, tma_info_botlnk_l2_dsb_misses, tma_info_inst_mix_iptb, tma_lcp

tma_info_frontend_icache_miss_latency
  groups: Fed;FetchLat;IcMiss
  expr: ICACHE_16B.IFDATA_STALL / cpu@ICACHE_16B.IFDATA_STALL\,cmask\=1\,edge@
  desc: Average Latency for L1 instruction cache misses

tma_info_frontend_lsd_coverage
  groups: Fed;LSD
  expr: LSD.UOPS / UOPS_ISSUED.ANY
  desc: Fraction of Uops delivered by the LSD (Loop Stream Detector, aka Loop Cache)

tma_info_inst_mix_ipflop
  groups: Flops;InsType
  expr: INST_RETIRED.ANY / (FP_ARITH_INST_RETIRED.SCALAR + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + 4 * FP_ARITH_INST_RETIRED.4_FLOPS + 8 * FP_ARITH_INST_RETIRED.8_FLOPS + 16 * FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE)
  threshold: tma_info_inst_mix_ipflop < 10
  desc: Instructions per Floating Point (FP) Operation (lower number means higher occurrence rate)

tma_info_inst_mix_ippause
  groups: Flops;FpVector;InsType
  expr: tma_info_inst_mix_instructions / MISC_RETIRED.PAUSE_INST
  desc: Instructions per PAUSE (lower number means higher occurrence rate)

tma_info_inst_mix_iptb
  groups: Branches;Fed;FetchBW;Frontend;PGO;tma_issueFB
  expr: INST_RETIRED.ANY / BR_INST_RETIRED.NEAR_TAKEN
  threshold: tma_info_inst_mix_iptb < 11
  desc: Instructions per taken branch. Related metrics: tma_dsb_switches, tma_fetch_bandwidth, tma_info_botlnk_l2_dsb_bandwidth, tma_info_botlnk_l2_dsb_misses, tma_info_frontend_dsb_coverage, tma_lcp
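The tma_info_core_flopc expression above weights each FP_ARITH_INST_RETIRED class by the number of floating point operations a single instruction of that class performs (scalar counts 1, 128-bit packed double 2, the 4_FLOPS and 8_FLOPS aggregates 4 and 8, 512-bit packed single 16). A sketch of that weighted sum, with hypothetical counter values:

    # Hypothetical counter values; real values come from FP_ARITH_INST_RETIRED.
    scalar, p128d, f4, f8, p512s = 1_000_000, 200_000, 150_000, 80_000, 20_000
    core_clks = 2_000_000          # tma_info_core_core_clks

    flops = scalar + 2 * p128d + 4 * f4 + 8 * f8 + 16 * p512s
    print(f"FLOPc = {flops / core_clks:.2f} floating point operations per cycle")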
tma_info_memory_bus_lock_pki
  groups: Mem;Metric
  expr: tma_info_memory_mix_bus_lock_pki
  desc: "Bus lock" per kilo instruction

tma_info_memory_code_stlb_mpki
  groups: Fed;MemoryTLB;Metric
  expr: tma_info_memory_tlb_code_stlb_mpki
  desc: STLB (2nd level TLB) code speculative misses per kilo instruction (misses of any page-size that complete the page walk)

tma_info_memory_data_l2_mlp
  groups: Memory_BW;Metric;Offcore
  expr: tma_info_memory_latency_data_l2_mlp
  desc: Average Parallel L2 cache miss data reads

tma_info_memory_l1d_cache_fill_bw_2t
  groups: Core_Metric;Mem;MemoryBW
  expr: tma_info_memory_l1d_cache_fill_bw
  desc: Average per-core data fill bandwidth to the L1 data cache [GB / sec]

tma_info_memory_l2_cache_fill_bw_2t
  groups: Core_Metric;Mem;MemoryBW
  expr: tma_info_memory_l2_cache_fill_bw
  desc: Average per-core data fill bandwidth to the L2 cache [GB / sec]

tma_info_memory_l2mpki_all
  groups: CacheHits;Mem;Offcore
  expr: 1e3 * (OFFCORE_REQUESTS.ALL_DATA_RD - OFFCORE_REQUESTS.DEMAND_DATA_RD + L2_RQSTS.ALL_DEMAND_MISS + L2_RQSTS.SWPF_MISS) / tma_info_inst_mix_instructions
  desc: L2 cache ([RKL+] true) misses per kilo instruction for all request types (including speculative)

tma_info_memory_l3_cache_access_bw_2t
  groups: Core_Metric;Mem;MemoryBW;Offcore
  expr: tma_info_memory_l3_cache_access_bw
  desc: Average per-core data access bandwidth to the L3 cache [GB / sec]

tma_info_memory_l3_cache_fill_bw_2t
  groups: Core_Metric;Mem;MemoryBW
  expr: tma_info_memory_l3_cache_fill_bw
  desc: Average per-core data fill bandwidth to the L3 cache [GB / sec]

tma_info_memory_latency_load_l2_miss_latency
  groups: Memory_Lat;Offcore
  expr: tma_info_memory_load_l2_miss_latency
  desc: Average Latency for L2 cache miss demand Loads

tma_info_memory_latency_load_l3_miss_latency
  groups: Memory_Lat;Offcore
  expr: cpu@OFFCORE_REQUESTS_OUTSTANDING.DEMAND_DATA_RD\,umask\=0x10@ / OFFCORE_REQUESTS.L3_MISS_DEMAND_DATA_RD
  desc: Average Latency for L3 cache miss demand Loads

tma_info_memory_load_l2_miss_latency
  groups: Clocks_Latency;Memory_Lat;Offcore
  expr: OFFCORE_REQUESTS_OUTSTANDING.DEMAND_DATA_RD / OFFCORE_REQUESTS.DEMAND_DATA_RD
  desc: Average Latency for L2 cache miss demand Loads

tma_info_memory_load_l2_mlp
  groups: Memory_BW;Metric;Offcore
  expr: OFFCORE_REQUESTS_OUTSTANDING.DEMAND_DATA_RD / cpu@OFFCORE_REQUESTS_OUTSTANDING.DEMAND_DATA_RD\,cmask\=0x1@
  desc: Average Parallel L2 cache miss demand Loads

tma_info_memory_load_l3_miss_latency
  groups: Clocks_Latency;Memory_Lat;Offcore
  expr: cpu@OFFCORE_REQUESTS_OUTSTANDING.DEMAND_DATA_RD\,umask\=0x0@ / OFFCORE_REQUESTS.L3_MISS_DEMAND_DATA_RD
  desc: Average Latency for L3 cache miss demand Loads

tma_info_memory_load_stlb_mpki
  groups: Mem;MemoryTLB;Metric
  expr: tma_info_memory_tlb_load_stlb_mpki
  desc: STLB (2nd level TLB) data load speculative misses per kilo instruction (misses of any page-size that complete the page walk)

tma_info_memory_mix_uc_load_pki
  groups: Mem
  expr: tma_info_memory_uc_load_pki
  desc: Un-cacheable retired load per kilo instruction

tma_info_memory_page_walks_utilization
  groups: Core_Metric;Mem;MemoryTLB
  expr: tma_info_memory_tlb_page_walks_utilization
  threshold: tma_info_memory_page_walks_utilization > 0.5
  desc: Utilization of the core's Page Walker(s) serving STLB misses triggered by instruction/Load/Store accesses

tma_info_memory_store_stlb_mpki
  groups: Mem;MemoryTLB;Metric
  expr: tma_info_memory_tlb_store_stlb_mpki
  desc: STLB (2nd level TLB) data store speculative misses per kilo instruction (misses of any page-size that complete the page walk)

tma_info_memory_tlb_page_walks_utilization
  groups: Mem;MemoryTLB
  expr: (ITLB_MISSES.WALK_PENDING + DTLB_LOAD_MISSES.WALK_PENDING + DTLB_STORE_MISSES.WALK_PENDING) / (2 * tma_info_core_core_clks)
  threshold: tma_info_memory_tlb_page_walks_utilization > 0.5
  desc: Utilization of the core's Page Walker(s) serving STLB misses triggered by instruction/Load/Store accesses
tma_info_memory_uc_load_pki
  groups: Mem;Metric
  expr: 1e3 * MEM_LOAD_MISC_RETIRED.UC / INST_RETIRED.ANY
  desc: Un-cacheable retired load per kilo instruction

tma_info_pipeline_fetch_lsd
  groups: Fed;FetchBW
  expr: LSD.UOPS / LSD.CYCLES_ACTIVE
  desc: Average number of uops fetched from LSD per cycle

tma_info_system_gflops
  groups: Cor;Flops;HPC
  expr: (FP_ARITH_INST_RETIRED.SCALAR + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + 4 * FP_ARITH_INST_RETIRED.4_FLOPS + 8 * FP_ARITH_INST_RETIRED.8_FLOPS + 16 * FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE) / 1e9 / duration_time
  desc: Giga Floating Point Operations Per Second. Aggregate across all supported options of: FP precisions, scalar and vector instructions, vector-width

tma_info_system_power_license0_utilization
  groups: Power
  expr: CORE_POWER.LVL0_TURBO_LICENSE / tma_info_core_core_clks
  desc: Fraction of Core cycles where the core was running with power-delivery for baseline license level 0. This includes non-AVX codes, SSE, AVX 128-bit, and low-current AVX 256-bit codes

tma_info_system_power_license1_utilization
  groups: Power
  expr: CORE_POWER.LVL1_TURBO_LICENSE / tma_info_core_core_clks
  threshold: tma_info_system_power_license1_utilization > 0.5
  desc: Fraction of Core cycles where the core was running with power-delivery for license level 1. This includes high current AVX 256-bit instructions as well as low current AVX 512-bit instructions

tma_info_system_power_license2_utilization
  groups: Power
  expr: CORE_POWER.LVL2_TURBO_LICENSE / tma_info_core_core_clks
  threshold: tma_info_system_power_license2_utilization > 0.5
  desc: Fraction of Core cycles where the core was running with power-delivery for license level 2 (introduced in SKX). This includes high current AVX 512-bit instructions

tma_info_thread_uptb
  groups: Branches;Fed;FetchBW
  expr: tma_retiring * tma_info_thread_slots / BR_INST_RETIRED.NEAR_TAKEN
  threshold: tma_info_thread_uptb < 7.5
  desc: Uops per taken branch

tma_l2_bound
  groups: BvML;CacheHits;MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_memory_bound_group
  expr: MEM_LOAD_RETIRED.L2_HIT * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) / (MEM_LOAD_RETIRED.L2_HIT * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) + L1D_PEND_MISS.FB_FULL_PERIODS) * ((CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS) / tma_info_thread_clks)
  threshold: tma_l2_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)
  desc: This metric estimates how often the CPU was stalled due to L2 cache accesses by loads. Avoiding cache misses (i.e. L1 misses/L2 hits) can improve the latency and increase performance. Sample with: MEM_LOAD_RETIRED.L2_HIT_PS
  scale: 100%

tma_l3_bound
  groups: CacheHits;MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_memory_bound_group
  expr: (CYCLE_ACTIVITY.STALLS_L2_MISS - CYCLE_ACTIVITY.STALLS_L3_MISS) / tma_info_thread_clks
  threshold: tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)
  desc: This metric estimates how often the CPU was stalled due to load accesses to L3 cache or contended with a sibling Core. Avoiding cache misses (i.e. L2 misses/L3 hits) can improve the latency and increase performance. Sample with: MEM_LOAD_RETIRED.L3_HIT_PS
  scale: 100%
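The second tma_l2_bound formula above attributes L1-miss stall cycles to L2 in proportion to L2 hits (scaled up for fill-buffer hits) versus fill-buffer-full periods. A sketch of that weighting, with hypothetical counts:

    # Hypothetical counts; real values come from perf counters.
    l2_hit, fb_hit, l1_miss = 500_000, 100_000, 700_000   # MEM_LOAD_RETIRED.*
    fb_full_periods = 50_000                              # L1D_PEND_MISS.FB_FULL_PERIODS
    stalls_l1d_miss, stalls_l2_miss = 900_000, 400_000    # CYCLE_ACTIVITY.*
    clks = 2_000_000                                      # tma_info_thread_clks

    weighted_l2_hit = l2_hit * (1 + fb_hit / l1_miss)
    share = weighted_l2_hit / (weighted_l2_hit + fb_full_periods)
    tma_l2_bound = share * (stalls_l1d_miss - stalls_l2_miss) / clks
    print(f"tma_l2_bound = {tma_l2_bound:.3f}")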
tma_l3_hit_latency
  groups: BvML;MemoryLat;TopdownL4;tma_L4_group;tma_issueLat;tma_l3_bound_group
  expr: 9 * tma_info_system_core_frequency * (MEM_LOAD_RETIRED.L3_HIT * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS / 2)) / tma_info_thread_clks
  threshold: tma_l3_hit_latency > 0.1 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))
  desc: This metric estimates fraction of cycles with demand load accesses that hit the L3 cache under unloaded scenarios (possibly L3 latency limited). Avoiding private cache misses (i.e. L2 misses/L3 hits) will improve the latency, reduce contention with sibling physical cores and increase performance. Note the value of this node may overlap with its siblings. Sample with: MEM_LOAD_RETIRED.L3_HIT_PS. Related metrics: tma_info_bottleneck_cache_memory_latency, tma_mem_latency
  scale: 100%

tma_load_op_utilization
  groups: TopdownL5;tma_L5_group;tma_ports_utilized_3m_group
  expr: UOPS_DISPATCHED.PORT_2_3 / (2 * tma_info_core_core_clks)
  threshold: tma_load_op_utilization > 0.6
  desc: This metric represents Core fraction of cycles CPU dispatched uops on execution port for Load operations. Sample with: UOPS_DISPATCHED.PORT_2_3
  scale: 100%

tma_lock_latency
  groups: Offcore;TopdownL4;tma_L4_group;tma_issueRFO;tma_l1_bound_group
  expr: (16 * max(0, MEM_INST_RETIRED.LOCK_LOADS - L2_RQSTS.ALL_RFO) + MEM_INST_RETIRED.LOCK_LOADS / MEM_INST_RETIRED.ALL_STORES * (10 * L2_RQSTS.RFO_HIT + min(CPU_CLK_UNHALTED.THREAD, OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_RFO))) / tma_info_thread_clks
  threshold: tma_lock_latency > 0.2 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))
  desc: This metric represents fraction of cycles the CPU spent handling cache misses due to lock operations. Due to the microarchitecture handling of locks, they are classified as L1_Bound regardless of what memory source satisfied them. Sample with: MEM_INST_RETIRED.LOCK_LOADS. Related metrics: tma_store_latency
  scale: 100%

tma_lsd
  groups: FetchBW;LSD;TopdownL3;tma_L3_group;tma_fetch_bandwidth_group
  expr: (LSD.CYCLES_ACTIVE - LSD.CYCLES_OK) / tma_info_core_core_clks / 2
  threshold: tma_lsd > 0.15 & tma_fetch_bandwidth > 0.2
  desc: This metric represents Core fraction of cycles in which CPU was likely limited due to the LSD (Loop Stream Detector) unit. LSD typically does well sustaining Uop supply. However, in some rare cases, optimal uop-delivery could not be reached for small loops whose size (in terms of number of uops) does not suit well the LSD structure
  scale: 100%

tma_memory_bound
  groups: Backend;TmaL2;TopdownL2;tma_L2_group;tma_backend_bound_group
  expr: (CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIVITY.BOUND_ON_STORES) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORTS_UTIL + tma_retiring * EXE_ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES) * tma_backend_bound
  threshold: tma_memory_bound > 0.2 & tma_backend_bound > 0.2
  desc: This metric represents fraction of slots the Memory subsystem within the Backend was a bottleneck. Memory Bound estimates fraction of slots where pipeline is likely stalled due to demand load or store instructions. This accounts mainly for (1) non-completed in-flight memory demand loads which coincides with execution units starvation, in addition to (2) cases where stores could impose backpressure on the pipeline when many of them get buffered at the same time (less common out of the two)
  scale: 100%
tma_memory_operations
  groups: Pipeline;TopdownL3;tma_L3_group;tma_light_operations_group
  expr: tma_light_operations * MEM_INST_RETIRED.ANY / INST_RETIRED.ANY
  threshold: tma_memory_operations > 0.1 & tma_light_operations > 0.6
  desc: This metric represents fraction of slots where the CPU was retiring memory operations -- uops for memory load or store accesses
  scale: 100%

tma_microcode_sequencer
  groups: MicroSeq;TopdownL3;tma_L3_group;tma_heavy_operations_group;tma_issueMC;tma_issueMS
  expr: UOPS_RETIRED.SLOTS / UOPS_ISSUED.ANY * IDQ.MS_UOPS / tma_info_thread_slots
  threshold: tma_microcode_sequencer > 0.05 & tma_heavy_operations > 0.1
  desc: This metric represents fraction of slots the CPU was retiring uops fetched by the Microcode Sequencer (MS) unit. The MS is used for CISC instructions not supported by the default decoders (like repeat move strings, or CPUID), or by microcode assists used to address some operation modes (like in Floating Point assists). These cases can often be avoided. Sample with: IDQ.MS_UOPS. Related metrics: tma_clears_resteers, tma_info_bottleneck_irregular_overhead, tma_l1_bound, tma_machine_clears, tma_ms_switches
  scale: 100%

tma_mite_4wide
  groups: DSBmiss;FetchBW;TopdownL4;tma_L4_group;tma_mite_group
  expr: (cpu@IDQ.MITE_UOPS\,cmask\=4@ - cpu@IDQ.MITE_UOPS\,cmask\=5@) / tma_info_thread_clks
  threshold: tma_mite_4wide > 0.05 & (tma_mite > 0.1 & tma_fetch_bandwidth > 0.2)
  desc: This metric represents fraction of cycles where (only) 4 uops were delivered by the MITE pipeline
  scale: 100%

tma_ms_switches
  groups: FetchLat;MicroSeq;TopdownL3;tma_L3_group;tma_fetch_latency_group;tma_issueMC;tma_issueMS;tma_issueMV;tma_issueSO
  expr: 3 * IDQ.MS_SWITCHES / tma_info_thread_clks
  threshold: tma_ms_switches > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15)
  desc: This metric estimates the fraction of cycles when the CPU was stalled due to switches of uop delivery to the Microcode Sequencer (MS). Commonly used instructions are optimized for delivery by the DSB (decoded i-cache) or MITE (legacy instruction decode) pipelines. Certain operations cannot be handled natively by the execution pipeline, and must be performed by microcode (small programs injected into the execution stream). Switching to the MS too often can negatively impact performance. The MS is designated to deliver long uop flows required by CISC instructions like CPUID, or uncommon conditions like Floating Point Assists when dealing with Denormals. Sample with: IDQ.MS_SWITCHES. Related metrics: tma_clears_resteers, tma_info_bottleneck_irregular_overhead, tma_l1_bound, tma_machine_clears, tma_microcode_sequencer, tma_mixing_vectors, tma_serializing_operation
  scale: 100%

tma_other_light_ops
  groups: Pipeline;TopdownL3;tma_L3_group;tma_light_operations_group
  expr: max(0, tma_light_operations - (tma_fp_arith + tma_memory_operations + tma_branch_instructions))
  threshold: tma_other_light_ops > 0.3 & tma_light_operations > 0.6
  desc: This metric represents the remaining light uops fraction the CPU has executed - remaining means not covered by other sibling nodes. May undercount due to FMA double counting
  scale: 100%
tma_port_0
  groups: Compute;TopdownL6;tma_L6_group;tma_alu_op_utilization_group;tma_issue2P
  expr: UOPS_DISPATCHED.PORT_0 / tma_info_core_core_clks
  threshold: tma_port_0 > 0.6
  desc: This metric represents Core fraction of cycles CPU dispatched uops on execution port 0 ([SNB+] ALU; [HSW+] ALU and 2nd branch). Sample with: UOPS_DISPATCHED.PORT_0. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2
  scale: 100%

tma_port_1
  groups: TopdownL6;tma_L6_group;tma_alu_op_utilization_group;tma_issue2P
  expr: UOPS_DISPATCHED.PORT_1 / tma_info_core_core_clks
  threshold: tma_port_1 > 0.6
  desc: This metric represents Core fraction of cycles CPU dispatched uops on execution port 1 (ALU). Sample with: UOPS_DISPATCHED.PORT_1. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_port_0, tma_port_5, tma_port_6, tma_ports_utilized_2
  scale: 100%

tma_port_5
  groups: TopdownL6;tma_L6_group;tma_alu_op_utilization_group;tma_issue2P
  expr: UOPS_DISPATCHED.PORT_5 / tma_info_core_core_clks
  threshold: tma_port_5 > 0.6
  desc: This metric represents Core fraction of cycles CPU dispatched uops on execution port 5 ([SNB+] Branches and ALU; [HSW+] ALU). Sample with: UOPS_DISPATCHED.PORT_5. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_port_0, tma_port_1, tma_port_6, tma_ports_utilized_2
  scale: 100%

tma_port_6
  groups: TopdownL6;tma_L6_group;tma_alu_op_utilization_group;tma_issue2P
  expr: UOPS_DISPATCHED.PORT_6 / tma_info_core_core_clks
  threshold: tma_port_6 > 0.6
  desc: This metric represents Core fraction of cycles CPU dispatched uops on execution port 6 ([HSW+] Primary Branch and simple ALU). Sample with: UOPS_DISPATCHED.PORT_6. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_port_0, tma_port_1, tma_port_5, tma_ports_utilized_2
  scale: 100%

tma_ports_utilized_0
  groups: PortsUtil;TopdownL4;tma_L4_group;tma_ports_utilization_group
  expr: cpu@EXE_ACTIVITY.3_PORTS_UTIL\,umask\=0x80@ / tma_info_thread_clks
  threshold: tma_ports_utilized_0 > 0.2 & (tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))
  desc: This metric represents fraction of cycles CPU executed no uops on any execution port (Logical Processor cycles since ICL, Physical Core cycles otherwise). Long-latency instructions like divides may contribute to this metric
  scale: 100%
tma_ports_utilized_2 [100%] {PortsUtil;TopdownL4;tma_L4_group;tma_issue2P;tma_ports_utilization_group}
  expr:      EXE_ACTIVITY.2_PORTS_UTIL / tma_info_thread_clks
  threshold: tma_ports_utilized_2 > 0.15 & (tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))
  desc:      Fraction of cycles the CPU executed a total of 2 uops per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise). Loop vectorization (most compilers feature auto-vectorization options today) reduces pressure on the execution ports, as multiple elements are calculated with the same uop. Sample with: EXE_ACTIVITY.2_PORTS_UTIL. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_port_0, tma_port_1, tma_port_5, tma_port_6

tma_ports_utilized_3m [100%] {BvCB;PortsUtil;TopdownL4;tma_L4_group;tma_ports_utilization_group}
  expr:      UOPS_EXECUTED.CYCLES_GE_3 / tma_info_thread_clks
  threshold: tma_ports_utilized_3m > 0.4 & (tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))
  desc:      Fraction of cycles the CPU executed a total of 3 or more uops per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise). Sample with: UOPS_EXECUTED.CYCLES_GE_3

tma_serializing_operation [100%] {BvIO;PortsUtil;TopdownL3;tma_L3_group;tma_core_bound_group;tma_issueSO}
  expr:      RESOURCE_STALLS.SCOREBOARD / tma_info_thread_clks
  threshold: tma_serializing_operation > 0.1 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2)
  desc:      Fraction of cycles the CPU issue-pipeline was stalled due to serializing operations. Instructions like CPUID, WRMSR or LFENCE serialize the out-of-order execution, which may limit performance. Sample with: RESOURCE_STALLS.SCOREBOARD. Related metrics: tma_ms_switches

tma_slow_pause [100%] {TopdownL4;tma_L4_group;tma_serializing_operation_group}
  expr:      140 * MISC_RETIRED.PAUSE_INST / tma_info_thread_clks
  threshold: tma_slow_pause > 0.05 & (tma_serializing_operation > 0.1 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))
  desc:      Fraction of cycles the CPU was stalled due to PAUSE instructions. Sample with: MISC_RETIRED.PAUSE_INST

tma_split_stores [100%] {TopdownL4;tma_L4_group;tma_issueSpSt;tma_store_bound_group}
  expr:      MEM_INST_RETIRED.SPLIT_STORES / tma_info_core_core_clks
  threshold: tma_split_stores > 0.2 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))
  desc:      Rate of split store accesses. Consider aligning your data to the 64-byte cache-line granularity. Sample with: MEM_INST_RETIRED.SPLIT_STORES_PS. Related metrics: tma_port_4
tma_sq_full [100%] {BvMS;MemoryBW;Offcore;TopdownL4;tma_L4_group;tma_issueBW;tma_l3_bound_group}
  expr:      L1D_PEND_MISS.L2_STALL / tma_info_thread_clks
  threshold: tma_sq_full > 0.3 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))
  desc:      Fraction of cycles where the Super Queue (SQ) was full, taking into account all request types and both hardware SMT threads (Logical Processors). Related metrics: tma_fb_full, tma_info_bottleneck_cache_memory_bandwidth, tma_info_system_dram_bw_use, tma_mem_bandwidth

tma_store_fwd_blk [100%] {TopdownL4;tma_L4_group;tma_l1_bound_group}
  expr:      13 * LD_BLOCKS.STORE_FORWARD / tma_info_thread_clks
  threshold: tma_store_fwd_blk > 0.1 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))
  desc:      Roughly estimates the fraction of cycles when the memory subsystem had loads blocked because they could not forward data from earlier (in program order) overlapping stores. To streamline memory operations in the pipeline, a load can avoid waiting for memory if a prior in-flight store is writing the data the load wants to read (the store-forwarding process). However, in some cases the load may be blocked for a significant time pending the store forward, for example when the prior store writes a smaller region than the load reads.

tma_store_latency [100%] {BvML;MemoryLat;Offcore;TopdownL4;tma_L4_group;tma_issueRFO;tma_issueSL;tma_store_bound_group}
  expr:      (L2_RQSTS.RFO_HIT * 10 * (1 - MEM_INST_RETIRED.LOCK_LOADS / MEM_INST_RETIRED.ALL_STORES) + (1 - MEM_INST_RETIRED.LOCK_LOADS / MEM_INST_RETIRED.ALL_STORES) * min(CPU_CLK_UNHALTED.THREAD, OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_RFO)) / tma_info_thread_clks
  threshold: tma_store_latency > 0.1 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))
  desc:      Estimated fraction of cycles the CPU spent handling L1D store misses. Store accesses usually have less impact on out-of-order core performance; however, holding resources for a longer time can lead to undesired implications (e.g. contention on L1D fill-buffer entries -- see FB_Full). Related metrics: tma_fb_full, tma_lock_latency

tma_unknown_branches [100%] {BigFootprint;BvBC;FetchLat;TopdownL4;tma_L4_group;tma_branch_resteers_group}
  expr:      10 * BACLEARS.ANY / tma_info_thread_clks
  threshold: tma_unknown_branches > 0.05 & (tma_branch_resteers > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15))
  desc:      Fraction of cycles the CPU was stalled due to new branch address clears. These are fetched branches the Branch Prediction Unit was unable to recognize (e.g. the first time the branch is fetched, or on hitting the BTB capacity limit), hence called Unknown Branches. Sample with: BACLEARS.ANY
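Note the min(CPU_CLK_UNHALTED.THREAD, ...) clamp in tma_store_latency: occupancy-style counters such as outstanding-RFO cycles can exceed elapsed cycles when requests overlap, so the formula caps them at the clock count. A sketch with invented counts:

    clks = 1.0e9                    # hypothetical CPU_CLK_UNHALTED.THREAD / thread clks
    rfo_hit = 2.0e6                 # hypothetical L2_RQSTS.RFO_HIT
    lock_loads, all_stores = 1.0e4, 5.0e7   # hypothetical MEM_INST_RETIRED.*
    rfo_outstanding_cycles = 1.3e9  # may exceed clks when many RFOs overlap

    nonlock = 1 - lock_loads / all_stores
    tma_store_latency = (rfo_hit * 10 * nonlock
                         + nonlock * min(clks, rfo_outstanding_cycles)) / clks
    print(f"tma_store_latency = {tma_store_latency:.2f}")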
io_bandwidth_write [1MB/s]
  expr: (UNC_CHA_TOR_INSERTS.IO_HIT_ITOM + UNC_CHA_TOR_INSERTS.IO_MISS_ITOM + UNC_CHA_TOR_INSERTS.IO_HIT_ITOMCACHENEAR + UNC_CHA_TOR_INSERTS.IO_MISS_ITOMCACHENEAR) * 64 / 1e6 / duration_time
  desc: Bandwidth of IO writes that are initiated by end device controllers that are writing memory to the CPU.

tma_contested_accesses [100%] {BvMS;DataSharing;Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_l3_bound_group}
  expr:      (44 * tma_info_system_core_frequency * (MEM_LOAD_L3_HIT_RETIRED.XSNP_HITM * (OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HITM / (OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HITM + OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HIT_WITH_FWD))) + 43.5 * tma_info_system_core_frequency * MEM_LOAD_L3_HIT_RETIRED.XSNP_MISS) * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS / 2) / tma_info_thread_clks
  threshold: tma_contested_accesses > 0.05 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))
  desc:      Estimated fraction of cycles while the memory subsystem was handling synchronizations due to contested accesses. Contested accesses occur when data written by one Logical Processor are read by another Logical Processor on a different Physical Core. Examples include synchronizations such as locks, true data sharing such as modified locked variables, and false sharing. Sample with: MEM_LOAD_L3_HIT_RETIRED.XSNP_HITM_PS;MEM_LOAD_L3_HIT_RETIRED.XSNP_MISS_PS. Related metrics: tma_data_sharing, tma_false_sharing, tma_machine_clears, tma_remote_cache

tma_data_sharing [100%] {BvMS;Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_l3_bound_group}
  expr:      43.5 * tma_info_system_core_frequency * (MEM_LOAD_L3_HIT_RETIRED.XSNP_HIT + MEM_LOAD_L3_HIT_RETIRED.XSNP_HITM * (1 - OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HITM / (OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HITM + OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HIT_WITH_FWD))) * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS / 2) / tma_info_thread_clks
  threshold: tma_data_sharing > 0.05 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))
  desc:      Estimated fraction of cycles while the memory subsystem was handling synchronizations due to data-sharing accesses. Data shared by multiple Logical Processors (even just read shared) may cause increased access latency due to cache coherency. Excessive data sharing can drastically harm multithreaded performance. Sample with: MEM_LOAD_L3_HIT_RETIRED.XSNP_HIT_PS. Related metrics: tma_contested_accesses, tma_false_sharing, tma_machine_clears, tma_remote_cache

tma_false_sharing [100%] {BvMS;DataSharing;Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_store_bound_group}
  expr:      48 * tma_info_system_core_frequency * OCR.DEMAND_RFO.L3_HIT.SNOOP_HITM / tma_info_thread_clks
  threshold: tma_false_sharing > 0.05 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))
  desc:      Roughly estimates how often the CPU was handling synchronizations due to False Sharing. False Sharing is a multithreading hiccup where multiple Logical Processors contend on different data elements mapped into the same cache line. Sample with: OCR.DEMAND_RFO.L3_HIT.SNOOP_HITM. Related metrics: tma_contested_accesses, tma_data_sharing, tma_machine_clears, tma_remote_cache
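io_bandwidth_write is just cache lines times 64 bytes over wall time. A sketch with invented uncore counts over an assumed 1-second window, also showing the GB/s variant used by tma_info_system_io_write_bw below:

    inserts = {  # hypothetical UNC_CHA_TOR_INSERTS.* counts
        "IO_HIT_ITOM": 1.2e7, "IO_MISS_ITOM": 3.4e7,
        "IO_HIT_ITOMCACHENEAR": 5.0e5, "IO_MISS_ITOMCACHENEAR": 1.1e6,
    }
    duration_time = 1.0  # seconds
    total_lines = sum(inserts.values())            # each insert is one 64 B line
    mb_per_s = total_lines * 64 / 1e6 / duration_time   # unit: 1MB/s
    gb_per_s = total_lines * 64 / 1e9 / duration_time   # tma_info_system_io_write_bw
    print(f"{mb_per_s:.0f} MB/s = {gb_per_s:.2f} GB/s")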
tma_info_bottleneck_cache_memory_latency {BvML;Mem;MemoryLat;Offcore;tma_issueLat}
  expr:      100 * (tma_memory_bound * (tma_dram_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound)) * (tma_mem_latency / (tma_mem_bandwidth + tma_mem_latency)) + tma_memory_bound * (tma_l3_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound)) * (tma_l3_hit_latency / (tma_contested_accesses + tma_data_sharing + tma_l3_hit_latency + tma_sq_full)) + tma_memory_bound * tma_l2_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound) + tma_memory_bound * (tma_store_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound)) * (tma_store_latency / (tma_dtlb_store + tma_false_sharing + tma_split_stores + tma_store_latency + tma_streaming_stores)) + tma_memory_bound * (tma_l1_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound)) * (tma_l1_hit_latency / (tma_4k_aliasing + tma_dtlb_load + tma_fb_full + tma_l1_hit_latency + tma_lock_latency + tma_split_loads + tma_store_fwd_blk)))
  threshold: tma_info_bottleneck_cache_memory_latency > 20
  desc:      Total pipeline cost of external Memory- or Cache-Latency related bottlenecks. Related metrics: tma_l3_hit_latency, tma_mem_latency

tma_info_bottleneck_memory_data_tlbs {BvMT;Mem;MemoryTLB;Offcore;tma_issueTLB}
  expr:      100 * (tma_memory_bound * (tma_l1_bound / max(tma_memory_bound, tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound)) * (tma_dtlb_load / max(tma_l1_bound, tma_4k_aliasing + tma_dtlb_load + tma_fb_full + tma_l1_hit_latency + tma_lock_latency + tma_split_loads + tma_store_fwd_blk)) + tma_memory_bound * (tma_store_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound)) * (tma_dtlb_store / (tma_dtlb_store + tma_false_sharing + tma_split_stores + tma_store_latency + tma_streaming_stores)))
  threshold: tma_info_bottleneck_memory_data_tlbs > 20
  desc:      Total pipeline cost of Memory Address Translation related bottlenecks (data-side TLBs). Related metrics: tma_dtlb_load, tma_dtlb_store, tma_info_bottleneck_memory_synchronization
tma_info_bottleneck_memory_synchronization {BvMS;Mem;Offcore;tma_issueTLB}
  expr:      100 * (tma_memory_bound * (tma_dram_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound) * (tma_mem_latency / (tma_mem_bandwidth + tma_mem_latency)) * tma_remote_cache / (tma_local_mem + tma_remote_cache + tma_remote_mem) + tma_l3_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound) * (tma_contested_accesses + tma_data_sharing) / (tma_contested_accesses + tma_data_sharing + tma_l3_hit_latency + tma_sq_full) + tma_store_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound) * tma_false_sharing / (tma_dtlb_store + tma_false_sharing + tma_split_stores + tma_store_latency + tma_streaming_stores - tma_store_latency)) + tma_machine_clears * (1 - tma_other_nukes / tma_other_nukes))
  threshold: tma_info_bottleneck_memory_synchronization > 10
  desc:      Total pipeline cost of Memory Synchronization related bottlenecks (data transfers and coherency updates across processors). Related metrics: tma_dtlb_load, tma_dtlb_store, tma_info_bottleneck_memory_data_tlbs

tma_info_system_io_write_bw {IoBW;MemOffcore;Server;SoC}
  expr: (UNC_CHA_TOR_INSERTS.IO_HIT_ITOM + UNC_CHA_TOR_INSERTS.IO_MISS_ITOM + UNC_CHA_TOR_INSERTS.IO_HIT_ITOMCACHENEAR + UNC_CHA_TOR_INSERTS.IO_MISS_ITOMCACHENEAR) * 64 / 1e9 / duration_time
  desc: Average IO (network or disk) Bandwidth Use for Writes [GB / sec]. Bandwidth of IO writes that are initiated by end device controllers that are writing memory to the CPU.

tma_info_system_mem_dram_read_latency {MemOffcore;MemoryLat;Server;SoC}
  expr: 1e9 * (UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_DDR / UNC_CHA_TOR_INSERTS.IA_MISS_DRD_DDR) / cha_0@event\=0x0@
  desc: Average latency of a data read request to external DRAM memory [in nanoseconds]. Accounts for demand loads and L1/L2 data-read prefetches.

tma_info_system_mem_pmm_read_latency {MemOffcore;MemoryLat;Server;SoC}
  expr: (1e9 * (UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_PMM / UNC_CHA_TOR_INSERTS.IA_MISS_DRD_PMM) / cha_0@event\=0x0@ if #has_pmem > 0 else 0)
  desc: Average latency of a data read request to external 3D X-Point memory [in nanoseconds]. Accounts for demand loads and L1/L2 data-read prefetches.

tma_l3_hit_latency [100%] {BvML;MemoryLat;TopdownL4;tma_L4_group;tma_issueLat;tma_l3_bound_group}
  expr:      19 * tma_info_system_core_frequency * (MEM_LOAD_RETIRED.L3_HIT * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS / 2)) / tma_info_thread_clks
  threshold: tma_l3_hit_latency > 0.1 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))
  desc:      Estimated fraction of cycles with demand load accesses that hit the L3 cache under unloaded scenarios (possibly L3 latency limited). Avoiding private cache misses (i.e. L2 misses/L3 hits) will improve the latency, reduce contention with sibling physical cores and increase performance. Note the value of this node may overlap with its siblings. Sample with: MEM_LOAD_RETIRED.L3_HIT_PS. Related metrics: tma_info_bottleneck_cache_memory_latency, tma_mem_latency
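tma_info_system_mem_dram_read_latency is a Little's-law computation over the CHA TOR: occupancy divided by inserts gives the mean residency of a request in uncore clocks, and dividing by the uncore clock count (cha_0@event\=0x0@, assuming here a 1-second sample so that the count equals the clock frequency) converts that to seconds, scaled by 1e9 to nanoseconds. Sketch with invented numbers:

    occupancy = 8.0e8       # hypothetical UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_DDR
    inserts = 4.0e6         # hypothetical UNC_CHA_TOR_INSERTS.IA_MISS_DRD_DDR
    cha_clockticks = 2.0e9  # hypothetical cha_0@event=0x0@ over an assumed 1 s sample

    # occupancy/inserts = mean residency in uncore clocks (here 200);
    # dividing by clocks-per-second converts to seconds, x1e9 -> ns.
    latency_ns = 1e9 * (occupancy / inserts) / cha_clockticks
    print(f"DRAM read latency ~= {latency_ns:.0f} ns")   # ~100 ns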
tma_local_mem [100%] {Server;TopdownL5;tma_L5_group;tma_mem_latency_group}
  expr:      43.5 * tma_info_system_core_frequency * MEM_LOAD_L3_MISS_RETIRED.LOCAL_DRAM * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS / 2) / tma_info_thread_clks
  threshold: tma_local_mem > 0.1 & (tma_mem_latency > 0.1 & (tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)))
  desc:      Estimated fraction of cycles while the memory subsystem was handling loads from local memory. Caching will improve the latency and increase performance. Sample with: MEM_LOAD_L3_MISS_RETIRED.LOCAL_DRAM

tma_pmm_bound [100%] {MemoryBound;Server;TmaL3mem;TopdownL3;tma_L3_group;tma_memory_bound_group}
  expr:      (((1 - (19 * (MEM_LOAD_L3_MISS_RETIRED.REMOTE_DRAM * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS)) + 10 * (MEM_LOAD_L3_MISS_RETIRED.LOCAL_DRAM * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) + MEM_LOAD_L3_MISS_RETIRED.REMOTE_FWD * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) + MEM_LOAD_L3_MISS_RETIRED.REMOTE_HITM * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS))) / (19 * (MEM_LOAD_L3_MISS_RETIRED.REMOTE_DRAM * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS)) + 10 * (MEM_LOAD_L3_MISS_RETIRED.LOCAL_DRAM * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) + MEM_LOAD_L3_MISS_RETIRED.REMOTE_FWD * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) + MEM_LOAD_L3_MISS_RETIRED.REMOTE_HITM * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS)) + (25 * (MEM_LOAD_RETIRED.LOCAL_PMM * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS)) + 33 * (MEM_LOAD_L3_MISS_RETIRED.REMOTE_PMM * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS))))) * (CYCLE_ACTIVITY.STALLS_L3_MISS / tma_info_thread_clks + (CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS) / tma_info_thread_clks - tma_l2_bound) if 1e6 * (MEM_LOAD_L3_MISS_RETIRED.REMOTE_PMM + MEM_LOAD_RETIRED.LOCAL_PMM) > MEM_LOAD_RETIRED.L1_MISS else 0) if #has_pmem > 0 else 0)
  threshold: tma_pmm_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)
  desc:      Roughly estimates (based on idle latencies) how often the CPU was stalled on accesses to external 3D-Xpoint (Crystal Ridge, a.k.a. IXP) memory by loads. PMM stands for Persistent Memory Module.

tma_remote_cache [100%] {Offcore;Server;Snoop;TopdownL5;tma_L5_group;tma_issueSyncxn;tma_mem_latency_group}
  expr:      (97 * tma_info_system_core_frequency * MEM_LOAD_L3_MISS_RETIRED.REMOTE_HITM + 97 * tma_info_system_core_frequency * MEM_LOAD_L3_MISS_RETIRED.REMOTE_FWD) * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS / 2) / tma_info_thread_clks
  threshold: tma_remote_cache > 0.05 & (tma_mem_latency > 0.1 & (tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)))
  desc:      Estimated fraction of cycles while the memory subsystem was handling loads from a remote cache in other sockets, including synchronization issues. This is often caused by non-optimal NUMA allocations. #link to NUMA article. Sample with: MEM_LOAD_L3_MISS_RETIRED.REMOTE_HITM_PS;MEM_LOAD_L3_MISS_RETIRED.REMOTE_FWD_PS. Related metrics: tma_contested_accesses, tma_data_sharing, tma_false_sharing, tma_machine_clears
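tma_pmm_bound only yields a non-zero value behind two guards: the table's #has_pmem literal, and a sanity check that PMM-destined retired loads are non-negligible relative to L1 misses (the 1e6 scaling). A minimal sketch of that guard structure, with the big latency-weighted term stubbed out as an assumed value:

    has_pmem = True        # assumed: the #has_pmem literal from the event table
    pmm_loads = 3.0e5      # hypothetical LOCAL_PMM + REMOTE_PMM retired loads
    l1_miss = 2.0e9        # hypothetical MEM_LOAD_RETIRED.L1_MISS
    stall_fraction = 0.18  # assumed value of the latency-weighted stall term

    # Structure of the expression: guarded twice, otherwise forced to zero.
    tma_pmm_bound = (stall_fraction
                     if 1e6 * pmm_loads > l1_miss else 0) if has_pmem else 0
    print(f"tma_pmm_bound = {tma_pmm_bound}")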
tma_remote_mem [100%] {Server;Snoop;TopdownL5;tma_L5_group;tma_mem_latency_group}
  expr:      108 * tma_info_system_core_frequency * MEM_LOAD_L3_MISS_RETIRED.REMOTE_DRAM * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS / 2) / tma_info_thread_clks
  threshold: tma_remote_mem > 0.1 & (tma_mem_latency > 0.1 & (tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)))
  desc:      Estimated fraction of cycles while the memory subsystem was handling loads from remote memory. This is often caused by non-optimal NUMA allocations. #link to NUMA article. Sample with: MEM_LOAD_L3_MISS_RETIRED.REMOTE_DRAM_PS

tma_slow_pause [100%] {TopdownL4;tma_L4_group;tma_serializing_operation_group}
  expr:      37 * MISC_RETIRED.PAUSE_INST / tma_info_thread_clks
  threshold: tma_slow_pause > 0.05 & (tma_serializing_operation > 0.1 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))
  desc:      Fraction of cycles the CPU was stalled due to PAUSE instructions. Sample with: MISC_RETIRED.PAUSE_INST

tma_alu_op_utilization [100%] {TopdownL5;tma_L5_group;tma_ports_utilized_3m_group}
  expr:      (UOPS_DISPATCHED_PORT.PORT_0 + UOPS_DISPATCHED_PORT.PORT_1 + UOPS_DISPATCHED_PORT.PORT_5) / (3 * tma_info_core_core_clks)
  threshold: tma_alu_op_utilization > 0.4
  desc:      Core fraction of cycles the CPU dispatched uops on execution ports for ALU operations.

tma_contested_accesses [100%] {BvMS;DataSharing;Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_l3_bound_group}
  expr:      (60 * (MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_HITM * (1 + MEM_LOAD_UOPS_RETIRED.HIT_LFB / (MEM_LOAD_UOPS_RETIRED.L2_HIT + MEM_LOAD_UOPS_RETIRED.LLC_HIT + MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_HIT + MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_HITM + MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_MISS + MEM_LOAD_UOPS_RETIRED.LLC_MISS))) + 43 * (MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_MISS * (1 + MEM_LOAD_UOPS_RETIRED.HIT_LFB / (MEM_LOAD_UOPS_RETIRED.L2_HIT + MEM_LOAD_UOPS_RETIRED.LLC_HIT + MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_HIT + MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_HITM + MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_MISS + MEM_LOAD_UOPS_RETIRED.LLC_MISS)))) / tma_info_thread_clks
  threshold: tma_contested_accesses > 0.05 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))
  desc:      Estimated fraction of cycles while the memory subsystem was handling synchronizations due to contested accesses. Contested accesses occur when data written by one Logical Processor are read by another Logical Processor on a different Physical Core. Examples include synchronizations such as locks, true data sharing such as modified locked variables, and false sharing. Sample with: MEM_LOAD_L3_HIT_RETIRED.XSNP_HITM_PS;MEM_LOAD_L3_HIT_RETIRED.XSNP_MISS_PS. Related metrics: tma_data_sharing, tma_false_sharing, tma_machine_clears, tma_remote_cache
tma_data_sharing [100%] {BvMS;Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_l3_bound_group}
  expr:      43 * (MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_HIT * (1 + MEM_LOAD_UOPS_RETIRED.HIT_LFB / (MEM_LOAD_UOPS_RETIRED.L2_HIT + MEM_LOAD_UOPS_RETIRED.LLC_HIT + MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_HIT + MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_HITM + MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_MISS + MEM_LOAD_UOPS_RETIRED.LLC_MISS))) / tma_info_thread_clks
  threshold: tma_data_sharing > 0.05 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))
  desc:      Estimated fraction of cycles while the memory subsystem was handling synchronizations due to data-sharing accesses. Data shared by multiple Logical Processors (even just read shared) may cause increased access latency due to cache coherency. Excessive data sharing can drastically harm multithreaded performance. Sample with: MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_HIT_PS. Related metrics: tma_contested_accesses, tma_false_sharing, tma_machine_clears, tma_remote_cache

tma_dram_bound [100%] {MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_memory_bound_group}
  expr:      (1 - MEM_LOAD_UOPS_RETIRED.LLC_HIT / (MEM_LOAD_UOPS_RETIRED.LLC_HIT + 7 * MEM_LOAD_UOPS_RETIRED.LLC_MISS)) * CYCLE_ACTIVITY.STALLS_L2_PENDING / tma_info_thread_clks
  threshold: tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)
  desc:      Estimates how often the CPU was stalled on accesses to external memory (DRAM) by loads. Better caching can improve the latency and increase performance. Sample with: MEM_LOAD_UOPS_RETIRED.L3_MISS_PS

tma_dtlb_load [100%] {BvMT;MemoryTLB;TopdownL4;tma_L4_group;tma_issueTLB;tma_l1_bound_group}
  expr:      (7 * DTLB_LOAD_MISSES.STLB_HIT + DTLB_LOAD_MISSES.WALK_DURATION) / tma_info_thread_clks
  threshold: tma_dtlb_load > 0.1 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))
  desc:      Roughly estimates the fraction of cycles where the Data TLB (DTLB) was missed by load accesses. TLBs (Translation Look-aside Buffers) are processor caches for recently used entries out of the Page Tables that the operating system uses to map virtual to physical addresses. This metric approximates the potential delay of demand loads missing the first-level data TLB (assuming a worst-case scenario with back-to-back misses to different pages), including hitting in the second-level TLB (STLB) as well as performing a hardware page walk on an STLB miss. Sample with: MEM_UOPS_RETIRED.STLB_MISS_LOADS_PS. Related metrics: tma_dtlb_store

tma_dtlb_store [100%] {BvMT;MemoryTLB;TopdownL4;tma_L4_group;tma_issueTLB;tma_store_bound_group}
  expr:      (7 * DTLB_STORE_MISSES.STLB_HIT + DTLB_STORE_MISSES.WALK_DURATION) / tma_info_thread_clks
  threshold: tma_dtlb_store > 0.05 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))
  desc:      Roughly estimates the fraction of cycles spent handling first-level data TLB store misses. As with ordinary data caching, focus on improving data locality and reducing the working-set size to reduce DTLB overhead.
             Additionally, consider using profile-guided optimization (PGO) to collocate frequently-used data on the same page, and try using larger page sizes for large amounts of frequently-used data. Sample with: MEM_UOPS_RETIRED.STLB_MISS_STORES_PS. Related metrics: tma_dtlb_load

tma_false_sharing [100%] {BvMS;DataSharing;Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_store_bound_group}
  expr:      60 * OFFCORE_RESPONSE.DEMAND_RFO.LLC_HIT.HITM_OTHER_CORE / tma_info_thread_clks
  threshold: tma_false_sharing > 0.05 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))
  desc:      Roughly estimates how often the CPU was handling synchronizations due to False Sharing. False Sharing is a multithreading hiccup where multiple Logical Processors contend on different data elements mapped into the same cache line. Sample with: MEM_LOAD_L3_HIT_RETIRED.XSNP_HITM_PS;OFFCORE_RESPONSE.DEMAND_RFO.L3_HIT.SNOOP_HITM. Related metrics: tma_contested_accesses, tma_data_sharing, tma_machine_clears, tma_remote_cache

tma_fp_scalar [100%] {Compute;Flops;TopdownL4;tma_L4_group;tma_fp_arith_group;tma_issue2P}
  expr:      (FP_COMP_OPS_EXE.SSE_SCALAR_SINGLE + FP_COMP_OPS_EXE.SSE_SCALAR_DOUBLE) / UOPS_EXECUTED.THREAD
  threshold: tma_fp_scalar > 0.1 & (tma_fp_arith > 0.2 & tma_light_operations > 0.6)
  desc:      Approximates the arithmetic floating-point (FP) scalar uops fraction the CPU has retired. May overcount due to FMA double counting. Related metrics: tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_port_0, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2

tma_fp_vector [100%] {Compute;Flops;TopdownL4;tma_L4_group;tma_fp_arith_group;tma_issue2P}
  expr:      (FP_COMP_OPS_EXE.SSE_PACKED_DOUBLE + FP_COMP_OPS_EXE.SSE_PACKED_SINGLE + SIMD_FP_256.PACKED_SINGLE + SIMD_FP_256.PACKED_DOUBLE) / UOPS_EXECUTED.THREAD
  threshold: tma_fp_vector > 0.1 & (tma_fp_arith > 0.2 & tma_light_operations > 0.6)
  desc:      Approximates the arithmetic floating-point (FP) vector uops fraction the CPU has retired, aggregated across all vector widths. May overcount due to FMA double counting. Related metrics: tma_fp_scalar, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_port_0, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2

tma_fp_vector_128b [100%] {Compute;Flops;TopdownL5;tma_L5_group;tma_fp_vector_group;tma_issue2P}
  expr:      (FP_COMP_OPS_EXE.SSE_SCALAR_DOUBLE + FP_COMP_OPS_EXE.SSE_PACKED_DOUBLE) / UOPS_EXECUTED.THREAD
  threshold: tma_fp_vector_128b > 0.1 & (tma_fp_vector > 0.1 & (tma_fp_arith > 0.2 & tma_light_operations > 0.6))
  desc:      Approximates the arithmetic FP vector uops fraction the CPU has retired for 128-bit wide vectors. May overcount due to FMA double counting. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_256b, tma_fp_vector_512b, tma_port_0, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2
tma_fp_vector_256b [100%] {Compute;Flops;TopdownL5;tma_L5_group;tma_fp_vector_group;tma_issue2P}
  expr:      (SIMD_FP_256.PACKED_DOUBLE + SIMD_FP_256.PACKED_SINGLE) / UOPS_EXECUTED.THREAD
  threshold: tma_fp_vector_256b > 0.1 & (tma_fp_vector > 0.1 & (tma_fp_arith > 0.2 & tma_light_operations > 0.6))
  desc:      Approximates the arithmetic FP vector uops fraction the CPU has retired for 256-bit wide vectors. May overcount due to FMA double counting. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_512b, tma_port_0, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2

tma_icache_misses [100%] {BigFootprint;BvBC;FetchLat;IcMiss;TopdownL3;tma_L3_group;tma_fetch_latency_group}
  expr:      ICACHE.IFETCH_STALL / tma_info_thread_clks - tma_itlb_misses
  threshold: tma_icache_misses > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15)
  desc:      Fraction of cycles the CPU was stalled due to instruction cache misses.

tma_info_core_flopc {Flops;Ret}
  expr: (FP_COMP_OPS_EXE.SSE_SCALAR_SINGLE + FP_COMP_OPS_EXE.SSE_SCALAR_DOUBLE + 2 * FP_COMP_OPS_EXE.SSE_PACKED_DOUBLE + 4 * (FP_COMP_OPS_EXE.SSE_PACKED_SINGLE + SIMD_FP_256.PACKED_DOUBLE) + 8 * SIMD_FP_256.PACKED_SINGLE) / tma_info_core_core_clks
  desc: Floating Point Operations Per Cycle.

tma_info_inst_mix_iparith {Flops;InsType}
  expr:      1 / (tma_fp_scalar + tma_fp_vector)
  threshold: tma_info_inst_mix_iparith < 10
  desc:      Instructions per FP Arithmetic instruction (a lower number means a higher occurrence rate). Values < 1 are possible due to intentional FMA double counting. Approximated prior to BDW.

tma_info_memory_l3mpki {Mem}
  expr: 1e3 * MEM_LOAD_UOPS_RETIRED.LLC_MISS / INST_RETIRED.ANY
  desc: L3 cache true misses per kilo instruction for retired demand loads.

tma_info_system_gflops {Cor;Flops;HPC}
  expr: (FP_COMP_OPS_EXE.SSE_SCALAR_SINGLE + FP_COMP_OPS_EXE.SSE_SCALAR_DOUBLE + 2 * FP_COMP_OPS_EXE.SSE_PACKED_DOUBLE + 4 * (FP_COMP_OPS_EXE.SSE_PACKED_SINGLE + SIMD_FP_256.PACKED_DOUBLE) + 8 * SIMD_FP_256.PACKED_SINGLE) / 1e9 / duration_time
  desc: Giga Floating Point Operations Per Second. Aggregates across all supported options of: FP precisions, scalar and vector instructions, vector width.

tma_itlb_misses [100%] {BigFootprint;BvBC;FetchLat;MemoryTLB;TopdownL3;tma_L3_group;tma_fetch_latency_group}
  expr:      (12 * ITLB_MISSES.STLB_HIT + ITLB_MISSES.WALK_DURATION) / tma_info_thread_clks
  threshold: tma_itlb_misses > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15)
  desc:      Fraction of cycles the CPU was stalled due to Instruction TLB (ITLB) misses. Sample with: ITLB_MISSES.WALK_COMPLETED

tma_l3_bound [100%] {CacheHits;MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_memory_bound_group}
  expr:      MEM_LOAD_UOPS_RETIRED.LLC_HIT / (MEM_LOAD_UOPS_RETIRED.LLC_HIT + 7 * MEM_LOAD_UOPS_RETIRED.LLC_MISS) * CYCLE_ACTIVITY.STALLS_L2_PENDING / tma_info_thread_clks
  threshold: tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)
  desc:      Estimates how often the CPU was stalled due to load accesses to the L3 cache, or contended with a sibling Core. Avoiding cache misses (i.e. L2 misses/L3 hits) can improve the latency and increase performance. Sample with: MEM_LOAD_UOPS_RETIRED.L3_HIT_PS
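tma_info_core_flopc and tma_info_system_gflops weight each FP event by the operations one instruction performs: scalar counts 1, a 128-bit packed double 2, a 128-bit packed single or 256-bit packed double 4, and a 256-bit packed single 8. Sketch of FLOP/cycle and GFLOPS under invented counts:

    ev = {  # hypothetical FP event counts over an assumed 1 s window
        "SSE_SCALAR_SINGLE": 1.0e8, "SSE_SCALAR_DOUBLE": 2.0e8,
        "SSE_PACKED_DOUBLE": 3.0e8, "SSE_PACKED_SINGLE": 1.5e8,
        "SIMD_FP_256.PACKED_DOUBLE": 2.5e8, "SIMD_FP_256.PACKED_SINGLE": 4.0e8,
    }
    flops = (ev["SSE_SCALAR_SINGLE"] + ev["SSE_SCALAR_DOUBLE"]
             + 2 * ev["SSE_PACKED_DOUBLE"]
             + 4 * (ev["SSE_PACKED_SINGLE"] + ev["SIMD_FP_256.PACKED_DOUBLE"])
             + 8 * ev["SIMD_FP_256.PACKED_SINGLE"])
    core_clks, duration_time = 2.0e9, 1.0   # hypothetical
    print(f"flopc  = {flops / core_clks:.2f} FLOP/cycle")
    print(f"gflops = {flops / 1e9 / duration_time:.2f}")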
tma_l3_hit_latency [100%] {BvML;MemoryLat;TopdownL4;tma_L4_group;tma_issueLat;tma_l3_bound_group}
  expr:      29 * (MEM_LOAD_UOPS_RETIRED.LLC_HIT * (1 + MEM_LOAD_UOPS_RETIRED.HIT_LFB / (MEM_LOAD_UOPS_RETIRED.L2_HIT + MEM_LOAD_UOPS_RETIRED.LLC_HIT + MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_HIT + MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_HITM + MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_MISS + MEM_LOAD_UOPS_RETIRED.LLC_MISS))) / tma_info_thread_clks
  threshold: tma_l3_hit_latency > 0.1 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))
  desc:      Estimated fraction of cycles with demand load accesses that hit the L3 cache under unloaded scenarios (possibly L3 latency limited). Avoiding private cache misses (i.e. L2 misses/L3 hits) will improve the latency, reduce contention with sibling physical cores and increase performance. Note the value of this node may overlap with its siblings. Sample with: MEM_LOAD_UOPS_RETIRED.L3_HIT_PS. Related metrics: tma_mem_latency

tma_load_op_utilization [100%] {TopdownL5;tma_L5_group;tma_ports_utilized_3m_group}
  expr:      (UOPS_DISPATCHED_PORT.PORT_2 + UOPS_DISPATCHED_PORT.PORT_3 - UOPS_DISPATCHED_PORT.PORT_4) / (2 * tma_info_core_core_clks)
  threshold: tma_load_op_utilization > 0.6
  desc:      Core fraction of cycles the CPU dispatched uops on execution ports for Load operations. Sample with: UOPS_DISPATCHED.PORT_2_3

tma_memory_bound [100%] {Backend;TmaL2;TopdownL2;tma_L2_group;tma_backend_bound_group} (default group: TopdownL2)
  expr:      (min(CPU_CLK_UNHALTED.THREAD, CYCLE_ACTIVITY.STALLS_LDM_PENDING) + RESOURCE_STALLS.SB) / (min(CPU_CLK_UNHALTED.THREAD, CYCLE_ACTIVITY.CYCLES_NO_EXECUTE) + UOPS_EXECUTED.CYCLES_GE_1_UOP_EXEC - (UOPS_EXECUTED.CYCLES_GE_3_UOPS_EXEC if tma_info_thread_ipc > 1.8 else UOPS_EXECUTED.CYCLES_GE_2_UOPS_EXEC) - (RS_EVENTS.EMPTY_CYCLES if tma_fetch_latency > 0.1 else 0) + RESOURCE_STALLS.SB) * tma_backend_bound
  threshold: tma_memory_bound > 0.2 & tma_backend_bound > 0.2
  desc:      Fraction of slots where the Memory subsystem within the Backend was a bottleneck. Memory Bound estimates the fraction of slots where the pipeline is likely stalled due to demand load or store instructions. This accounts mainly for (1) non-completed in-flight memory demand loads, which coincide with execution-unit starvation, and (2) cases where stores impose backpressure on the pipeline when many of them get buffered at the same time (the less common of the two).

tma_ms_switches [100%] {FetchLat;MicroSeq;TopdownL3;tma_L3_group;tma_fetch_latency_group;tma_issueMC;tma_issueMS;tma_issueMV;tma_issueSO}
  expr:      3 * IDQ.MS_SWITCHES / tma_info_thread_clks
  threshold: tma_ms_switches > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15)
  desc:      Estimated fraction of cycles when the CPU was stalled due to switches of uop delivery to the Microcode Sequencer (MS). Commonly used instructions are optimized for delivery by the DSB (decoded i-cache) or MITE (legacy instruction decode) pipelines. Certain operations cannot be handled natively by the execution pipeline and must be performed by microcode (small programs injected into the execution stream). Switching to the MS too often can negatively impact performance. The MS is designated to deliver long uop flows required by CISC instructions like CPUID, or uncommon conditions like Floating Point Assists when dealing with Denormals. Sample with: IDQ.MS_SWITCHES. Related metrics: tma_clears_resteers, tma_l1_bound, tma_machine_clears, tma_microcode_sequencer, tma_mixing_vectors, tma_serializing_operation
tma_ports_utilization [100%] {PortsUtil;TopdownL3;tma_L3_group;tma_core_bound_group}
  expr:      (min(CPU_CLK_UNHALTED.THREAD, CYCLE_ACTIVITY.CYCLES_NO_EXECUTE) + UOPS_EXECUTED.CYCLES_GE_1_UOP_EXEC - (UOPS_EXECUTED.CYCLES_GE_3_UOPS_EXEC if tma_info_thread_ipc > 1.8 else UOPS_EXECUTED.CYCLES_GE_2_UOPS_EXEC) - (RS_EVENTS.EMPTY_CYCLES if tma_fetch_latency > 0.1 else 0) + RESOURCE_STALLS.SB - RESOURCE_STALLS.SB - min(CPU_CLK_UNHALTED.THREAD, CYCLE_ACTIVITY.STALLS_LDM_PENDING)) / tma_info_thread_clks
  threshold: tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2)
  desc:      Estimated fraction of cycles the CPU performance was potentially limited due to Core computation issues (non divider-related). Two distinct categories can be attributed to this metric: (1) heavy data dependency among contiguous instructions, cases often referred to as low Instruction Level Parallelism (ILP); (2) contention on some hardware execution unit other than the Divider, for example when there are too many multiply operations.

tma_split_loads [100%] {TopdownL4;tma_L4_group;tma_l1_bound_group}
  expr:      13 * LD_BLOCKS.NO_SR / tma_info_thread_clks
  threshold: tma_split_loads > 0.2 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))
  desc:      Estimated fraction of cycles handling memory-load split accesses -- loads that cross a 64-byte cache-line boundary. Sample with: MEM_UOPS_RETIRED.SPLIT_LOADS_PS

tma_x87_use [100%] {Compute;TopdownL4;tma_L4_group;tma_fp_arith_group}
  expr:      UOPS_RETIRED.RETIRE_SLOTS * FP_COMP_OPS_EXE.X87 / UOPS_EXECUTED.THREAD
  threshold: tma_x87_use > 0.1 & (tma_fp_arith > 0.2 & tma_light_operations > 0.6)
  desc:      Serves as an approximation of legacy x87 usage. It accounts for instructions beyond X87 FP arithmetic operations, hence may be used as a thermometer to avoid high X87 usage and preferably upgrade to modern ISA. See Tip under Tuning Hint.
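tma_split_loads charges 13 cycles per load that crosses a 64-byte line (LD_BLOCKS.NO_SR). Whether an access splits a line is pure address arithmetic; a tiny illustrative helper:

    def splits_cache_line(addr: int, size: int, line: int = 64) -> bool:
        """True when the access [addr, addr+size) crosses a cache-line boundary."""
        return (addr % line) + size > line

    # An 8-byte load at offset 60 within a line straddles into the next line:
    print(splits_cache_line(0x1003C, 8))   # True  (0x3C = offset 60)
    print(splits_cache_line(0x10038, 8))   # False (offset 56, fits)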
tma_contested_accesses [100%] {BvMS;DataSharing;Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_l3_bound_group}
  expr:      (60 * (MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_HITM * (1 + MEM_LOAD_UOPS_RETIRED.HIT_LFB / (MEM_LOAD_UOPS_RETIRED.L2_HIT + MEM_LOAD_UOPS_RETIRED.LLC_HIT + MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_HIT + MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_HITM + MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_MISS + MEM_LOAD_UOPS_LLC_MISS_RETIRED.LOCAL_DRAM + MEM_LOAD_UOPS_LLC_MISS_RETIRED.REMOTE_DRAM + MEM_LOAD_UOPS_LLC_MISS_RETIRED.REMOTE_HITM + MEM_LOAD_UOPS_LLC_MISS_RETIRED.REMOTE_FWD))) + 43 * (MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_MISS * (1 + MEM_LOAD_UOPS_RETIRED.HIT_LFB / (MEM_LOAD_UOPS_RETIRED.L2_HIT + MEM_LOAD_UOPS_RETIRED.LLC_HIT + MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_HIT + MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_HITM + MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_MISS + MEM_LOAD_UOPS_LLC_MISS_RETIRED.LOCAL_DRAM + MEM_LOAD_UOPS_LLC_MISS_RETIRED.REMOTE_DRAM + MEM_LOAD_UOPS_LLC_MISS_RETIRED.REMOTE_HITM + MEM_LOAD_UOPS_LLC_MISS_RETIRED.REMOTE_FWD)))) / tma_info_thread_clks
  threshold: tma_contested_accesses > 0.05 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))
  desc:      Estimated fraction of cycles while the memory subsystem was handling synchronizations due to contested accesses. Contested accesses occur when data written by one Logical Processor are read by another Logical Processor on a different Physical Core. Examples include synchronizations such as locks, true data sharing such as modified locked variables, and false sharing. Sample with: MEM_LOAD_L3_HIT_RETIRED.XSNP_HITM_PS;MEM_LOAD_L3_HIT_RETIRED.XSNP_MISS_PS. Related metrics: tma_data_sharing, tma_false_sharing, tma_machine_clears, tma_remote_cache

tma_data_sharing [100%] {BvMS;Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_l3_bound_group}
  expr:      43 * (MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_HIT * (1 + MEM_LOAD_UOPS_RETIRED.HIT_LFB / (MEM_LOAD_UOPS_RETIRED.L2_HIT + MEM_LOAD_UOPS_RETIRED.LLC_HIT + MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_HIT + MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_HITM + MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_MISS + MEM_LOAD_UOPS_LLC_MISS_RETIRED.LOCAL_DRAM + MEM_LOAD_UOPS_LLC_MISS_RETIRED.REMOTE_DRAM + MEM_LOAD_UOPS_LLC_MISS_RETIRED.REMOTE_HITM + MEM_LOAD_UOPS_LLC_MISS_RETIRED.REMOTE_FWD))) / tma_info_thread_clks
  threshold: tma_data_sharing > 0.05 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))
  desc:      Estimated fraction of cycles while the memory subsystem was handling synchronizations due to data-sharing accesses. Data shared by multiple Logical Processors (even just read shared) may cause increased access latency due to cache coherency. Excessive data sharing can drastically harm multithreaded performance. Sample with: MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_HIT_PS. Related metrics: tma_contested_accesses, tma_false_sharing, tma_machine_clears, tma_remote_cache
tma_l3_hit_latency [100%] {BvML;MemoryLat;TopdownL4;tma_L4_group;tma_issueLat;tma_l3_bound_group}
  expr:      41 * (MEM_LOAD_UOPS_RETIRED.LLC_HIT * (1 + MEM_LOAD_UOPS_RETIRED.HIT_LFB / (MEM_LOAD_UOPS_RETIRED.L2_HIT + MEM_LOAD_UOPS_RETIRED.LLC_HIT + MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_HIT + MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_HITM + MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_MISS + MEM_LOAD_UOPS_LLC_MISS_RETIRED.LOCAL_DRAM + MEM_LOAD_UOPS_LLC_MISS_RETIRED.REMOTE_DRAM + MEM_LOAD_UOPS_LLC_MISS_RETIRED.REMOTE_HITM + MEM_LOAD_UOPS_LLC_MISS_RETIRED.REMOTE_FWD))) / tma_info_thread_clks
  threshold: tma_l3_hit_latency > 0.1 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))
  desc:      Estimated fraction of cycles with demand load accesses that hit the L3 cache under unloaded scenarios (possibly L3 latency limited). Avoiding private cache misses (i.e. L2 misses/L3 hits) will improve the latency, reduce contention with sibling physical cores and increase performance. Note the value of this node may overlap with its siblings. Sample with: MEM_LOAD_UOPS_RETIRED.L3_HIT_PS. Related metrics: tma_mem_latency

tma_local_mem [100%] {Server;TopdownL5;tma_L5_group;tma_mem_latency_group}
  expr:      200 * (MEM_LOAD_UOPS_LLC_MISS_RETIRED.LOCAL_DRAM * (1 + MEM_LOAD_UOPS_RETIRED.HIT_LFB / (MEM_LOAD_UOPS_RETIRED.L2_HIT + MEM_LOAD_UOPS_RETIRED.LLC_HIT + MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_HIT + MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_HITM + MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_MISS + MEM_LOAD_UOPS_LLC_MISS_RETIRED.LOCAL_DRAM + MEM_LOAD_UOPS_LLC_MISS_RETIRED.REMOTE_DRAM + MEM_LOAD_UOPS_LLC_MISS_RETIRED.REMOTE_HITM + MEM_LOAD_UOPS_LLC_MISS_RETIRED.REMOTE_FWD))) / tma_info_thread_clks
  threshold: tma_local_mem > 0.1 & (tma_mem_latency > 0.1 & (tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)))
  desc:      Estimated fraction of cycles while the memory subsystem was handling loads from local memory. Caching will improve the latency and increase performance. Sample with: MEM_LOAD_UOPS_L3_MISS_RETIRED.LOCAL_DRAM_PS
tma_remote_cache [100%] {Offcore;Server;Snoop;TopdownL5;tma_L5_group;tma_issueSyncxn;tma_mem_latency_group}
  expr:      (200 * (MEM_LOAD_UOPS_LLC_MISS_RETIRED.REMOTE_HITM * (1 + MEM_LOAD_UOPS_RETIRED.HIT_LFB / (MEM_LOAD_UOPS_RETIRED.L2_HIT + MEM_LOAD_UOPS_RETIRED.LLC_HIT + MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_HIT + MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_HITM + MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_MISS + MEM_LOAD_UOPS_LLC_MISS_RETIRED.LOCAL_DRAM + MEM_LOAD_UOPS_LLC_MISS_RETIRED.REMOTE_DRAM + MEM_LOAD_UOPS_LLC_MISS_RETIRED.REMOTE_HITM + MEM_LOAD_UOPS_LLC_MISS_RETIRED.REMOTE_FWD))) + 180 * (MEM_LOAD_UOPS_LLC_MISS_RETIRED.REMOTE_FWD * (1 + MEM_LOAD_UOPS_RETIRED.HIT_LFB / (MEM_LOAD_UOPS_RETIRED.L2_HIT + MEM_LOAD_UOPS_RETIRED.LLC_HIT + MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_HIT + MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_HITM + MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_MISS + MEM_LOAD_UOPS_LLC_MISS_RETIRED.LOCAL_DRAM + MEM_LOAD_UOPS_LLC_MISS_RETIRED.REMOTE_DRAM + MEM_LOAD_UOPS_LLC_MISS_RETIRED.REMOTE_HITM + MEM_LOAD_UOPS_LLC_MISS_RETIRED.REMOTE_FWD)))) / tma_info_thread_clks
  threshold: tma_remote_cache > 0.05 & (tma_mem_latency > 0.1 & (tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)))
  desc:      Estimated fraction of cycles while the memory subsystem was handling loads from a remote cache in other sockets, including synchronization issues. This is often caused by non-optimal NUMA allocations. #link to NUMA article. Sample with: MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_HITM_PS;MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_FWD_PS. Related metrics: tma_contested_accesses, tma_data_sharing, tma_false_sharing, tma_machine_clears

tma_remote_mem [100%] {Server;Snoop;TopdownL5;tma_L5_group;tma_mem_latency_group}
  expr:      310 * (MEM_LOAD_UOPS_LLC_MISS_RETIRED.REMOTE_DRAM * (1 + MEM_LOAD_UOPS_RETIRED.HIT_LFB / (MEM_LOAD_UOPS_RETIRED.L2_HIT + MEM_LOAD_UOPS_RETIRED.LLC_HIT + MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_HIT + MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_HITM + MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_MISS + MEM_LOAD_UOPS_LLC_MISS_RETIRED.LOCAL_DRAM + MEM_LOAD_UOPS_LLC_MISS_RETIRED.REMOTE_DRAM + MEM_LOAD_UOPS_LLC_MISS_RETIRED.REMOTE_HITM + MEM_LOAD_UOPS_LLC_MISS_RETIRED.REMOTE_FWD))) / tma_info_thread_clks
  threshold: tma_remote_mem > 0.1 & (tma_mem_latency > 0.1 & (tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)))
  desc:      Estimated fraction of cycles while the memory subsystem was handling loads from remote memory. This is often caused by non-optimal NUMA allocations. #link to NUMA article. Sample with: MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_DRAM_PS

tma_dsb_switches [100%] {DSBmiss;FetchLat;TopdownL3;tma_L3_group;tma_fetch_latency_group;tma_issueFB}
  expr:      DSB2MITE_SWITCHES.PENALTY_CYCLES / tma_info_thread_clks
  threshold: tma_dsb_switches > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15)
  desc:      Fraction of cycles the CPU was stalled due to switches from the DSB to the MITE pipeline. The DSB (decoded i-cache) is a Uop Cache where the front-end directly delivers uops (micro operations), avoiding heavy x86 decoding. The DSB pipeline has shorter latency and delivers higher bandwidth than the MITE (legacy instruction decode pipeline). Switching between the two pipelines can cause penalties, hence this metric measures the exposed penalty. Related metrics: tma_fetch_bandwidth, tma_info_frontend_dsb_coverage, tma_lcp
tma_dtlb_load [100%] {BvMT;MemoryTLB;TopdownL4;tma_L4_group;tma_issueTLB;tma_l1_bound_group}
  expr:      (7 * DTLB_LOAD_MISSES.STLB_HIT + DTLB_LOAD_MISSES.WALK_DURATION) / tma_info_thread_clks
  threshold: tma_dtlb_load > 0.1
  desc:      Roughly estimates the fraction of cycles where the Data TLB (DTLB) was missed by load accesses. TLBs (Translation Look-aside Buffers) are processor caches for recently used entries out of the Page Tables that the operating system uses to map virtual to physical addresses. This metric approximates the potential delay of demand loads missing the first-level data TLB (assuming a worst-case scenario with back-to-back misses to different pages), including hitting in the second-level TLB (STLB) as well as performing a hardware page walk on an STLB miss. Sample with: MEM_UOPS_RETIRED.STLB_MISS_LOADS_PS. Related metrics: tma_dtlb_store

tma_fetch_bandwidth [100%] {FetchBW;Frontend;TmaL2;TopdownL2;tma_L2_group;tma_frontend_bound_group;tma_issueFB} (default group: TopdownL2)
  expr:      tma_frontend_bound - tma_fetch_latency
  threshold: tma_fetch_bandwidth > 0.2
  desc:      Fraction of slots the CPU was stalled due to Frontend bandwidth issues. For example, inefficiencies at the instruction decoders, or restrictions for caching in the DSB (decoded uops cache), are categorized under Fetch Bandwidth. In such cases the Frontend typically delivers a suboptimal amount of uops to the Backend. Related metrics: tma_dsb_switches, tma_info_frontend_dsb_coverage, tma_lcp

tma_fp_scalar [100%] {Compute;Flops;TopdownL4;tma_L4_group;tma_fp_arith_group;tma_issue2P}
  expr:      (FP_COMP_OPS_EXE.SSE_SCALAR_SINGLE + FP_COMP_OPS_EXE.SSE_SCALAR_DOUBLE) / UOPS_DISPATCHED.THREAD
  threshold: tma_fp_scalar > 0.1 & (tma_fp_arith > 0.2 & tma_light_operations > 0.6)
  desc:      Approximates the arithmetic floating-point (FP) scalar uops fraction the CPU has retired. May overcount due to FMA double counting. Related metrics: tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_port_6, tma_ports_utilized_2

tma_fp_vector [100%] {Compute;Flops;TopdownL4;tma_L4_group;tma_fp_arith_group;tma_issue2P}
  expr:      (FP_COMP_OPS_EXE.SSE_PACKED_DOUBLE + FP_COMP_OPS_EXE.SSE_PACKED_SINGLE + SIMD_FP_256.PACKED_SINGLE + SIMD_FP_256.PACKED_DOUBLE) / UOPS_DISPATCHED.THREAD
  threshold: tma_fp_vector > 0.1 & (tma_fp_arith > 0.2 & tma_light_operations > 0.6)
  desc:      Approximates the arithmetic floating-point (FP) vector uops fraction the CPU has retired, aggregated across all vector widths. May overcount due to FMA double counting. Related metrics: tma_fp_scalar, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_port_6, tma_ports_utilized_2
tma_fp_vector_128b [100%] {Compute;Flops;TopdownL5;tma_L5_group;tma_fp_vector_group;tma_issue2P}
  expr:      (FP_COMP_OPS_EXE.SSE_SCALAR_DOUBLE + FP_COMP_OPS_EXE.SSE_PACKED_DOUBLE) / UOPS_DISPATCHED.THREAD
  threshold: tma_fp_vector_128b > 0.1 & (tma_fp_vector > 0.1 & (tma_fp_arith > 0.2 & tma_light_operations > 0.6))
  desc:      Approximates the arithmetic FP vector uops fraction the CPU has retired for 128-bit wide vectors. May overcount due to FMA double counting. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_256b, tma_fp_vector_512b, tma_port_6, tma_ports_utilized_2

tma_fp_vector_256b [100%] {Compute;Flops;TopdownL5;tma_L5_group;tma_fp_vector_group;tma_issue2P}
  expr:      (SIMD_FP_256.PACKED_DOUBLE + SIMD_FP_256.PACKED_SINGLE) / UOPS_DISPATCHED.THREAD
  threshold: tma_fp_vector_256b > 0.1 & (tma_fp_vector > 0.1 & (tma_fp_arith > 0.2 & tma_light_operations > 0.6))
  desc:      Approximates the arithmetic FP vector uops fraction the CPU has retired for 256-bit wide vectors. May overcount due to FMA double counting. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_512b, tma_port_6, tma_ports_utilized_2

tma_info_core_ilp {Backend;Cor;Pipeline;PortsUtil}
  expr: UOPS_DISPATCHED.THREAD / (cpu@UOPS_DISPATCHED.CORE\,cmask\=1@ / 2 if #SMT_on else cpu@UOPS_DISPATCHED.CORE\,cmask\=1@)
  desc: Instruction-Level-Parallelism (average number of uops executed when there is execution) per thread (logical processor).

tma_info_frontend_dsb_coverage {DSB;Fed;FetchBW;tma_issueFB}
  expr:      IDQ.DSB_UOPS / (IDQ.DSB_UOPS + LSD.UOPS + IDQ.MITE_UOPS + IDQ.MS_UOPS)
  threshold: tma_info_frontend_dsb_coverage < 0.7 & tma_info_thread_ipc / 4 > 0.35
  desc:      Fraction of uops delivered by the DSB (aka Decoded ICache, or Uop Cache). Related metrics: tma_dsb_switches, tma_fetch_bandwidth, tma_lcp

tma_info_system_dram_bw_use {HPC;MemOffcore;MemoryBW;SoC;tma_issueBW}
  expr: 64 * (UNC_M_CAS_COUNT.RD + UNC_M_CAS_COUNT.WR) / 1e9 / duration_time
  desc: Average external Memory Bandwidth Use for reads and writes [GB / sec]. Related metrics: tma_mem_bandwidth

tma_info_thread_execute_per_issue {Cor;Pipeline}
  expr: UOPS_DISPATCHED.THREAD / UOPS_ISSUED.ANY
  desc: The ratio of Executed to Issued uops. A ratio > 1 suggests a high rate of uop micro-fusions; a ratio < 1 suggests a high rate of "execute" at the rename stage.

tma_lcp [100%] {FetchLat;TopdownL3;tma_L3_group;tma_fetch_latency_group;tma_issueFB}
  expr:      ILD_STALL.LCP / tma_info_thread_clks
  threshold: tma_lcp > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15)
  desc:      Fraction of cycles the CPU was stalled due to Length Changing Prefixes (LCPs). Using proper compiler flags, or the Intel Compiler by default, will certainly avoid this. #Link: Optimization Guide about LCP BKMs. Related metrics: tma_dsb_switches, tma_fetch_bandwidth, tma_info_frontend_dsb_coverage
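tma_info_frontend_dsb_coverage is the DSB's share of all delivered uops across the four uop sources (DSB, LSD, MITE, MS); the extra IPC/4 > 0.35 condition restricts the flag to reasonably busy front ends. Sketch with made-up counts:

    idq = {  # hypothetical uop-delivery counts per source
        "DSB_UOPS": 6.0e9, "LSD.UOPS": 1.0e9,
        "MITE_UOPS": 2.5e9, "MS_UOPS": 0.5e9,
    }
    ipc = 1.9  # hypothetical tma_info_thread_ipc
    coverage = idq["DSB_UOPS"] / sum(idq.values())
    flagged = coverage < 0.7 and ipc / 4 > 0.35
    print(f"dsb_coverage = {coverage:.2f} flagged={flagged}")  # 0.60, True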
tma_machine_clears [100%] {BadSpec;BvMS;MachineClears;TmaL2;TopdownL2;tma_L2_group;tma_bad_speculation_group;tma_issueMC;tma_issueSyncxn} (default group: TopdownL2)
  expr:      tma_bad_speculation - tma_branch_mispredicts
  threshold: tma_machine_clears > 0.1 & tma_bad_speculation > 0.15
  desc:      Fraction of slots the CPU has wasted due to Machine Clears. These slots are either wasted by uops fetched prior to the clear, or are stalls while the out-of-order portion of the machine recovers its state after the clear. For example, this can happen due to memory-ordering nukes (e.g. Memory Disambiguation) or Self-Modifying-Code (SMC) nukes. Sample with: MACHINE_CLEARS.COUNT. Related metrics: tma_clears_resteers, tma_l1_bound, tma_microcode_sequencer, tma_ms_switches, tma_remote_cache

tma_mem_bandwidth [100%] {BvMS;MemoryBW;Offcore;TopdownL4;tma_L4_group;tma_dram_bound_group;tma_issueBW}
  expr:      min(CPU_CLK_UNHALTED.THREAD, cpu@OFFCORE_REQUESTS_OUTSTANDING.ALL_DATA_RD\,cmask\=6@) / tma_info_thread_clks
  threshold: tma_mem_bandwidth > 0.2 & (tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))
  desc:      Estimated fraction of cycles where the core's performance was likely hurt due to approaching bandwidth limits of external memory -- DRAM ([SPR-HBM] and/or HBM). The underlying heuristic assumes that similar off-core traffic is generated by all IA cores. This metric does not aggregate non-data-read requests by this logical processor, requests from other IA Logical Processors/Physical Cores/sockets, or other non-IA devices like GPUs; hence the maximum external memory bandwidth limits may or may not be approached when this metric is flagged (see Uncore counters for that). Related metrics: tma_info_system_dram_bw_use

tma_mem_latency [100%] {BvML;MemoryLat;Offcore;TopdownL4;tma_L4_group;tma_dram_bound_group;tma_issueLat}
  expr:      min(CPU_CLK_UNHALTED.THREAD, OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DATA_RD) / tma_info_thread_clks - tma_mem_bandwidth
  threshold: tma_mem_latency > 0.1 & (tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))
  desc:      Estimated fraction of cycles where performance was likely hurt due to latency from external memory -- DRAM ([SPR-HBM] and/or HBM). This metric does not aggregate requests from other Logical Processors/Physical Cores/sockets (see Uncore counters for that).

tma_memory_bound [100%] {Backend;TmaL2;TopdownL2;tma_L2_group;tma_backend_bound_group} (default group: TopdownL2)
  expr:      (min(CPU_CLK_UNHALTED.THREAD, CYCLE_ACTIVITY.STALLS_L1D_PENDING) + RESOURCE_STALLS.SB) / (min(CPU_CLK_UNHALTED.THREAD, CYCLE_ACTIVITY.CYCLES_NO_DISPATCH) + cpu@UOPS_DISPATCHED.THREAD\,cmask\=1@ - (cpu@UOPS_DISPATCHED.THREAD\,cmask\=3@ if tma_info_thread_ipc > 1.8 else cpu@UOPS_DISPATCHED.THREAD\,cmask\=2@) - (RS_EVENTS.EMPTY_CYCLES if tma_fetch_latency > 0.1 else 0) + RESOURCE_STALLS.SB) * tma_backend_bound
  threshold: tma_memory_bound > 0.2 & tma_backend_bound > 0.2
  desc:      Fraction of slots where the Memory subsystem within the Backend was a bottleneck. Memory Bound estimates the fraction of slots where the pipeline is likely stalled due to demand load or store instructions. This accounts mainly for (1) non-completed in-flight memory demand loads, which coincide with execution-unit starvation, and (2) cases where stores impose backpressure on the pipeline when many of them get buffered at the same time (the less common of the two).
Memory Bound estimates fraction of slots where pipeline is likely stalled due to demand load or store instructions. This accounts mainly for (1) non-completed in-flight memory demand loads which coincides with execution units starvation; in addition to (2) cases where stores could impose backpressure on the pipeline when many of them get buffered at the same time (less common out of the two)100%TopdownL201tma_ports_utilizationPortsUtil;TopdownL3;tma_L3_group;tma_core_bound_group(min(CPU_CLK_UNHALTED.THREAD, CYCLE_ACTIVITY.CYCLES_NO_DISPATCH) + cpu@UOPS_DISPATCHED.THREAD\,cmask\=1@ - (cpu@UOPS_DISPATCHED.THREAD\,cmask\=3@ if tma_info_thread_ipc > 1.8 else cpu@UOPS_DISPATCHED.THREAD\,cmask\=2@) - (RS_EVENTS.EMPTY_CYCLES if tma_fetch_latency > 0.1 else 0) + RESOURCE_STALLS.SB - RESOURCE_STALLS.SB - min(CPU_CLK_UNHALTED.THREAD, CYCLE_ACTIVITY.STALLS_L1D_PENDING)) / tma_info_thread_clkstma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2)This metric estimates fraction of cycles the CPU performance was potentially limited due to Core computation issues (non divider-related)This metric estimates fraction of cycles the CPU performance was potentially limited due to Core computation issues (non divider-related).  Two distinct categories can be attributed into this metric: (1) heavy data-dependency among contiguous instructions would manifest in this metric - such cases are often referred to as low Instruction Level Parallelism (ILP). (2) Contention on some hardware execution unit other than Divider. For example; when there are too many multiply operations100%01tma_x87_useCompute;TopdownL4;tma_L4_group;tma_fp_arith_groupUOPS_RETIRED.RETIRE_SLOTS * FP_COMP_OPS_EXE.X87 / UOPS_DISPATCHED.THREADtma_x87_use > 0.1 & (tma_fp_arith > 0.2 & tma_light_operations > 0.6)This metric serves as an approximation of legacy x87 usageThis metric serves as an approximation of legacy x87 usage. It accounts for instructions beyond X87 FP arithmetic operations; hence may be used as a thermometer to avoid X87 high usage and preferably upgrade to modern ISA. See Tip under Tuning Hint100%00tma_backend_boundTopdownL1;tma_L1_groupcpu_atom@TOPDOWN_BE_BOUND.ALL_P@ / (6 * cpu_atom@CPU_CLK_UNHALTED.CORE@)tma_backend_bound > 0.1Counts the total number of issue slots that were not consumed by the backend due to backend stallsCounts the total number of issue slots that were not consumed by the backend due to backend stalls. Note that uops must be available for consumption in order for this event to count. If a uop is not available (IQ is empty), this event will not count100%TopdownL100tma_bad_speculationTopdownL1;tma_L1_groupcpu_atom@TOPDOWN_BAD_SPECULATION.ALL_P@ / (6 * cpu_atom@CPU_CLK_UNHALTED.CORE@)tma_bad_speculation > 0.15Counts the total number of issue slots that were not consumed by the backend because allocation is stalled due to a mispredicted jump or a machine clearCounts the total number of issue slots that were not consumed by the backend because allocation is stalled due to a mispredicted jump or a machine clear. Only issue slots wasted due to fast nukes such as memory ordering nukes are counted. Other nukes are not accounted for. Counts all issue slots blocked during this recovery window including relevant microcode flows and while uops are not yet available in the instruction queue (IQ). 
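All of the cpu_atom expressions that follow share one denominator: the E-core pipeline is modeled as six issue slots per core cycle, so total slots = 6 * CPU_CLK_UNHALTED.CORE. A minimal sketch of the level-1 breakdown built from that convention, with hypothetical raw counts:

    # Sketch: Atom/E-core level-1 topdown fractions (6 issue slots per core cycle).
    def atom_l1_fractions(counts):
        slots = 6 * counts["CPU_CLK_UNHALTED.CORE"]
        return {
            "tma_frontend_bound":  counts["TOPDOWN_FE_BOUND.ALL_P"] / slots,
            "tma_bad_speculation": counts["TOPDOWN_BAD_SPECULATION.ALL_P"] / slots,
            "tma_backend_bound":   counts["TOPDOWN_BE_BOUND.ALL_P"] / slots,
            "tma_retiring":        counts["TOPDOWN_RETIRING.ALL_P"] / slots,
        }

    # The four fractions of a well-formed sample should sum to ~1.0.
    demo = {"CPU_CLK_UNHALTED.CORE": 1e9, "TOPDOWN_FE_BOUND.ALL_P": 1.2e9,
            "TOPDOWN_BAD_SPECULATION.ALL_P": 0.3e9, "TOPDOWN_BE_BOUND.ALL_P": 1.5e9,
            "TOPDOWN_RETIRING.ALL_P": 3.0e9}
    print(atom_l1_fractions(demo))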
tma_branch_detect  (scale: 100%)
  groups:    TopdownL3;tma_L3_group;tma_ifetch_latency_group
  expr:      cpu_atom@TOPDOWN_FE_BOUND.BRANCH_DETECT@ / (6 * cpu_atom@CPU_CLK_UNHALTED.CORE@)
  threshold: tma_branch_detect > 0.05 & (tma_ifetch_latency > 0.15 & tma_frontend_bound > 0.2)
  desc:      Counts the number of issue slots that were not delivered by the frontend due to BACLEARS, which occur when the Branch Target Buffer (BTB) prediction, or lack thereof, was corrected by a later branch predictor in the frontend. Includes BACLEARS due to all branch types, including conditional and unconditional jumps, returns, and indirect branches.

tma_branch_mispredicts  (scale: 100%; default group: TopdownL2)
  groups:    TopdownL2;tma_L2_group;tma_bad_speculation_group
  expr:      cpu_atom@TOPDOWN_BAD_SPECULATION.MISPREDICT@ / (6 * cpu_atom@CPU_CLK_UNHALTED.CORE@)
  threshold: tma_branch_mispredicts > 0.05 & tma_bad_speculation > 0.15
  desc:      Counts the number of issue slots that were not consumed by the backend due to branch mispredicts.

tma_branch_resteer  (scale: 100%)
  groups:    TopdownL3;tma_L3_group;tma_ifetch_latency_group
  expr:      cpu_atom@TOPDOWN_FE_BOUND.BRANCH_RESTEER@ / (6 * cpu_atom@CPU_CLK_UNHALTED.CORE@)
  threshold: tma_branch_resteer > 0.05 & (tma_ifetch_latency > 0.15 & tma_frontend_bound > 0.2)
  desc:      Counts the number of issue slots that were not delivered by the frontend due to BTCLEARS, which occur when the Branch Target Buffer (BTB) predicts a taken branch.

tma_cisc  (scale: 100%)
  groups:    TopdownL3;tma_L3_group;tma_ifetch_bandwidth_group
  expr:      cpu_atom@TOPDOWN_FE_BOUND.CISC@ / (6 * cpu_atom@CPU_CLK_UNHALTED.CORE@)
  threshold: tma_cisc > 0.05 & (tma_ifetch_bandwidth > 0.1 & tma_frontend_bound > 0.2)
  desc:      Counts the number of issue slots that were not delivered by the frontend due to the microcode sequencer (MS).

tma_core_bound  (scale: 100%; default group: TopdownL2)
  groups:    TopdownL2;tma_L2_group;tma_backend_bound_group
  expr:      cpu_atom@TOPDOWN_BE_BOUND.ALLOC_RESTRICTIONS@ / (6 * cpu_atom@CPU_CLK_UNHALTED.CORE@)
  threshold: tma_core_bound > 0.1 & tma_backend_bound > 0.1
  desc:      Counts the number of cycles due to backend-bound stalls that are bounded by core restrictions and not attributed to an outstanding load or store, or a resource limitation.

tma_decode  (scale: 100%)
  groups:    TopdownL3;tma_L3_group;tma_ifetch_bandwidth_group
  expr:      cpu_atom@TOPDOWN_FE_BOUND.DECODE@ / (6 * cpu_atom@CPU_CLK_UNHALTED.CORE@)
  threshold: tma_decode > 0.05 & (tma_ifetch_bandwidth > 0.1 & tma_frontend_bound > 0.2)
  desc:      Counts the number of issue slots that were not delivered by the frontend due to decode stalls.

tma_fast_nuke  (scale: 100%)
  groups:    TopdownL3;tma_L3_group;tma_machine_clears_group
  expr:      cpu_atom@TOPDOWN_BAD_SPECULATION.FASTNUKE@ / (6 * cpu_atom@CPU_CLK_UNHALTED.CORE@)
  threshold: tma_fast_nuke > 0.05 & (tma_machine_clears > 0.05 & tma_bad_speculation > 0.15)
  desc:      Counts the number of issue slots that were not consumed by the backend due to a machine clear that does not require the use of microcode, classified as a fast nuke, due to memory ordering, memory disambiguation and memory renaming.

tma_frontend_bound  (scale: 100%; default group: TopdownL1)
  groups:    TopdownL1;tma_L1_group
  expr:      cpu_atom@TOPDOWN_FE_BOUND.ALL_P@ / (6 * cpu_atom@CPU_CLK_UNHALTED.CORE@)
  threshold: tma_frontend_bound > 0.2
  desc:      Counts the number of issue slots that were not consumed by the backend due to frontend stalls.

tma_icache_misses  (scale: 100%)
  groups:    TopdownL3;tma_L3_group;tma_ifetch_latency_group
  expr:      cpu_atom@TOPDOWN_FE_BOUND.ICACHE@ / (6 * cpu_atom@CPU_CLK_UNHALTED.CORE@)
  threshold: tma_icache_misses > 0.05 & (tma_ifetch_latency > 0.15 & tma_frontend_bound > 0.2)
  desc:      Counts the number of issue slots that were not delivered by the frontend due to instruction cache misses.

tma_ifetch_bandwidth  (scale: 100%; default group: TopdownL2)
  groups:    TopdownL2;tma_L2_group;tma_frontend_bound_group
  expr:      cpu_atom@TOPDOWN_FE_BOUND.FRONTEND_BANDWIDTH@ / (6 * cpu_atom@CPU_CLK_UNHALTED.CORE@)
  threshold: tma_ifetch_bandwidth > 0.1 & tma_frontend_bound > 0.2
  desc:      Counts the number of issue slots that were not delivered by the frontend due to frontend bandwidth restrictions caused by decode, predecode, CISC, and other limitations.

tma_ifetch_latency  (scale: 100%; default group: TopdownL2)
  groups:    TopdownL2;tma_L2_group;tma_frontend_bound_group
  expr:      cpu_atom@TOPDOWN_FE_BOUND.FRONTEND_LATENCY@ / (6 * cpu_atom@CPU_CLK_UNHALTED.CORE@)
  threshold: tma_ifetch_latency > 0.15 & tma_frontend_bound > 0.2
  desc:      Counts the number of issue slots that were not delivered by the frontend due to frontend latency restrictions caused by icache misses, ITLB misses, branch detection, and resteer limitations.

tma_info_arith_inst_mix_ipflop
  groups:    Flops
  expr:      cpu_atom@INST_RETIRED.ANY@ / cpu_atom@FP_FLOPS_RETIRED.ALL@
  desc:      Instructions per Floating Point (FP) Operation.

tma_info_arith_inst_mix_ipfparith_avx128
  groups:    Flops
  expr:      cpu_atom@INST_RETIRED.ANY@ / (cpu_atom@FP_INST_RETIRED.128B_DP@ + cpu_atom@FP_INST_RETIRED.128B_SP@)
  desc:      Instructions per FP Arithmetic AVX/SSE 128-bit instruction.

tma_info_arith_inst_mix_ipfparith_scalar_dp
  groups:    Flops
  expr:      cpu_atom@INST_RETIRED.ANY@ / cpu_atom@FP_INST_RETIRED.64B_DP@
  desc:      Instructions per FP Arithmetic Scalar Double-Precision instruction.

tma_info_arith_inst_mix_ipfparith_scalar_sp
  groups:    Flops
  expr:      cpu_atom@INST_RETIRED.ANY@ / cpu_atom@FP_INST_RETIRED.32B_SP@
  desc:      Instructions per FP Arithmetic Scalar Single-Precision instruction.

tma_info_bottleneck_%_ifetch_miss_bound_cycles
  groups:    Ifetch
  expr:      100 * cpu_atom@MEM_BOUND_STALLS_IFETCH.ALL@ / cpu_atom@CPU_CLK_UNHALTED.CORE@
  desc:      Percentage of time that allocation and retirement is stalled by the Frontend Cluster due to an ifetch miss, either an Icache or an ITLB miss. See Info.Ifetch_Bound.

tma_info_bottleneck_%_load_miss_bound_cycles
  groups:    Load_Store_Miss
  expr:      100 * cpu_atom@MEM_BOUND_STALLS_LOAD.ALL@ / cpu_atom@CPU_CLK_UNHALTED.CORE@
  desc:      Percentage of time that retirement is stalled due to an L1 miss. See Info.Load_Miss_Bound.

tma_info_br_inst_mix_ipcall
  expr:      cpu_atom@INST_RETIRED.ANY@ / cpu_atom@BR_INST_RETIRED.NEAR_CALL@
  desc:      Instructions per (near) call (a lower number means a higher occurrence rate).

tma_info_core_flopc
  groups:    Flops
  expr:      cpu_atom@FP_FLOPS_RETIRED.ALL@ / cpu_atom@CPU_CLK_UNHALTED.CORE@
  desc:      Floating Point Operations Per Cycle.

tma_info_core_upi
  expr:      cpu_atom@TOPDOWN_RETIRING.ALL_P@ / cpu_atom@INST_RETIRED.ANY@
  desc:      Uops Per Instruction.

tma_info_ifetch_miss_bound_%_ifetchmissbound_with_l2hit
  expr:      100 * cpu_atom@MEM_BOUND_STALLS_IFETCH.L2_HIT@ / cpu_atom@MEM_BOUND_STALLS_IFETCH.ALL@
  desc:      Percentage of ifetch-miss-bound stalls where the ifetch miss hits in the L2.

tma_info_ifetch_miss_bound_%_ifetchmissbound_with_l3hit
  expr:      100 * cpu_atom@MEM_BOUND_STALLS_IFETCH.LLC_HIT@ / cpu_atom@MEM_BOUND_STALLS_IFETCH.ALL@
  desc:      Percentage of ifetch-miss-bound stalls where the ifetch miss hits in the L3.

tma_info_ifetch_miss_bound_%_ifetchmissbound_with_l3miss
  expr:      100 * cpu_atom@MEM_BOUND_STALLS_IFETCH.LLC_MISS@ / cpu_atom@MEM_BOUND_STALLS_IFETCH.ALL@
  desc:      Percentage of ifetch-miss-bound stalls where the ifetch miss subsequently misses in the L3.

tma_info_load_miss_bound_%_loadmissbound_with_l2hit
  groups:    load_store_bound
  expr:      100 * cpu_atom@MEM_BOUND_STALLS_LOAD.L2_HIT@ / cpu_atom@MEM_BOUND_STALLS_LOAD.ALL@
  desc:      Percentage of memory-bound stalls where retirement is stalled due to an L1 miss that hit the L2.

tma_info_load_miss_bound_%_loadmissbound_with_l3hit
  groups:    load_store_bound
  expr:      100 * cpu_atom@MEM_BOUND_STALLS_LOAD.LLC_HIT@ / cpu_atom@MEM_BOUND_STALLS_LOAD.ALL@
  desc:      Percentage of memory-bound stalls where retirement is stalled due to an L1 miss that hit the L3.

tma_info_load_miss_bound_%_loadmissbound_with_l3miss
  groups:    load_store_bound
  expr:      100 * cpu_atom@MEM_BOUND_STALLS_LOAD.LLC_MISS@ / cpu_atom@MEM_BOUND_STALLS_LOAD.ALL@
  desc:      Percentage of memory-bound stalls where retirement is stalled due to an L1 miss that subsequently misses the L3.

tma_info_load_store_bound_load_bound
  groups:    load_store_bound
  expr:      100 * (cpu_atom@LD_HEAD.L1_BOUND_AT_RET@ + cpu_atom@MEM_BOUND_STALLS_LOAD.ALL@) / cpu_atom@CPU_CLK_UNHALTED.CORE@
  desc:      Counts the number of cycles that the oldest load of the load buffer is stalled at retirement.

tma_info_mem_exec_blocks_%_loads_with_adressaliasing
  expr:      100 * cpu_atom@LD_BLOCKS.ADDRESS_ALIAS@ / cpu_atom@MEM_UOPS_RETIRED.ALL_LOADS@
  desc:      Percentage of total non-speculative loads with an address-aliasing block.

tma_info_mem_mix_memload_ratio
  expr:      1e3 * cpu_atom@MEM_UOPS_RETIRED.ALL_LOADS@ / cpu_atom@TOPDOWN_RETIRING.ALL_P@
  desc:      Ratio of memory load uops to all uops (per thousand uops).

tma_info_serialization _%_tpause_cycles
  expr:      100 * cpu_atom@SERIALIZATION.C01_MS_SCB@ / (6 * cpu_atom@CPU_CLK_UNHALTED.CORE@)
  desc:      Percentage of time that the core is stalled due to a TPAUSE or UMWAIT instruction.

tma_info_system_gflops
  groups:    Flops
  expr:      cpu_atom@FP_FLOPS_RETIRED.ALL@ / (duration_time * 1e9)
  desc:      Giga Floating Point Operations Per Second. Aggregate across all supported options of: FP precisions, scalar and vector instructions, vector width.

tma_info_uop_mix_fpdiv_uop_ratio
  expr:      100 * cpu_atom@UOPS_RETIRED.FPDIV@ / cpu_atom@TOPDOWN_RETIRING.ALL_P@
  desc:      Percentage of all uops which are FPDiv uops.

tma_info_uop_mix_idiv_uop_ratio
  expr:      100 * cpu_atom@UOPS_RETIRED.IDIV@ / cpu_atom@TOPDOWN_RETIRING.ALL_P@
  desc:      Percentage of all uops which are IDiv uops.

tma_info_uop_mix_microcode_uop_ratio
  expr:      100 * cpu_atom@UOPS_RETIRED.MS@ / cpu_atom@TOPDOWN_RETIRING.ALL_P@
  desc:      Percentage of all uops which are microcode uops.

tma_info_uop_mix_x87_uop_ratio
  expr:      100 * cpu_atom@UOPS_RETIRED.X87@ / cpu_atom@TOPDOWN_RETIRING.ALL_P@
  desc:      Percentage of all uops which are x87 uops.

tma_itlb_misses  (scale: 100%)
  groups:    TopdownL3;tma_L3_group;tma_ifetch_latency_group
  expr:      cpu_atom@TOPDOWN_FE_BOUND.ITLB_MISS@ / (6 * cpu_atom@CPU_CLK_UNHALTED.CORE@)
  threshold: tma_itlb_misses > 0.05 & (tma_ifetch_latency > 0.15 & tma_frontend_bound > 0.2)
  desc:      Counts the number of issue slots that were not delivered by the frontend due to Instruction Table Lookaside Buffer (ITLB) misses.

tma_machine_clears  (scale: 100%; default group: TopdownL2)
  groups:    TopdownL2;tma_L2_group;tma_bad_speculation_group
  expr:      cpu_atom@TOPDOWN_BAD_SPECULATION.MACHINE_CLEARS@ / (6 * cpu_atom@CPU_CLK_UNHALTED.CORE@)
  threshold: tma_machine_clears > 0.05 & tma_bad_speculation > 0.15
  desc:      Counts the total number of issue slots that were not consumed by the backend because allocation is stalled due to a machine clear (nuke) of any kind, including memory ordering and memory disambiguation.

tma_mem_scheduler  (scale: 100%)
  groups:    TopdownL3;tma_L3_group;tma_resource_bound_group
  expr:      cpu_atom@TOPDOWN_BE_BOUND.MEM_SCHEDULER@ / (6 * cpu_atom@CPU_CLK_UNHALTED.CORE@)
  threshold: tma_mem_scheduler > 0.1 & (tma_resource_bound > 0.2 & tma_backend_bound > 0.1)
  desc:      Counts the number of issue slots that were not consumed by the backend due to memory reservation stalls in which a scheduler is not able to accept uops.

tma_non_mem_scheduler  (scale: 100%)
  groups:    TopdownL3;tma_L3_group;tma_resource_bound_group
  expr:      cpu_atom@TOPDOWN_BE_BOUND.NON_MEM_SCHEDULER@ / (6 * cpu_atom@CPU_CLK_UNHALTED.CORE@)
  threshold: tma_non_mem_scheduler > 0.1 & (tma_resource_bound > 0.2 & tma_backend_bound > 0.1)
  desc:      Counts the number of issue slots that were not consumed by the backend due to IEC or FPC RAT stalls, which can be due to FIQ or IEC reservation stalls in which the integer, floating point or SIMD scheduler is not able to accept uops.

tma_nuke  (scale: 100%)
  groups:    TopdownL3;tma_L3_group;tma_machine_clears_group
  expr:      cpu_atom@TOPDOWN_BAD_SPECULATION.NUKE@ / (6 * cpu_atom@CPU_CLK_UNHALTED.CORE@)
  threshold: tma_nuke > 0.05 & (tma_machine_clears > 0.05 & tma_bad_speculation > 0.15)
  desc:      Counts the number of issue slots that were not consumed by the backend due to a machine clear that requires the use of microcode (slow nuke).

tma_other_fb  (scale: 100%)
  groups:    TopdownL3;tma_L3_group;tma_ifetch_bandwidth_group
  expr:      cpu_atom@TOPDOWN_FE_BOUND.OTHER@ / (6 * cpu_atom@CPU_CLK_UNHALTED.CORE@)
  threshold: tma_other_fb > 0.05 & (tma_ifetch_bandwidth > 0.1 & tma_frontend_bound > 0.2)
  desc:      Counts the number of issue slots that were not delivered by the frontend due to other common frontend stalls not otherwise categorized.

tma_predecode  (scale: 100%)
  groups:    TopdownL3;tma_L3_group;tma_ifetch_bandwidth_group
  expr:      cpu_atom@TOPDOWN_FE_BOUND.PREDECODE@ / (6 * cpu_atom@CPU_CLK_UNHALTED.CORE@)
  threshold: tma_predecode > 0.05 & (tma_ifetch_bandwidth > 0.1 & tma_frontend_bound > 0.2)
  desc:      Counts the number of issue slots that were not delivered by the frontend due to wrong predecodes.

tma_register  (scale: 100%)
  groups:    TopdownL3;tma_L3_group;tma_resource_bound_group
  expr:      cpu_atom@TOPDOWN_BE_BOUND.REGISTER@ / (6 * cpu_atom@CPU_CLK_UNHALTED.CORE@)
  threshold: tma_register > 0.1 & (tma_resource_bound > 0.2 & tma_backend_bound > 0.1)
  desc:      Counts the number of issue slots that were not consumed by the backend due to the physical register file being unable to accept an entry (marble stalls).
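Each tma_* entry above carries a threshold expression: a node is only worth highlighting when its own value and its parents' values clear the stated cut-offs, which is why the thresholds nest the parent conditions in parentheses. A small sketch of how such gating composes, reading & as logical AND (metric values here are hypothetical):

    # Sketch: evaluate the gate "tma_register > 0.1 & (tma_resource_bound > 0.2 & tma_backend_bound > 0.1)".
    def register_flagged(m):
        return m["tma_register"] > 0.1 and (m["tma_resource_bound"] > 0.2
                                            and m["tma_backend_bound"] > 0.1)

    metrics = {"tma_register": 0.14, "tma_resource_bound": 0.25, "tma_backend_bound": 0.30}
    print(register_flagged(metrics))  # True: the node and both ancestors clear their cut-offs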
tma_reorder_buffer  (scale: 100%)
  groups:    TopdownL3;tma_L3_group;tma_resource_bound_group
  expr:      cpu_atom@TOPDOWN_BE_BOUND.REORDER_BUFFER@ / (6 * cpu_atom@CPU_CLK_UNHALTED.CORE@)
  threshold: tma_reorder_buffer > 0.1 & (tma_resource_bound > 0.2 & tma_backend_bound > 0.1)
  desc:      Counts the number of issue slots that were not consumed by the backend due to the reorder buffer being full (ROB stalls).

tma_retiring  (scale: 100%; default group: TopdownL1)
  groups:    TopdownL1;tma_L1_group
  expr:      cpu_atom@TOPDOWN_RETIRING.ALL_P@ / (6 * cpu_atom@CPU_CLK_UNHALTED.CORE@)
  threshold: tma_retiring > 0.75
  desc:      Counts the number of issue slots that result in retirement slots.

tma_serialization  (scale: 100%)
  groups:    TopdownL3;tma_L3_group;tma_resource_bound_group
  expr:      cpu_atom@TOPDOWN_BE_BOUND.SERIALIZATION@ / (6 * cpu_atom@CPU_CLK_UNHALTED.CORE@)
  threshold: tma_serialization > 0.1 & (tma_resource_bound > 0.2 & tma_backend_bound > 0.1)
  desc:      Counts the number of issue slots that were not consumed by the backend due to scoreboards from the instruction queue (IQ), jump execution unit (JEU), or microcode sequencer (MS).

tma_backend_bound  (scale: 100%; default group: TopdownL1)
  groups:    BvOB;TmaL1;TopdownL1;tma_L1_group
  expr:      cpu_core@topdown\-be\-bound@ / (cpu_core@topdown\-fe\-bound@ + cpu_core@topdown\-bad\-spec@ + cpu_core@topdown\-retiring@ + cpu_core@topdown\-be\-bound@) + 0 * tma_info_thread_slots
  threshold: tma_backend_bound > 0.2
  desc:      This category represents the fraction of slots where no uops are being delivered due to a lack of required resources for accepting new uops in the Backend. The Backend is the portion of the processor core where the out-of-order scheduler dispatches ready uops into their respective execution units, and once completed these uops get retired according to program order. For example, stalls due to data-cache misses or stalls due to the divider unit being overloaded are both categorized under Backend Bound. Backend Bound is further divided into two main categories: Memory Bound and Core Bound. Sample with: TOPDOWN.BACKEND_BOUND_SLOTS.

tma_bad_speculation  (scale: 100%; default group: TopdownL1)
  groups:    TmaL1;TopdownL1;tma_L1_group
  expr:      max(1 - (tma_frontend_bound + tma_backend_bound + tma_retiring), 0)
  threshold: tma_bad_speculation > 0.15
  desc:      This category represents the fraction of slots wasted due to incorrect speculation. This includes slots used to issue uops that do not eventually get retired, and slots for which the issue pipeline was blocked due to recovery from earlier incorrect speculation. For example, wasted work due to mispredicted branches is categorized under Bad Speculation; incorrect data speculation followed by Memory Ordering Nukes is another example.

tma_contested_accesses  (scale: 100%)
  groups:    BvMS;DataSharing;Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_l3_bound_group
  expr:      (cpu_core@MEM_LOAD_L3_HIT_RETIRED.XSNP_MISS@ * min(cpu_core@MEM_LOAD_L3_HIT_RETIRED.XSNP_MISS@R, 24 * tma_info_system_core_frequency) + cpu_core@MEM_LOAD_L3_HIT_RETIRED.XSNP_FWD@ * min(cpu_core@MEM_LOAD_L3_HIT_RETIRED.XSNP_FWD@R, 25 * tma_info_system_core_frequency) * (cpu_core@OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HITM@ / (cpu_core@OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HITM@ + cpu_core@OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HIT_WITH_FWD@))) * (1 + cpu_core@MEM_LOAD_RETIRED.FB_HIT@ / cpu_core@MEM_LOAD_RETIRED.L1_MISS@ / 2) / tma_info_thread_clks
  threshold: tma_contested_accesses > 0.05 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))
  desc:      Estimates the fraction of cycles the memory subsystem was handling synchronizations due to contested accesses. Contested accesses occur when data written by one Logical Processor is read by another Logical Processor on a different Physical Core. Examples of contested accesses include synchronizations such as locks, true data sharing such as modified locked variables, and false sharing. Sample with: MEM_LOAD_L3_HIT_RETIRED.XSNP_FWD;MEM_LOAD_L3_HIT_RETIRED.XSNP_MISS. Related metrics: tma_data_sharing, tma_false_sharing, tma_machine_clears, tma_remote_cache

tma_data_sharing  (scale: 100%)
  groups:    BvMS;Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_l3_bound_group
  expr:      (cpu_core@MEM_LOAD_L3_HIT_RETIRED.XSNP_NO_FWD@ * min(cpu_core@MEM_LOAD_L3_HIT_RETIRED.XSNP_NO_FWD@R, 24 * tma_info_system_core_frequency) + cpu_core@MEM_LOAD_L3_HIT_RETIRED.XSNP_FWD@ * min(cpu_core@MEM_LOAD_L3_HIT_RETIRED.XSNP_FWD@R, 24 * tma_info_system_core_frequency) * (1 - cpu_core@OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HITM@ / (cpu_core@OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HITM@ + cpu_core@OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HIT_WITH_FWD@))) * (1 + cpu_core@MEM_LOAD_RETIRED.FB_HIT@ / cpu_core@MEM_LOAD_RETIRED.L1_MISS@ / 2) / tma_info_thread_clks
  threshold: tma_data_sharing > 0.05 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))
  desc:      Estimates the fraction of cycles the memory subsystem was handling synchronizations due to data-sharing accesses. Data shared by multiple Logical Processors (even just read-shared) may cause increased access latency due to cache coherency. Excessive data sharing can drastically harm multithreaded performance. Sample with: MEM_LOAD_L3_HIT_RETIRED.XSNP_NO_FWD. Related metrics: tma_contested_accesses, tma_false_sharing, tma_machine_clears, tma_remote_cache

tma_divider  (scale: 100%)
  groups:    BvCB;TopdownL3;tma_L3_group;tma_core_bound_group
  expr:      cpu_core@ARITH.DIV_ACTIVE@ / tma_info_thread_clks
  threshold: tma_divider > 0.2 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2)
  desc:      Fraction of cycles where the Divider unit was active. Divide and square-root instructions are performed by the Divider unit and can take considerably longer latency than integer or floating-point addition, subtraction, or multiplication. Sample with: ARITH.DIVIDER_UOPS.

tma_dtlb_load  (scale: 100%)
  groups:    BvMT;MemoryTLB;TopdownL4;tma_L4_group;tma_issueTLB;tma_l1_bound_group
  expr:      cpu_core@MEM_INST_RETIRED.STLB_HIT_LOADS@ * min(cpu_core@MEM_INST_RETIRED.STLB_HIT_LOADS@R, 7) / tma_info_thread_clks + tma_load_stlb_miss
  threshold: tma_dtlb_load > 0.1 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))
  desc:      Roughly estimates the fraction of cycles where the Data TLB (DTLB) was missed by load accesses. TLBs (Translation Look-aside Buffers) are processor caches for recently used entries of the page tables that map virtual to physical addresses for the operating system. This metric approximates the potential delay of demand loads missing the first-level data TLB (assuming the worst case of back-to-back misses to different pages), including hits in the second-level TLB (STLB) as well as hardware page walks on an STLB miss. Sample with: MEM_INST_RETIRED.STLB_MISS_LOADS_PS. Related metrics: tma_dtlb_store, tma_info_bottleneck_memory_data_tlbs, tma_info_bottleneck_memory_synchronization

tma_dtlb_store  (scale: 100%)
  groups:    BvMT;MemoryTLB;TopdownL4;tma_L4_group;tma_issueTLB;tma_store_bound_group
  expr:      cpu_core@MEM_INST_RETIRED.STLB_HIT_STORES@ * min(cpu_core@MEM_INST_RETIRED.STLB_HIT_STORES@R, 7) / tma_info_thread_clks + tma_store_stlb_miss
  threshold: tma_dtlb_store > 0.05 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))
  desc:      Roughly estimates the fraction of cycles spent handling first-level data TLB store misses. As with ordinary data caching, focus on improving data locality and reducing the working-set size to reduce DTLB overhead. Additionally, consider using profile-guided optimization (PGO) to collocate frequently used data on the same page, and try larger page sizes for large amounts of frequently used data. Sample with: MEM_INST_RETIRED.STLB_MISS_STORES_PS. Related metrics: tma_dtlb_load, tma_info_bottleneck_memory_data_tlbs, tma_info_bottleneck_memory_synchronization

tma_frontend_bound  (scale: 100%; default group: TopdownL1)
  groups:    BvFB;BvIO;PGO;TmaL1;TopdownL1;tma_L1_group
  expr:      cpu_core@topdown\-fe\-bound@ / (cpu_core@topdown\-fe\-bound@ + cpu_core@topdown\-bad\-spec@ + cpu_core@topdown\-retiring@ + cpu_core@topdown\-be\-bound@) - cpu_core@INT_MISC.UOP_DROPPING@ / tma_info_thread_slots
  threshold: tma_frontend_bound > 0.15
  desc:      This category represents the fraction of slots where the processor's Frontend undersupplies its Backend. The Frontend is the first part of the processor core, responsible for fetching operations that are executed later by the Backend. Within the Frontend, a branch predictor predicts the next address to fetch, cache lines are fetched from the memory subsystem, parsed into instructions, and lastly decoded into micro-operations (uops). Ideally the Frontend can issue Pipeline_Width uops every cycle to the Backend. Frontend Bound denotes unutilized issue slots when there is no Backend stall, i.e. bubbles where the Frontend delivered no uops while the Backend could have accepted them. For example, stalls due to instruction-cache misses are categorized under Frontend Bound. Sample with: FRONTEND_RETIRED.LATENCY_GE_4_PS.
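On the cpu_core side the level-1 slots do not come from a cycles event: the four topdown-* pseudo-events are read together and each category is normalized by their sum, as in the tma_backend_bound, tma_bad_speculation and tma_frontend_bound entries above. A reduced sketch of that normalization, ignoring the INT_MISC.UOP_DROPPING adjustment and using hypothetical readings:

    # Sketch: P-core level-1 topdown from the four topdown-* slot counters.
    def core_l1(td):
        total = (td["topdown-fe-bound"] + td["topdown-bad-spec"]
                 + td["topdown-retiring"] + td["topdown-be-bound"])
        fe  = td["topdown-fe-bound"] / total
        be  = td["topdown-be-bound"] / total
        ret = td["topdown-retiring"] / total
        bad = max(1 - (fe + be + ret), 0)   # mirrors the tma_bad_speculation expression
        return {"frontend_bound": fe, "bad_speculation": bad,
                "backend_bound": be, "retiring": ret}

    print(core_l1({"topdown-fe-bound": 2e9, "topdown-bad-spec": 5e8,
                   "topdown-retiring": 4e9, "topdown-be-bound": 3e9}))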
tma_info_bottleneck_memory_data_tlbs
  groups:    BvMT;Mem;MemoryTLB;Offcore;tma_issueTLB
  expr:      100 * (tma_memory_bound * (tma_l1_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_dtlb_load / (tma_dtlb_load + tma_fb_full + tma_l1_hit_latency + tma_lock_latency + tma_split_loads + tma_store_fwd_blk)) + tma_memory_bound * (tma_store_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_dtlb_store / (tma_dtlb_store + tma_false_sharing + tma_split_stores + tma_store_latency + tma_streaming_stores)))
  threshold: tma_info_bottleneck_memory_data_tlbs > 20
  desc:      Total pipeline cost of Memory Address Translation related bottlenecks (data-side TLBs). Related metrics: tma_dtlb_load, tma_dtlb_store, tma_info_bottleneck_memory_synchronization

tma_info_system_dram_bw_use
  groups:    HPC;MemOffcore;MemoryBW;SoC;tma_issueBW
  expr:      64 * (UNC_HAC_ARB_TRK_REQUESTS.ALL + UNC_HAC_ARB_COH_TRK_REQUESTS.ALL) / 1e9 / duration_time
  desc:      Average external memory bandwidth use for reads and writes [GB / sec]. Related metrics: tma_fb_full, tma_info_bottleneck_cache_memory_bandwidth, tma_mem_bandwidth, tma_sq_full

tma_l3_hit_latency  (scale: 100%)
  groups:    BvML;MemoryLat;TopdownL4;tma_L4_group;tma_issueLat;tma_l3_bound_group
  expr:      cpu_core@MEM_LOAD_RETIRED.L3_HIT@ * min(cpu_core@MEM_LOAD_RETIRED.L3_HIT@R, 9 * tma_info_system_core_frequency) * (1 + cpu_core@MEM_LOAD_RETIRED.FB_HIT@ / cpu_core@MEM_LOAD_RETIRED.L1_MISS@ / 2) / tma_info_thread_clks
  threshold: tma_l3_hit_latency > 0.1 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))
  desc:      Estimates the fraction of cycles with demand load accesses that hit the L3 cache under unloaded scenarios (possibly L3-latency limited). Avoiding private cache misses (i.e. L2 misses/L3 hits) will improve latency, reduce contention with sibling physical cores and increase performance. Note the value of this node may overlap with its siblings. Sample with: MEM_LOAD_RETIRED.L3_HIT_PS. Related metrics: tma_info_bottleneck_cache_memory_latency, tma_mem_latency

tma_load_stlb_hit  (scale: 100%)
  groups:    MemoryTLB;TopdownL5;tma_L5_group;tma_dtlb_load_group
  expr:      max(0, tma_dtlb_load - tma_load_stlb_miss)
  threshold: tma_load_stlb_hit > 0.05 & (tma_dtlb_load > 0.1 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)))
  desc:      Roughly estimates the fraction of cycles where the (first-level) DTLB was missed by load accesses that later hit in the second-level TLB (STLB).

tma_lock_latency  (scale: 100%)
  groups:    Offcore;TopdownL4;tma_L4_group;tma_issueRFO;tma_l1_bound_group
  expr:      cpu_core@MEM_INST_RETIRED.LOCK_LOADS@ * cpu_core@MEM_INST_RETIRED.LOCK_LOADS@R / tma_info_thread_clks
  threshold: tma_lock_latency > 0.2 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))
  desc:      Fraction of cycles the CPU spent handling cache misses due to lock operations. Due to the microarchitecture's handling of locks, they are classified as L1_Bound regardless of what memory source satisfied them. Sample with: MEM_INST_RETIRED.LOCK_LOADS. Related metrics: tma_store_latency

tma_mem_bandwidth  (scale: 100%)
  groups:    BvMS;MemoryBW;Offcore;TopdownL4;tma_L4_group;tma_dram_bound_group;tma_issueBW
  expr:      min(cpu_core@CPU_CLK_UNHALTED.THREAD@, cpu_core@OFFCORE_REQUESTS_OUTSTANDING.DATA_RD\,cmask\=4@) / tma_info_thread_clks
  threshold: tma_mem_bandwidth > 0.2 & (tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))
  desc:      Estimates the fraction of cycles where the core's performance was likely hurt by approaching the bandwidth limits of external memory - DRAM ([SPR-HBM] and/or HBM). The underlying heuristic assumes that similar off-core traffic is generated by all IA cores. This metric does not aggregate non-data-read requests by this logical processor, requests from other IA Logical Processors/Physical Cores/sockets, or other non-IA devices like a GPU; hence the maximum external memory bandwidth limits may or may not be approached when this metric is flagged (see Uncore counters for that). Related metrics: tma_fb_full, tma_info_bottleneck_cache_memory_bandwidth, tma_info_system_dram_bw_use, tma_sq_full

tma_ports_utilized_0  (scale: 100%)
  groups:    PortsUtil;TopdownL4;tma_L4_group;tma_ports_utilization_group
  expr:      max((cpu_core@EXE_ACTIVITY.EXE_BOUND_0_PORTS@ + max(cpu_core@RS.EMPTY_RESOURCE@ - cpu_core@RESOURCE_STALLS.SCOREBOARD@, 0)) / tma_info_thread_clks, 1) * (cpu_core@CYCLE_ACTIVITY.STALLS_TOTAL@ - cpu_core@EXE_ACTIVITY.BOUND_ON_LOADS@) / tma_info_thread_clks
  threshold: tma_ports_utilized_0 > 0.2 & (tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))
  desc:      Fraction of cycles the CPU executed no uops on any execution port (Logical Processor cycles since ICL, Physical Core cycles otherwise). Long-latency instructions like divides may contribute to this metric.

tma_retiring  (scale: 100%; default group: TopdownL1)
  groups:    BvUW;TmaL1;TopdownL1;tma_L1_group
  expr:      cpu_core@topdown\-retiring@ / (cpu_core@topdown\-fe\-bound@ + cpu_core@topdown\-bad\-spec@ + cpu_core@topdown\-retiring@ + cpu_core@topdown\-be\-bound@) + 0 * tma_info_thread_slots
  threshold: tma_retiring > 0.7 | tma_heavy_operations > 0.1
  desc:      This category represents the fraction of slots utilized by useful work, i.e. issued uops that eventually get retired. Ideally, all pipeline slots would be attributed to the Retiring category; Retiring of 100% would indicate the maximum Pipeline_Width throughput was achieved. Maximizing Retiring typically increases the Instructions-Per-Cycle (see IPC metric). Note that a high Retiring value does not necessarily mean there is no room for more performance: for example, Heavy Operations or Microcode Assists are categorized under Retiring, often indicate suboptimal performance, and can often be optimized or avoided. Sample with: UOPS_RETIRED.SLOTS.

tma_split_loads  (scale: 100%)
  groups:    TopdownL4;tma_L4_group;tma_l1_bound_group
  expr:      cpu_core@MEM_INST_RETIRED.SPLIT_LOADS@ * min(cpu_core@MEM_INST_RETIRED.SPLIT_LOADS@R, tma_info_memory_load_miss_real_latency) / tma_info_thread_clks
  threshold: tma_split_loads > 0.2 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))
  desc:      Estimates the fraction of cycles handling memory load split accesses - loads that cross a 64-byte cache-line boundary. Sample with: MEM_INST_RETIRED.SPLIT_LOADS_PS.

tma_split_stores  (scale: 100%)
  groups:    TopdownL4;tma_L4_group;tma_issueSpSt;tma_store_bound_group
  expr:      cpu_core@MEM_INST_RETIRED.SPLIT_STORES@ * min(cpu_core@MEM_INST_RETIRED.SPLIT_STORES@R, 1) / tma_info_thread_clks
  threshold: tma_split_stores > 0.2 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))
  desc:      Rate of split store accesses. Consider aligning your data to the 64-byte cache-line granularity. Sample with: MEM_INST_RETIRED.SPLIT_STORES_PS. Related metrics: tma_port_4

tma_store_stlb_hit  (scale: 100%)
  groups:    MemoryTLB;TopdownL5;tma_L5_group;tma_dtlb_store_group
  expr:      max(0, tma_dtlb_store - tma_store_stlb_miss)
  threshold: tma_store_stlb_hit > 0.05 & (tma_dtlb_store > 0.05 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)))
  desc:      Roughly estimates the fraction of cycles where the TLB was missed by store accesses that hit in the second-level TLB (STLB).

tma_info_system_mem_read_latency
  groups:    Mem;MemoryLat;SoC
  expr:      (UNC_ARB_TRK_OCCUPANCY.RD + UNC_ARB_DAT_OCCUPANCY.RD) / UNC_ARB_TRK_REQUESTS.RD
  desc:      Average latency of a data read request to external memory (in nanoseconds). Accounts for demand loads and L1/L2 prefetches. ([RKL+] memory-controller only)

tma_dram_bound  (scale: 100%)
  groups:    MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_memory_bound_group
  expr:      (1 - MEM_LOAD_UOPS_RETIRED.LLC_HIT / (MEM_LOAD_UOPS_RETIRED.LLC_HIT + 7 * MEM_LOAD_UOPS_MISC_RETIRED.LLC_MISS)) * CYCLE_ACTIVITY.STALLS_L2_PENDING / tma_info_thread_clks
  threshold: tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)
  desc:      Estimates how often the CPU was stalled on accesses to external memory (DRAM) by loads. Better caching can improve the latency and increase performance. Sample with: MEM_LOAD_UOPS_RETIRED.L3_MISS_PS.

tma_info_system_dram_bw_use
  groups:    HPC;MemOffcore;MemoryBW;SoC;tma_issueBW
  expr:      64 * (UNC_ARB_TRK_REQUESTS.ALL + UNC_ARB_COH_TRK_REQUESTS.ALL) / 1e6 / duration_time / 1e3
  desc:      Average external memory bandwidth use for reads and writes [GB / sec]. Related metrics: tma_mem_bandwidth

tma_l3_bound  (scale: 100%)
  groups:    CacheHits;MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_memory_bound_group
  expr:      MEM_LOAD_UOPS_RETIRED.LLC_HIT / (MEM_LOAD_UOPS_RETIRED.LLC_HIT + 7 * MEM_LOAD_UOPS_MISC_RETIRED.LLC_MISS) * CYCLE_ACTIVITY.STALLS_L2_PENDING / tma_info_thread_clks
  threshold: tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)
  desc:      Estimates how often the CPU was stalled due to load accesses to the L3 cache, or contended with a sibling Core. Avoiding cache misses (i.e. L2 misses/L3 hits) can improve the latency and increase performance. Sample with: MEM_LOAD_UOPS_RETIRED.L3_HIT_PS.
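Several entries above weight an event count by min(<event>R, cap), e.g. min(cpu_core@MEM_LOAD_RETIRED.L3_HIT@R, 9 * tma_info_system_core_frequency): an observed per-event latency is clamped to a fixed worst-case cost before being charged as stall cycles. A sketch of that pattern, taking the @R reading as an average per-event latency in cycles (an interpretation, not stated in the table itself):

    # Sketch: charge N events at a clamped per-event cost, as a fraction of thread clocks.
    def clamped_stall_fraction(event_count, avg_latency_cycles, cap_cycles, thread_clks):
        return event_count * min(avg_latency_cycles, cap_cycles) / thread_clks

    # e.g. L3 hits with a cap of 9 cycles-per-GHz at 3.0 GHz core frequency:
    print(clamped_stall_fraction(2e7, avg_latency_cycles=31.0,
                                 cap_cycles=9 * 3.0, clamp := None) if False else
          clamped_stall_fraction(2e7, 31.0, 9 * 3.0, 1e9))  # latency clamped to 27 cycles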
tma_dram_bound  (scale: 100%)
  groups:    MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_memory_bound_group
  expr:      (MEMORY_ACTIVITY.STALLS_L3_MISS / tma_info_thread_clks - tma_pmm_bound if #has_pmem > 0 else MEMORY_ACTIVITY.STALLS_L3_MISS / tma_info_thread_clks)
  threshold: tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)
  desc:      Estimates how often the CPU was stalled on accesses to external memory (DRAM) by loads. Better caching can improve the latency and increase performance. Sample with: MEM_LOAD_RETIRED.L3_MISS_PS.

tma_heavy_operations  (scale: 100%; default group: TopdownL2;Default)
  groups:    Default;Retire;TmaL2;TopdownL2;tma_L2_group;tma_retiring_group
  expr:      topdown\-heavy\-ops / (topdown\-fe\-bound + topdown\-bad\-spec + topdown\-retiring + topdown\-be\-bound) + 0 * tma_info_thread_slots
  threshold: tma_heavy_operations > 0.1
  desc:      Fraction of slots where the CPU was retiring heavy-weight operations -- instructions that require two or more uops, or micro-coded sequences. This highly correlates with the uop length of these instructions/sequences. ([ICL+] Note this may overcount due to approximation using indirect events; [ADL+] .) Sample with: UOPS_RETIRED.HEAVY.

tma_info_bottleneck_cache_memory_bandwidth
  groups:    BvMB;Mem;MemoryBW;Offcore;tma_issueBW
  expr:      100 * (tma_memory_bound * (tma_dram_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound)) * (tma_mem_bandwidth / (tma_mem_bandwidth + tma_mem_latency)) + tma_memory_bound * (tma_l3_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound)) * (tma_sq_full / (tma_contested_accesses + tma_data_sharing + tma_l3_hit_latency + tma_sq_full)) + tma_memory_bound * (tma_l1_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound)) * (tma_fb_full / (tma_dtlb_load + tma_fb_full + tma_l1_hit_latency + tma_lock_latency + tma_split_loads + tma_store_fwd_blk)))
  threshold: tma_info_bottleneck_cache_memory_bandwidth > 20
  desc:      Total pipeline cost of external Memory- or Cache-Bandwidth related bottlenecks. Related metrics: tma_fb_full, tma_info_system_dram_bw_use, tma_mem_bandwidth, tma_sq_full

tma_info_bottleneck_cache_memory_latency
  groups:    BvML;Mem;MemoryLat;Offcore;tma_issueLat
  expr:      100 * (tma_memory_bound * (tma_dram_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound)) * (tma_mem_latency / (tma_mem_bandwidth + tma_mem_latency)) + tma_memory_bound * (tma_l3_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound)) * (tma_l3_hit_latency / (tma_contested_accesses + tma_data_sharing + tma_l3_hit_latency + tma_sq_full)) + tma_memory_bound * tma_l2_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound) + tma_memory_bound * (tma_store_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound)) * (tma_store_latency / (tma_dtlb_store + tma_false_sharing + tma_split_stores + tma_store_latency + tma_streaming_stores)) + tma_memory_bound * (tma_l1_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound)) * (tma_l1_hit_latency / (tma_dtlb_load + tma_fb_full + tma_l1_hit_latency + tma_lock_latency + tma_split_loads + tma_store_fwd_blk)))
  threshold: tma_info_bottleneck_cache_memory_latency > 20
  desc:      Total pipeline cost of external Memory- or Cache-Latency related bottlenecks. Related metrics: tma_l3_hit_latency, tma_mem_latency

tma_info_bottleneck_memory_data_tlbs
  groups:    BvMT;Mem;MemoryTLB;Offcore;tma_issueTLB
  expr:      100 * (tma_memory_bound * (tma_l1_bound / max(tma_memory_bound, tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound)) * (tma_dtlb_load / max(tma_l1_bound, tma_dtlb_load + tma_fb_full + tma_l1_hit_latency + tma_lock_latency + tma_split_loads + tma_store_fwd_blk)) + tma_memory_bound * (tma_store_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound)) * (tma_dtlb_store / (tma_dtlb_store + tma_false_sharing + tma_split_stores + tma_store_latency + tma_streaming_stores)))
  threshold: tma_info_bottleneck_memory_data_tlbs > 20
  desc:      Total pipeline cost of Memory Address Translation related bottlenecks (data-side TLBs). Related metrics: tma_dtlb_load, tma_dtlb_store, tma_info_bottleneck_memory_synchronization

tma_ms_switches  (scale: 100%)
  groups:    FetchLat;MicroSeq;TopdownL3;tma_L3_group;tma_fetch_latency_group;tma_issueMC;tma_issueMS;tma_issueMV;tma_issueSO
  expr:      3 * cpu@UOPS_RETIRED.MS\,cmask\=1\,edge@ / (UOPS_RETIRED.SLOTS / UOPS_ISSUED.ANY) / tma_info_thread_clks
  threshold: tma_ms_switches > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15)
  desc:      Estimates the fraction of cycles when the CPU was stalled due to switches of uop delivery to the Microcode Sequencer (MS). Commonly used instructions are optimized for delivery by the DSB (decoded i-cache) or MITE (legacy instruction decode) pipelines. Certain operations cannot be handled natively by the execution pipeline and must be performed by microcode (small programs injected into the execution stream). Switching to the MS too often can negatively impact performance. The MS is designated to deliver long uop flows required by CISC instructions like CPUID, or uncommon conditions like Floating Point Assists when dealing with denormals. Sample with: FRONTEND_RETIRED.MS_FLOWS. Related metrics: tma_clears_resteers, tma_info_bottleneck_irregular_overhead, tma_l1_bound, tma_machine_clears, tma_microcode_sequencer, tma_mixing_vectors, tma_serializing_operation

tma_pmm_bound  (scale: 100%)
  groups:    MemoryBound;Server;TmaL3mem;TopdownL3;tma_L3_group;tma_memory_bound_group
  expr:      (((1 - (19 * (MEM_LOAD_L3_MISS_RETIRED.REMOTE_DRAM * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS)) + 10 * (MEM_LOAD_L3_MISS_RETIRED.LOCAL_DRAM * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) + MEM_LOAD_L3_MISS_RETIRED.REMOTE_FWD * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) + MEM_LOAD_L3_MISS_RETIRED.REMOTE_HITM * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS))) / (19 * (MEM_LOAD_L3_MISS_RETIRED.REMOTE_DRAM * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS)) + 10 * (MEM_LOAD_L3_MISS_RETIRED.LOCAL_DRAM * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) + MEM_LOAD_L3_MISS_RETIRED.REMOTE_FWD * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) + MEM_LOAD_L3_MISS_RETIRED.REMOTE_HITM * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS)) + (25 * (MEM_LOAD_RETIRED.LOCAL_PMM * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS)) + 33 * (MEM_LOAD_L3_MISS_RETIRED.REMOTE_PMM * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS))))) * (MEMORY_ACTIVITY.STALLS_L3_MISS / tma_info_thread_clks) if 1e6 * (MEM_LOAD_L3_MISS_RETIRED.REMOTE_PMM + MEM_LOAD_RETIRED.LOCAL_PMM) > MEM_LOAD_RETIRED.L1_MISS else 0) if #has_pmem > 0 else 0)
  threshold: tma_pmm_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)
  desc:      Roughly estimates (based on idle latencies) how often the CPU was stalled on accesses to external 3D-XPoint (Crystal Ridge, a.k.a. IXP) memory by loads; PMM stands for Persistent Memory Module.
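The tma_info_bottleneck_* expressions above all share one shape: walk the TMA tree top-down, multiply each level's value by its share among its siblings, and scale by 100 to get a "total pipeline cost" percentage. A reduced two-level sketch of that composition (metric values hypothetical):

    # Sketch: cost = 100 * parent * (child / sibling_sum) * (leaf / leaf_sum),
    # the pattern used by tma_info_bottleneck_memory_data_tlbs and friends.
    def bottleneck_cost(memory_bound, l1_bound, l1_siblings_sum, dtlb_load, leaf_sum):
        return 100 * memory_bound * (l1_bound / l1_siblings_sum) * (dtlb_load / leaf_sum)

    # 30% memory bound, L1 is half of the memory children, DTLB is a fifth of L1's children:
    print(bottleneck_cost(0.30, 0.10, 0.20, 0.02, 0.10))  # 3.0 (% of pipeline slots)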
iio_bandwidth_read  (unit: 1MB/s)
  expr:      UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.ALL_PARTS * 4 / 1e6 / duration_time
  desc:      Bandwidth observed by the integrated I/O traffic controller (IIO) of IO reads that are initiated by end-device controllers requesting memory from the CPU.

numa_reads_addressed_to_local_dram  (scale: 100%)
  expr:      (UNC_CHA_TOR_INSERTS.IA_MISS_DRD_OPT_LOCAL + UNC_CHA_TOR_INSERTS.IA_MISS_DRD_OPT_PREF_LOCAL) / (UNC_CHA_TOR_INSERTS.IA_MISS_DRD_OPT_LOCAL + UNC_CHA_TOR_INSERTS.IA_MISS_DRD_OPT_PREF_LOCAL + UNC_CHA_TOR_INSERTS.IA_MISS_DRD_OPT_REMOTE + UNC_CHA_TOR_INSERTS.IA_MISS_DRD_OPT_PREF_REMOTE)
  desc:      Memory reads that miss the last level cache (LLC) addressed to local DRAM, as a percentage of total memory read accesses; does not include LLC prefetches.

numa_reads_addressed_to_remote_dram  (scale: 100%)
  expr:      (UNC_CHA_TOR_INSERTS.IA_MISS_DRD_OPT_REMOTE + UNC_CHA_TOR_INSERTS.IA_MISS_DRD_OPT_PREF_REMOTE) / (UNC_CHA_TOR_INSERTS.IA_MISS_DRD_OPT_LOCAL + UNC_CHA_TOR_INSERTS.IA_MISS_DRD_OPT_PREF_LOCAL + UNC_CHA_TOR_INSERTS.IA_MISS_DRD_OPT_REMOTE + UNC_CHA_TOR_INSERTS.IA_MISS_DRD_OPT_PREF_REMOTE)
  desc:      Memory reads that miss the last level cache (LLC) addressed to remote DRAM, as a percentage of total memory read accesses; does not include LLC prefetches.

tma_contested_accesses  (scale: 100%)
  groups:    BvMS;DataSharing;Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_l3_bound_group
  expr:      (18.5 * tma_info_system_core_frequency * MEM_LOAD_L3_HIT_RETIRED.XSNP_HITM + 16.5 * tma_info_system_core_frequency * MEM_LOAD_L3_HIT_RETIRED.XSNP_MISS) * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS / 2) / tma_info_thread_clks
  threshold: tma_contested_accesses > 0.05 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))
  desc:      Estimates the fraction of cycles the memory subsystem was handling synchronizations due to contested accesses: data written by one Logical Processor read by another Logical Processor on a different Physical Core, e.g. locks, true sharing of modified locked variables, and false sharing. Sample with: MEM_LOAD_L3_HIT_RETIRED.XSNP_HITM_PS;MEM_LOAD_L3_HIT_RETIRED.XSNP_MISS_PS. Related metrics: tma_data_sharing, tma_false_sharing, tma_machine_clears, tma_remote_cache

tma_data_sharing  (scale: 100%)
  groups:    BvMS;Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_l3_bound_group
  expr:      16.5 * tma_info_system_core_frequency * MEM_LOAD_L3_HIT_RETIRED.XSNP_HIT * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS / 2) / tma_info_thread_clks
  threshold: tma_data_sharing > 0.05 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))
  desc:      Estimates the fraction of cycles the memory subsystem was handling synchronizations due to data-sharing accesses. Data shared by multiple Logical Processors (even just read-shared) may cause increased access latency due to cache coherency; excessive data sharing can drastically harm multithreaded performance. Sample with: MEM_LOAD_L3_HIT_RETIRED.XSNP_HIT_PS. Related metrics: tma_contested_accesses, tma_false_sharing, tma_machine_clears, tma_remote_cache

tma_false_sharing  (scale: 100%)
  groups:    BvMS;DataSharing;Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_store_bound_group
  expr:      22 * tma_info_system_core_frequency * OFFCORE_RESPONSE.DEMAND_RFO.L3_HIT.SNOOP_HITM / tma_info_thread_clks
  threshold: tma_false_sharing > 0.05 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))
  desc:      Roughly estimates how often the CPU was handling synchronizations due to False Sharing, a multithreading hiccup where multiple Logical Processors contend on different data elements mapped into the same cache line. Sample with: MEM_LOAD_L3_HIT_RETIRED.XSNP_HITM_PS;OFFCORE_RESPONSE.DEMAND_RFO.L3_HIT.SNOOP_HITM. Related metrics: tma_contested_accesses, tma_data_sharing, tma_machine_clears, tma_remote_cache

tma_fp_vector  (scale: 100%)
  groups:    Compute;Flops;TopdownL4;tma_L4_group;tma_fp_arith_group;tma_issue2P
  expr:      FP_ARITH_INST_RETIRED.VECTOR / UOPS_RETIRED.RETIRE_SLOTS
  threshold: tma_fp_vector > 0.1 & (tma_fp_arith > 0.2 & tma_light_operations > 0.6)
  desc:      Approximates the fraction of arithmetic floating-point (FP) vector uops the CPU has retired, aggregated across all vector widths. May overcount due to FMA double counting. Related metrics: tma_fp_scalar, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_port_0, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2

tma_info_bottleneck_cache_memory_latency
  groups:    BvML;Mem;MemoryLat;Offcore;tma_issueLat
  expr:      100 * (tma_memory_bound * (tma_dram_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_mem_latency / (tma_mem_bandwidth + tma_mem_latency)) + tma_memory_bound * (tma_l3_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_l3_hit_latency / (tma_contested_accesses + tma_data_sharing + tma_l3_hit_latency + tma_sq_full)) + tma_memory_bound * tma_l2_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound) + tma_memory_bound * (tma_store_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_store_latency / (tma_dtlb_store + tma_false_sharing + tma_split_stores + tma_store_latency)) + tma_memory_bound * (tma_l1_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_l1_hit_latency / (tma_4k_aliasing + tma_dtlb_load + tma_fb_full + tma_l1_hit_latency + tma_lock_latency + tma_split_loads + tma_store_fwd_blk)))
  threshold: tma_info_bottleneck_cache_memory_latency > 20
  desc:      Total pipeline cost of external Memory- or Cache-Latency related bottlenecks. Related metrics: tma_l3_hit_latency, tma_mem_latency

tma_info_bottleneck_memory_data_tlbs
  groups:    BvMT;Mem;MemoryTLB;Offcore;tma_issueTLB
  expr:      100 * (tma_memory_bound * (tma_l1_bound / max(tma_memory_bound, tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_dtlb_load / max(tma_l1_bound, tma_4k_aliasing + tma_dtlb_load + tma_fb_full + tma_l1_hit_latency + tma_lock_latency + tma_split_loads + tma_store_fwd_blk)) + tma_memory_bound * (tma_store_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_dtlb_store / (tma_dtlb_store + tma_false_sharing + tma_split_stores + tma_store_latency)))
  threshold: tma_info_bottleneck_memory_data_tlbs > 20
  desc:      Total pipeline cost of Memory Address Translation related bottlenecks (data-side TLBs). Related metrics: tma_dtlb_load, tma_dtlb_store, tma_info_bottleneck_memory_synchronization

tma_info_bottleneck_memory_synchronization
  groups:    BvMS;Mem;Offcore;tma_issueTLB
  expr:      100 * (tma_memory_bound * (tma_l3_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound) * (tma_contested_accesses + tma_data_sharing) / (tma_contested_accesses + tma_data_sharing + tma_l3_hit_latency + tma_sq_full) + tma_store_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound) * tma_false_sharing / (tma_dtlb_store + tma_false_sharing + tma_split_stores + tma_store_latency - tma_store_latency)) + tma_machine_clears * (1 - tma_other_nukes / tma_other_nukes))
  threshold: tma_info_bottleneck_memory_synchronization > 10
  desc:      Total pipeline cost of Memory Synchronization related bottlenecks (data transfers and coherency updates across processors). Related metrics: tma_dtlb_load, tma_dtlb_store, tma_info_bottleneck_memory_data_tlbs

tma_info_core_flopc
  groups:    Flops;Ret
  expr:      (FP_ARITH_INST_RETIRED.SCALAR + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + 4 * FP_ARITH_INST_RETIRED.4_FLOPS + 8 * FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE) / tma_info_core_core_clks
  desc:      Floating Point Operations Per Cycle.

tma_info_inst_mix_iparith
  groups:    Flops;InsType
  expr:      INST_RETIRED.ANY / (FP_ARITH_INST_RETIRED.SCALAR + FP_ARITH_INST_RETIRED.VECTOR)
  threshold: tma_info_inst_mix_iparith < 10
  desc:      Instructions per FP Arithmetic instruction (a lower number means a higher occurrence rate). Values < 1 are possible due to intentional FMA double counting. Approximated prior to BDW.

tma_info_inst_mix_ipflop
  groups:    Flops;InsType
  expr:      INST_RETIRED.ANY / (FP_ARITH_INST_RETIRED.SCALAR + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + 4 * FP_ARITH_INST_RETIRED.4_FLOPS + 8 * FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE)
  threshold: tma_info_inst_mix_ipflop < 10
  desc:      Instructions per Floating Point (FP) Operation (a lower number means a higher occurrence rate).

tma_info_system_gflops
  groups:    Cor;Flops;HPC
  expr:      (FP_ARITH_INST_RETIRED.SCALAR + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + 4 * FP_ARITH_INST_RETIRED.4_FLOPS + 8 * FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE) / 1e9 / duration_time
  desc:      Giga Floating Point Operations Per Second. Aggregate across all supported options of: FP precisions, scalar and vector instructions, vector width.
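The core-side FLOP expressions just above weight each retired FP-arithmetic instruction by the number of FP operations it performs: 1 for scalar, 2 for 128-bit packed double, 4 for the 4_FLOPS event group, 8 for 256-bit packed single. A small sketch of that weighted sum (counter values hypothetical):

    # Sketch: weighted FLOP count and GFLOPS, mirroring tma_info_system_gflops.
    def flops_retired(c):
        return (c["FP_ARITH_INST_RETIRED.SCALAR"]
                + 2 * c["FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE"]
                + 4 * c["FP_ARITH_INST_RETIRED.4_FLOPS"]
                + 8 * c["FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE"])

    def gflops(c, elapsed_s):
        return flops_retired(c) / 1e9 / elapsed_s

    demo = {"FP_ARITH_INST_RETIRED.SCALAR": 1e9,
            "FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE": 2e8,
            "FP_ARITH_INST_RETIRED.4_FLOPS": 5e8,
            "FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE": 1e8}
    print(gflops(demo, 1.0))  # ~4.2 GFLOPS for these made-up counts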
tma_info_system_mem_parallel_reads
  groups:    Mem;MemoryBW;SoC
  expr:      UNC_ARB_TRK_OCCUPANCY.DATA_READ / UNC_ARB_TRK_OCCUPANCY.DATA_READ@cmask\=1@
  desc:      Average number of parallel data read requests to external memory. Accounts for demand loads and L1/L2 prefetches.

tma_info_system_mem_read_latency
  groups:    Mem;MemoryLat;SoC
  expr:      1e9 * (UNC_ARB_TRK_OCCUPANCY.DATA_READ / UNC_ARB_TRK_REQUESTS.DATA_READ) / (tma_info_system_socket_clks / duration_time)
  desc:      Average latency of a data read request to external memory (in nanoseconds). Accounts for demand loads and L1/L2 prefetches. ([RKL+] memory-controller only)

tma_l3_hit_latency  (scale: 100%)
  groups:    BvML;MemoryLat;TopdownL4;tma_L4_group;tma_issueLat;tma_l3_bound_group
  expr:      6.5 * tma_info_system_core_frequency * (MEM_LOAD_RETIRED.L3_HIT * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS / 2)) / tma_info_thread_clks
  threshold: tma_l3_hit_latency > 0.1 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))
  desc:      Estimates the fraction of cycles with demand load accesses that hit the L3 cache under unloaded scenarios (possibly L3-latency limited). Avoiding private cache misses (i.e. L2 misses/L3 hits) will improve latency, reduce contention with sibling physical cores and increase performance. Note the value of this node may overlap with its siblings. Sample with: MEM_LOAD_RETIRED.L3_HIT_PS. Related metrics: tma_info_bottleneck_cache_memory_latency, tma_mem_latency

tma_lock_latency  (scale: 100%)
  groups:    Offcore;TopdownL4;tma_L4_group;tma_issueRFO;tma_l1_bound_group
  expr:      (12 * max(0, MEM_INST_RETIRED.LOCK_LOADS - L2_RQSTS.ALL_RFO) + MEM_INST_RETIRED.LOCK_LOADS / MEM_INST_RETIRED.ALL_STORES * (9 * L2_RQSTS.RFO_HIT + min(CPU_CLK_UNHALTED.THREAD, OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_RFO))) / tma_info_thread_clks
  threshold: tma_lock_latency > 0.2 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))
  desc:      Fraction of cycles the CPU spent handling cache misses due to lock operations. Due to the microarchitecture's handling of locks, they are classified as L1_Bound regardless of what memory source satisfied them. Sample with: MEM_INST_RETIRED.LOCK_LOADS. Related metrics: tma_store_latency

tma_store_latency  (scale: 100%)
  groups:    BvML;MemoryLat;Offcore;TopdownL4;tma_L4_group;tma_issueRFO;tma_issueSL;tma_store_bound_group
  expr:      (L2_RQSTS.RFO_HIT * 9 * (1 - MEM_INST_RETIRED.LOCK_LOADS / MEM_INST_RETIRED.ALL_STORES) + (1 - MEM_INST_RETIRED.LOCK_LOADS / MEM_INST_RETIRED.ALL_STORES) * min(CPU_CLK_UNHALTED.THREAD, OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_RFO)) / tma_info_thread_clks
  threshold: tma_store_latency > 0.1 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))
  desc:      Estimates the fraction of cycles the CPU spent handling L1D store misses. Store accesses usually have less impact on out-of-order core performance; however, holding resources for a longer time can lead to undesired implications (e.g. contention on L1D fill-buffer entries - see FB_Full). Related metrics: tma_fb_full, tma_lock_latency

llc_data_read_demand_plus_prefetch_miss_latency  (unit: 1ns)
  expr:      1e9 * (cha@UNC_CHA_TOR_OCCUPANCY.IA_MISS\,config1\=0x40433@ / cha@UNC_CHA_TOR_INSERTS.IA_MISS\,config1\=0x40433@) / (UNC_CHA_CLOCKTICKS / (#num_cores / #num_packages * #num_packages)) * duration_time
  desc:      Average latency of a last level cache (LLC) demand and prefetch data read miss (read memory access), in nanoseconds.

llc_data_read_demand_plus_prefetch_miss_latency_for_local_requests  (unit: 1ns)
  expr:      1e9 * (cha@UNC_CHA_TOR_OCCUPANCY.IA_MISS\,config1\=0x40432@ / cha@UNC_CHA_TOR_INSERTS.IA_MISS\,config1\=0x40432@) / (UNC_CHA_CLOCKTICKS / (#num_cores / #num_packages * #num_packages)) * duration_time
  desc:      Average latency of a last level cache (LLC) demand and prefetch data read miss addressed to local memory, in nanoseconds.

llc_data_read_demand_plus_prefetch_miss_latency_for_remote_requests  (unit: 1ns)
  expr:      1e9 * (cha@UNC_CHA_TOR_OCCUPANCY.IA_MISS\,config1\=0x40431@ / cha@UNC_CHA_TOR_INSERTS.IA_MISS\,config1\=0x40431@) / (UNC_CHA_CLOCKTICKS / (#num_cores / #num_packages * #num_packages)) * duration_time
  desc:      Average latency of a last level cache (LLC) demand and prefetch data read miss addressed to remote memory, in nanoseconds.

tma_contested_accesses  (scale: 100%)
  groups:    BvMS;DataSharing;Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_l3_bound_group
  expr:      (44 * tma_info_system_core_frequency * (MEM_LOAD_L3_HIT_RETIRED.XSNP_HITM * (OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_HIT.HITM_OTHER_CORE / (OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_HIT.HITM_OTHER_CORE + OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_HIT.SNOOP_HIT_WITH_FWD))) + 44 * tma_info_system_core_frequency * MEM_LOAD_L3_HIT_RETIRED.XSNP_MISS) * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS / 2) / tma_info_thread_clks
  threshold: tma_contested_accesses > 0.05 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))
  desc:      Estimates the fraction of cycles the memory subsystem was handling synchronizations due to contested accesses: data written by one Logical Processor read by another Logical Processor on a different Physical Core, e.g. locks, true sharing of modified locked variables, and false sharing. Sample with: MEM_LOAD_L3_HIT_RETIRED.XSNP_HITM_PS;MEM_LOAD_L3_HIT_RETIRED.XSNP_MISS_PS. Related metrics: tma_data_sharing, tma_false_sharing, tma_machine_clears, tma_remote_cache

tma_data_sharing  (scale: 100%)
  groups:    BvMS;Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_l3_bound_group
  expr:      44 * tma_info_system_core_frequency * (MEM_LOAD_L3_HIT_RETIRED.XSNP_HIT + MEM_LOAD_L3_HIT_RETIRED.XSNP_HITM * (1 - OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_HIT.HITM_OTHER_CORE / (OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_HIT.HITM_OTHER_CORE + OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_HIT.SNOOP_HIT_WITH_FWD))) * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS / 2) / tma_info_thread_clks
  threshold: tma_data_sharing > 0.05 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))
  desc:      Estimates the fraction of cycles the memory subsystem was handling synchronizations due to data-sharing accesses. Data shared by multiple Logical Processors (even just read-shared) may cause increased access latency due to cache coherency; excessive data sharing can drastically harm multithreaded performance. Sample with: MEM_LOAD_L3_HIT_RETIRED.XSNP_HIT_PS. Related metrics: tma_contested_accesses, tma_false_sharing, tma_machine_clears, tma_remote_cache
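The llc_data_read_* entries read as a Little's-law style estimate (an interpretation, not stated in the table): average CHA TOR occupancy divided by insert rate gives residency in uncore clocks, which the per-CHA clock rate converts to nanoseconds. A sketch with hypothetical totals:

    # Sketch: LLC miss latency in ns from CHA TOR occupancy/inserts counters.
    def llc_miss_latency_ns(tor_occupancy, tor_inserts, cha_clockticks, n_cha, elapsed_s):
        cycles_per_miss = tor_occupancy / tor_inserts        # avg residency, in CHA clocks
        cha_hz = (cha_clockticks / n_cha) / elapsed_s        # clock rate of a single CHA
        return 1e9 * cycles_per_miss / cha_hz                # nanoseconds per miss

    # 2.4e9 clockticks across 24 CHAs in 1 s -> 100 MHz per CHA is deliberately unrealistic;
    # the point is only the shape of the computation.
    print(llc_miss_latency_ns(6.0e8, 3.0e6, 2.4e9, 24, 1.0))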
Excessive data sharing can drastically harm multithreaded performance. Sample with: MEM_LOAD_L3_HIT_RETIRED.XSNP_HIT_PS. Related metrics: tma_contested_accesses, tma_false_sharing, tma_machine_clears, tma_remote_cache100%01tma_false_sharingBvMS;DataSharing;Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_store_bound_group(110 * tma_info_system_core_frequency * (OFFCORE_RESPONSE.DEMAND_RFO.L3_MISS.REMOTE_HITM + OFFCORE_RESPONSE.PF_L2_RFO.L3_MISS.REMOTE_HITM) + 47.5 * tma_info_system_core_frequency * (OFFCORE_RESPONSE.DEMAND_RFO.L3_HIT.HITM_OTHER_CORE + OFFCORE_RESPONSE.PF_L2_RFO.L3_HIT.HITM_OTHER_CORE)) / tma_info_thread_clkstma_false_sharing > 0.05 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))This metric roughly estimates how often CPU was handling synchronizations due to False SharingThis metric roughly estimates how often CPU was handling synchronizations due to False Sharing. False Sharing is a multithreading hiccup; where multiple Logical Processors contend on different data-elements mapped into the same cache line. Sample with: MEM_LOAD_L3_HIT_RETIRED.XSNP_HITM_PS;OFFCORE_RESPONSE.DEMAND_RFO.L3_HIT.SNOOP_HITM. Related metrics: tma_contested_accesses, tma_data_sharing, tma_machine_clears, tma_remote_cache100%01tma_info_bottleneck_memory_synchronizationBvMS;Mem;Offcore;tma_issueTLB100 * (tma_memory_bound * (tma_dram_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound) * (tma_mem_latency / (tma_mem_bandwidth + tma_mem_latency)) * tma_remote_cache / (tma_local_mem + tma_remote_cache + tma_remote_mem) + tma_l3_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound) * (tma_contested_accesses + tma_data_sharing) / (tma_contested_accesses + tma_data_sharing + tma_l3_hit_latency + tma_sq_full) + tma_store_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound) * tma_false_sharing / (tma_dtlb_store + tma_false_sharing + tma_split_stores + tma_store_latency - tma_store_latency)) + tma_machine_clears * (1 - tma_other_nukes / tma_other_nukes))tma_info_bottleneck_memory_synchronization > 10Total pipeline cost of Memory Synchronization related bottlenecks (data transfers and coherency updates across processors)Total pipeline cost of Memory Synchronization related bottlenecks (data transfers and coherency updates across processors). Related metrics: tma_dtlb_load, tma_dtlb_store, tma_info_bottleneck_memory_data_tlbs00uncore_frequencyUNC_CHA_CLOCKTICKS / (#num_cores / #num_packages * #num_packages) / 1e9 / duration_timeUncore operating frequency in GHz1GHz00power_channel_ppdUNC_M_POWER_CHANNEL_PPD / UNC_M_CLOCKTICKS * 100Cycles where DRAM ranks are in power down (CKE) modeCounts cycles when all the ranks in the channel are in PPD (PreCharge Power Down) mode. If IBT (Input Buffer Terminators)=off is enabled, then this event counts the cycles in PPD mode. If IBT=off is not enabled, then this event counts the number of cycles when being in PPD mode could have been taken advantage of00LLC_MISSES.PCIE_READUNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART0 + UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART1 + UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART2 + UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART3PCI Express bandwidth reading at IIO. Derived from unc_iio_data_req_of_cpu.mem_read.part0Data requested of the CPU : Card reading from DRAM : Number of DWs (4 bytes) the card requests of the main die.    Includes all requests initiated by the Card, including reads and writes. 
: x16 card plugged in to Lane 0/1/2/3, Or x8 card plugged in to Lane 0/1, Or x4 card is plugged in to slot 04Bytes00LLC_MISSES.PCIE_WRITEUNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART0 + UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART1 + UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART2 + UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART3PCI Express bandwidth writing at IIO. Derived from unc_iio_data_req_of_cpu.mem_write.part0Data requested of the CPU : Card writing to DRAM : Number of DWs (4 bytes) the card requests of the main die.    Includes all requests initiated by the Card, including reads and writes. : x16 card plugged in to Lane 0/1/2/3, Or x8 card plugged in to Lane 0/1, Or x4 card is plugged in to slot 04Bytes00power_channel_ppdUNC_M_POWER_CHANNEL_PPD / UNC_M_CLOCKTICKS * 100Cycles where DRAM ranks are in power down (CKE) modeChannel PPD Cycles : Number of cycles when all the ranks in the channel are in PPD mode.  If IBT=off is enabled, then this can be used to count those cycles.  If it is not enabled, then this can count the number of cycles when that could have been taken advantage of00power_self_refreshUNC_M_POWER_SELF_REFRESH / UNC_M_CLOCKTICKS * 100Cycles Memory is in self refresh power modeClock-Enabled Self-Refresh : Counts the number of cycles when the iMC is in self-refresh and the iMC still has a clock.  This happens in some package C-states.  For example, the PCU may ask the iMC to enter self-refresh even though some of the cores are still processing.  One use of this is for Monroe technology.  Self-refresh is required during package C3 and C6, but there is no clock in the iMC at this time, so it is not possible to count these cases00tma_contested_accessesBvMS;DataSharing;Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_l3_bound_group(49 * tma_info_system_core_frequency * (MEM_LOAD_L3_HIT_RETIRED.XSNP_FWD * (OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HITM / (OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HITM + OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HIT_WITH_FWD))) + 48 * tma_info_system_core_frequency * MEM_LOAD_L3_HIT_RETIRED.XSNP_MISS) * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS / 2) / tma_info_thread_clkstma_contested_accesses > 0.05 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))This metric estimates fraction of cycles while the memory subsystem was handling synchronizations due to contested accessesThis metric estimates fraction of cycles while the memory subsystem was handling synchronizations due to contested accesses. Contested accesses occur when data written by one Logical Processor are read by another Logical Processor on a different Physical Core. Examples of contested accesses include synchronizations such as locks; true data sharing such as modified locked variables; and false sharing. Sample with: MEM_LOAD_L3_HIT_RETIRED.XSNP_FWD;MEM_LOAD_L3_HIT_RETIRED.XSNP_MISS. 
Related metrics: tma_data_sharing, tma_false_sharing, tma_machine_clears, tma_remote_cache100%01tma_data_sharingBvMS;Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_l3_bound_group48 * tma_info_system_core_frequency * (MEM_LOAD_L3_HIT_RETIRED.XSNP_NO_FWD + MEM_LOAD_L3_HIT_RETIRED.XSNP_FWD * (1 - OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HITM / (OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HITM + OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HIT_WITH_FWD))) * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS / 2) / tma_info_thread_clkstma_data_sharing > 0.05 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))This metric estimates fraction of cycles while the memory subsystem was handling synchronizations due to data-sharing accessesThis metric estimates fraction of cycles while the memory subsystem was handling synchronizations due to data-sharing accesses. Data shared by multiple Logical Processors (even just read shared) may cause increased access latency due to cache coherency. Excessive data sharing can drastically harm multithreaded performance. Sample with: MEM_LOAD_L3_HIT_RETIRED.XSNP_NO_FWD. Related metrics: tma_contested_accesses, tma_false_sharing, tma_machine_clears, tma_remote_cache100%01tma_false_sharingBvMS;DataSharing;Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_store_bound_group54 * tma_info_system_core_frequency * OCR.DEMAND_RFO.L3_HIT.SNOOP_HITM / tma_info_thread_clkstma_false_sharing > 0.05 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))This metric roughly estimates how often CPU was handling synchronizations due to False SharingThis metric roughly estimates how often CPU was handling synchronizations due to False Sharing. False Sharing is a multithreading hiccup; where multiple Logical Processors contend on different data-elements mapped into the same cache line. Sample with: OCR.DEMAND_RFO.L3_HIT.SNOOP_HITM. Related metrics: tma_contested_accesses, tma_data_sharing, tma_machine_clears, tma_remote_cache100%00tma_info_system_dram_bw_useHPC;MemOffcore;MemoryBW;SoC;tma_issueBW64 * (arb@event\=0x81\,umask\=0x1@ + arb@event\=0x84\,umask\=0x1@) / 1e6 / duration_time / 1e3Average external Memory Bandwidth Use for reads and writes [GB / sec]Average external Memory Bandwidth Use for reads and writes [GB / sec]. Related metrics: tma_fb_full, tma_info_bottleneck_cache_memory_bandwidth, tma_mem_bandwidth, tma_sq_full00tma_l3_hit_latencyBvML;MemoryLat;TopdownL4;tma_L4_group;tma_issueLat;tma_l3_bound_group17.5 * tma_info_system_core_frequency * (MEM_LOAD_RETIRED.L3_HIT * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS / 2)) / tma_info_thread_clkstma_l3_hit_latency > 0.1 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))This metric estimates fraction of cycles with demand load accesses that hit the L3 cache under unloaded scenarios (possibly L3 latency limited)This metric estimates fraction of cycles with demand load accesses that hit the L3 cache under unloaded scenarios (possibly L3 latency limited).  Avoiding private cache misses (i.e. L2 misses/L3 hits) will improve the latency; reduce contention with sibling physical cores and increase performance.  Note the value of this node may overlap with its siblings. Sample with: MEM_LOAD_RETIRED.L3_HIT_PS. 
Related metrics: tma_info_bottleneck_cache_memory_latency, tma_mem_latency100%00��R��R��R��R��R��R��R��R�R��R�R��R"�R��R'�R��R,�R��R1�R��R6�R��R;�R��R@�R��RE�R��RJ�R��RO�R��RT�R��RY�R��R^�R��Re�R��Ro�R��R{�R��R��R��R��R��R��R��R��R��R��R��R��R��R��R��R��R��R��R��R��R��R��R��R��R��R��R��R��R��R��R��R��R��R��R��R��V��R�R��R�42��R
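Each MetricExpr above is plain arithmetic over hardware event counts. Below is a minimal sketch, not perf's own implementation, of how the tma_l3_hit_latency expression combines its inputs; every counter value in it is a hypothetical placeholder (real values would come from reading the named events, for example via perf stat).

# Minimal sketch of evaluating the tma_l3_hit_latency MetricExpr above.
# All counter values are hypothetical placeholders, not measurements.

counts = {
    "MEM_LOAD_RETIRED.L3_HIT": 1_200_000,    # hypothetical
    "MEM_LOAD_RETIRED.FB_HIT": 300_000,      # hypothetical
    "MEM_LOAD_RETIRED.L1_MISS": 2_000_000,   # hypothetical
    "CPU_CLK_UNHALTED.THREAD": 900_000_000,  # stands in for tma_info_thread_clks
}
core_frequency_ghz = 3.0  # stands in for tma_info_system_core_frequency


def tma_l3_hit_latency(c, core_freq_ghz):
    # 6.5 * core_frequency * L3_HIT * (1 + FB_HIT / L1_MISS / 2) / thread_clks:
    # the 6.5 constant models the average cost of an L3 hit, converted to core
    # cycles via the frequency; L3 hits are inflated by the share of
    # fill-buffer hits that piggyback on an outstanding L1 miss.
    weighted_hits = c["MEM_LOAD_RETIRED.L3_HIT"] * (
        1 + c["MEM_LOAD_RETIRED.FB_HIT"] / c["MEM_LOAD_RETIRED.L1_MISS"] / 2
    )
    return 6.5 * core_freq_ghz * weighted_hits / c["CPU_CLK_UNHALTED.THREAD"]


frac = tma_l3_hit_latency(counts, core_frequency_ghz)
print(f"tma_l3_hit_latency = {frac:.1%}")  # ScaleUnit 100%: reported as a percentage
# The MetricThreshold only flags this node when it and its parents are hot:
# frac > 0.1 while tma_l3_bound > 0.05, tma_memory_bound > 0.2, and so on.

In practice perf evaluates these expressions itself when a metric is requested by name (perf stat -M tma_l3_hit_latency); the sketch only makes the arithmetic explicit.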
2FfCgC�:�gC�hC�iC�jC�kC�2zlC'2mCt�1�WC��1�mCVnC*oC�oC�pC	qC�qC�rC
sC12�sCT�tC�uC-vC�vCLX�wC�Y%!2qxCyC�yC�^c_3:C�:C|;C5F�5F:�CϣC�CեC�C�t2��CɨC��CU�C�1��1>�1�1��1�0C�1C�2C�3C�4CK<C}=Cf>C��1p�1>�1:?C�?CY�1AC��1�ACK�1�BC�CC%EC��1�1�_oFCrGC�HCqp�IC�rWs�#2zC�zC�zC�{C�|C.k�}CY�C�C�C��C��CQ�Cv�B�BGa1b1��B��BN�B�B��B��BXe1�e1^f1�B��B(�B��BT�B�Bd�B�B��B�B��BV�B��B��BP�B�Bf�B��B��B�Bls1�s1��B%u1�u16v1�v1EC��C��C�22�325w1Z�1LCCC�C�C�v�v�C�JC6�CC�CsC9C�C�C��C(�C�C�CbC�C�C	C�	CKC�KC�LCMCONCOC�OC�PC�QCe
C�
CKC�CbC�C�
C��12RC�z{E�E�1��1C��1�C{C�C�CCOC}C�C�C�C��Ck�CD�C�C�C�C��C&�Cq�C$�C�C+�CU�C,�C�C�C�C[�C��C��Cv�Ct�C#C:$Ca�C�C��C��C��C�&CL(CE'CQ)CE�Cw�CI�C��C��Cx�C��C�+CN�C�CӿC��CF�C\�C<�C�CZ�C��C:�C%�C��C��C��C��C��C��C^�CL�C%�Cq�C��Cv�C/C	0CЅC(6CK7C�7Cg�C�:2M;2�C�<2�Cȃx8CU9C��1�C��C��CC�C��C��Cf�C�CnjC��CX�C�CŏCs�C2�CבC��C��C��Co�C��Q��ߤI�C�C�C�C$�CVU2ӚCc�C��CbY2�C"[2\�CB�PK�S��P
�Pm�P��P4�P
5Q`�P�P��Sc�Y3�SX�SְSֳS��S�S��Y��S�Y��SQ�Yy�S��S��Y��YY�Y{�S��Sb�XτQ��Y*�Y;�Y��Y��S�S��Y�S�STBT�Y�T�T�TpT	T=�Y�TeT�T;T�TR�R�R%T�TuT�T�T|T��Y�T� TL!T"T#T��XG&TW'T� R�!R(T�(TU�Y�+T�,Ta-T0.TC,R/T�/T�-R&0T<1T�1Tb2T�2T)�Y�Xn�X$�Y��YNCT�ET>�Y�KT�NT��X�VT$�Y�]T�cT��YikT�mT�oT�qT�sT!uTf�Y��X�T��T�T�T�Y��T��T]�T>�T��T{�T	�Y'\2�\27]2�]2G^2�^2s_2`2�`2,a2�a2:b2�b2Zc2�c2ld20f2�f2���d2�e2ӱj�kg2��C�q�C��C/j2����C2�CԢC�XC8YCM�1��1"ZC�ZC�[Cp\C�2Y]C�]C�^C�_C�`CxaC)bCC2�2cC�cCdC�dCPeC�eC[2
2FfCgC�:�gC�hC�iC�jC�kC�2zlC'2mCt�1�WC��1�mCVnC*oC�oC�pC	qC�qC�rC
sC12�sCT�tC�uC-vC�vCLX�wC�Y%!2qxCyC�yC�^c_3:C�:C|;C:�CϣC�CեC�C�t2��CɨC��CU�C�1��1>�1�1��1�0C�1C�2C�3C�4CK<C}=Cf>C��1p�1>�1:?C�?CY�1AC��1�ACK�1�BC�CC%EC��1�1�_oFCrGC�HCqp�IC�rWs�#2zC�zC�zC�{C�|C.k�}CY�C�C�C��C��CQ�Cv�B�BGa1b1��B��BN�B�B��B��BXe1�e1^f1�B��B(�B��BT�B�Bd�B�B��B�B��BV�B��B��BP�B�Bf�B��B��B�Bls1�s1��B%u1�u16v1�v1EC��C��C�22�325w1Z�1LCCC�C�C�v�v�C�JC6�CC�CsC9C�C�C�CbC�C�C	C�	CKC�KC�LCMCONCOC�OC�PC�QCe
C�
CKC�CbC�C�
C��12RC�z{E�E�1��1C��1�C{C�C�CCOC}C�C�C�C�COC�RCCC�C�C�C�SC C�TC� CJ!C�!C#C]UC�#C:$C�$C�UC
&C�&CL(CE'CQ)C�VCy*C�*C�+C�,C�-C/C	0CЅC(6CK7C�7CjWCg�C�:2M;2�C�<2�Cȃx8CU9C��1�C��C��CC�C��C��Cf�C�CnjC��CX�C�CŏCs�C2�CבC��C��C��Co�C��Q��ߤI�C�C�C�C$�CVU2ӚCc�C��CbY2�C"[2\�C��PB�P
�Pm�P
5Q�%UV&U�&U��V��VD�V��V��V��Yn�Vu�Vq�Vn�V60U5uU2U�2UvU�vU5U/Xf�V��V=�V?�V�~UkU9�U�U؁U>U�>U?U��V��V��V_�Ul�U��U��U%�U�U`�P�P��Uq�X��X��X��Xv@Q�Xt�X��U�U��UƂY
XQ݇Y.�U�UD�U�U��U��XBWK�Y4&W|{Qc�X<�XτQ��Xx�X�X��X��Xn�X�AWMYLW Y�MW�NW�OWQW�S��U��U��U�Ux
Y��UD�U9�Ug�Y��Q�UI�U�Y�Y�V��Q
V�V�rW@sW�sW�Q�tWBTVYY�T�T�Y�{W�VUYV�T�V�VpT	T�	TmTH
T�VT�T�TeT� Y?V%"Y�Ve V�"YR�R�"V�#VHR�R�$V%T�%V2&VuT�T�&Vs'YT��W�'V|TW(V�T� T �W�*Y�(V��W�)Vh*V�+V�,V;1Y�.V�/V�0V��WE�Wr�W� R�!R(T>3V�3Y͟W�Y�+T�,Ta-TҟY�;V]�Y�>V.@V�@Ve5Y�6Y�8Y��W�GV/T�[U�/T�-R&0T<1T��WA�W�W`:YHV�TJV;Y>Y
�Y?VVPRYDY�UR�\VצY�EY�^RdV�hV�KY�OY�PYrtV��WLTY�wV�UYa�W�ZY�V��V��Y�\Y_Y/aY�cYd�V�eY��WohY>lY��YϴY`X�nY��Y�XsYuY��V�wYm{Yg$X`�Rq�V-&X�~Y��V=�P��P��P`�P��V&�V�V�
>\d)Vf)7������
�����o�Ph)����t��l�۪;���6���1���;���7j)m)ɴ1�����r)p)�w)�z)@})�)�>>>>�>Z>�>�>{>� >*#>�%>G(>��)��)��)��)�*>��)�)C,>�)s->;�)a.>>�)}�)��)D�)��)��)��)��)��)�) �)^�)٢)T�)��)����m���a�)K���"�����i���F�ϧ)�����)C�)��)��)K/>�1>�9>=<>�>>�A>m4>
7>SD>�F>mI>�K>Z�)�N>JO>V�)��)I�)��){�)7�)��)��)��)�)o�)h�)� `�)>-P�)��)D�)��)��)2P>d�)��)T�)��)D�)��)N�)��)X�)��)>�)��)B�)��)2�)��)�)��)�)V �  ��)��Bܕ"3�"F�,қ,^�,�,v�,ĝ,�,�,ȟ,ǡ,N�,��,ͥ,b�B�,h�,��Bf�B�B^�,��,��,��I�;I�;I�;I�;I�;L�;O�;R�;U�;X�;X�;X�;Y�;Z�;[�;\�;`�;d�;h�;l�;p�;q�;r�;r�;r�;r�;r�;u�;x�;{�;~�;��;��;��;��;��;��;��;��;��;��;��;��;��;��;��;��;��;��;��;��;��;��;��;��;��;��;��;��;��;��;��;��;��;��;��;��;��;��;��;��;��;��;��;��;��;��;��;��;��;��;��;��;��;��;��;��;��;^�';�'�'��'��;����'�'r�;��;��;�;y�;��;�;x�;��'��'��;7�;�<9<�<Y<�<W"<�	<B
<�<R<�%<V)<�,<V0<�3<6<��'��'��'[�'�'��'��'��'M8<�8<��'��'��'��'��'��'�9<�:<�;<�<<�=<�><@<A<8B<sC<�C<cD<�D<sE<�E<aF<�F<bG<�G<{H<��'�'��'�'A�'%�'^�'�(�(.("(p(T(�(	(�	(�
(�(
(@(f(�(�(�(n(�(*(K(�(s(�(H(y(!(%"(�#(M%(�&(
((/)(�*(�+(--(�.(%0(�1(�2(44(I5(�6(Y8(�9(�:(<(;=(->(,?(�?(�@(�A(oB(6C(D(E(F(�F(xG(fH(JI(�K(yN(�P(�S(�U(�X(�H<SK<�M<UP<
S<[U<�W<_[(�\(m](H^(Z_(Ip(�q(s(it(�u(�v(x(y(;z(L{(p|(�}(~(�`(�e(�a(�b(Kc(
d(e(�f(�g(Zh(<i(�i(tj(\k(3l(#m(Nn(ro(�(F�(7�(��(��(׉(�(�(��(K�(��(��('�(��(/�(�(	�(ގ(��(��(�Z<J]<�_<b<�d<�f<_i< l<�n<'q<qs<v<Ox<�z<�}<�<v�<��<*�<G�<��<g�<��<4�<a�<�<�<w�<k�(%�(��(��(l�(�(��(ɝ(��(V�(�(ʤ(G�(�('�(�(��(x�(N�((�(��(ܯ(��(��(H�(��(��(��(%�<۠<|�< �<֥<w�<�<��<Ȫ<��<F�<�<��<Ȯ<��<F�<�<��<c�<�<��<`�<�<��<I�<�<�<�<"�<�<<�<h�<�<��<��<��<��<�<�<(�<Z�<w�<��<��<��<'�<o�<��<�<M�<��<��<+�<s�<��<�<N�<��<��</�<w�<��<�<O�<��<��<0�<{�<��<�<S�<��<�<1�<|�<�<�<Z=�=�=9=�=�==d	=�
=�=4�(�(ζ(��(e�(5�(�(պ(��(��(��(��(��(f�(:�(�(��(��(��(d�(8�(��(��(��(Q�(�(��(��()�(��(�(��(�(��(��ݒ�(��(��(s�(=�(
�(��(��(z�(N�("�(��(��(��(u�(L�( �(��(��(s�(9�(�(��(��(^�(��(R�(��(D�(��(>�(��(6�(��(5�(��(7�(��(E
=`=e=m=�=�=�=�=�=�=�=�=�=�=�=�==y=� =#"=�#=%=h&=�'=)=�*=,=x-=�.=B0=�1=63=�4=�5=^7=�8=�9=.;=�<=�==7?=�@=�A=�B=�C=�D=XE=JF=HG=1H=I=�I=LK=�L=�M=O=oP=�Q=S=oT=�U=!W=�X=�Y=][=�\=Q^=�_=�`=�a=�b=�c=�d=te=df=Wg=Gh=:i=*j=k=l=�m=�o=yq=
s=�t=�v=xx=Wz=$|=	~=�=��=>�=!�=�=Lj=��=��=@�=ޏ=��=��=��=e�=�=��=l�=K�=-�=�=�=͟=��=z�=\�=>�=��=��=O�=ĩ=��=<�=ޮ=��=R�=��=	�=c�=��=�=w�=��=�=w�==�(�(��(�(��(��(��(A�(G)I)J)S)X)\)@)п=J�=��=7�=��=�=��=�=��=,�=��=)�=~�=��=k�=��=P�=��=��=2�=�>/>�)b)�)�)�)�)�)h!)�")�#)�$)�%)')y+)�/)�3)�7)�;)
@)zA)�B)�C)E)�E)"G)�G)dH)I)�I)GJ)�J)nK)�K)}L)M)�M)�N)tO)N)�N)�O)aP)�P)NQ)�Q)YR)�R)xS)T)�T)�	>�b@�c@�d@�e@�f@�g@�h@�i@�j@�k@�l@�m@xn@mo@bp@Wq@Or@Gs@?t@7u@/v@$w@x@
y@z@�z@�{@�|@�}@�~@�@��@��@��@��@��@��@x�@p�@h�@`�@X�@P�@E�@:�@.�@"�@�@
�@�@��@�@�@ݕ@і@ŗ@��@��@��@��@��@��@��@y�@q�@f�@[�@O�@C�@7�@+�@"�@�@�@�@��@�@�@ۭ@Ю@ů@��@��@��@��@��@��@��@|�@+�@
�@-�@t�@Ƚ@��@L�@��@@+N+��@J�@��@4�@��@�@|�@��@��@%�@��@�@e�@��@J�@��@5�@c�@X
+�+�++�+N+�++G!+x#++&+�(+�++D.+�0+�3+54+�4+?5+�5+L6+�6+\7+�7+i8+�8+s9+�9+�:+
;+�;+<+�<+��@ �@L>+�>+^?+�?+p@+�@+ZA+�A+VB+�B+dC+�C+uD+�D+�E+F+�F+G+�G+H+�H+#I+�I+)J+�J+/K+�K+=L+�L+KM+�M+_N+�N+sO+�O+�P+Q+��@CA�Q+�Q+TR+�R+$S+�S+�S+\T+�T+6U+�U+V+qV+�V+IW+�W+7X+�X+9Y+�Y+>Z+�Z+C[+�[+E\+�\+A]+�]+=^+�^+?_+�_+-`+�`+-a+�a+?b+�b+=c+�c+;d+�d+?e+�e+=f+�f+?g+�g+Eh+�h+Ni+�i+Wj+�j+�A�A�A�A�A�AA�A	AN
A�
A&A�AA�A�At
A�
A[A�A]k+2l+m+�m+�n+�o+fp+<q+r+[s+�t+�u+0w+zx+�y+{+A�A^A�A�A5A�A�AbA%A�A�AfA&A�A�AB!A�"A�$A&A�'A^)A+A�,A#.A�/A�0A�1A�2A�3A�4A�5A�6A8A�8A:A,;A@<A4=AN>At?A�@A�AA�BA�CAEAAFA}GA�HA�IA:KAyLA�MA�NA-PAiQA�RA�SA&UAeVA�WA�XAZAU[A�\A�]A_AQ`A�aA�bAdAAeA�fA�gA�hA=jA|kA�lA�mA5oAspA�qA�rA0tAnuA�vA�wAU|+�|++yA4zA@{AI|AU}A^~A�|+~+jAU�AC�A.�A�A�A+�+��A��A*�AǏAg�A�A��A��A��A��A��A��A��A��A��A��At�Ar�A}�Ar�Aj�AB�A��A�A]�A��A��Aa�A��A�AI�A°A%�A��AѴA=�A��A�A�AŻA�A_�A��A�A�Al�A��A��A�A
�A��A��A��Am�A_�A<�A�A��A3�At�A��A��A&�A}�A��A�A&�A��A�AH�A��A*�A��A��A��A��A��A��Ax�A_�AC�A*�A�A��A��A��A��A{�A.�A��Aq�A?�A	�A��A��AO�A(B�B�B9BB�	B�Bz
BDB�B�B"B�B3B�BmBB�B�BhB; B!B�!B�"By#BO$B%%B�&Bp(B*B{+B--B�.Bq0B(2B�3B5Bl6B�7B9BY:B�;B�<B5>B�?B�@BADB�GBKBG\B�_B cB�fB}NB�QBfUB�XB�iB]mB�pB.tB�wB�yB�{BQBĂB6�Bx�8v�8t�8r�8p�8q�8r�8s�8t�8u�8s�8q�8p�8o�8n�8m�8o�8q�8s�8u�8w�8v�8u�8s�8q�8o�8m�8n�8o�8p�8q9r9p9n9m9l9k9j9l9n	9p
9r9t9s
9r9p9n9l9j9k9l9m9n9o9m9k9j9i9h9g9i9k9m 9o!9q"9p#9o$9m%9k&9i'9g(9h)9i*9j+9k,9l-9j.9h/9g09f19e29d39f49h59j69l79n89m99l:9�:9@;9�;9<9�Aq<9=9v=9�=9�>9�>9�?9$@9�@9A9�A9B9uB9�B9[C9�C9kD9�D9}E9�E9sF9�F9iG9�G9SH9�H9=I9�I9K9AL9�M9�N90P9�Q9�R9%T9sT9�T9�U9�V9|Z9�]9za9s9�v9z9�}9�d9h9l9�o9�9��9��9�9��95�9p�9��9��9��9y�9��9��9��9ї9X�9И9N�9˙9V�9Κ9��9�9��9�9��90�9��9#�9��9�9��9�9��9��9m�9�9s�9�9z�9c�9ڥ9H�9ɦ9L�9��90�9��96�9é9=�9&�9��9 �9��9�9��9�9��9<�9��95�9��9߰9��9!�9��9��9O�9ų96�9��9�9��9�9y�9�9X�9÷9-�9��9�9t�9�9O�9��9�9w�9ڻ9P�9Ƽ9<�9��9+�9��9'�9��9l�9.�9�9��9��9d�9��9��9�9��9"�9��9"�9��9�9��9�9��9�9��9	�9��9�9��9�9��9��9r�9��9h�9��9p�9��9r�9��9r�9��9c�9��9Y�9��9=�9��9�9��9��9i�9��9_�9��9>�9��9�9s�9��9�9u�9��9Q�9��9K�9��9�9�9��9��9��9��9��9��9��9�9M�9��99�9��9E�9��9/�9��9,�9��9A�9��9	�9��9�9�9��9w�9��9s�9�����9t�K�9��9ί%��ٰ�9�9f�9�9��9[�9��9�9��9s: :�:�:�:i::�:�:-
:�:z
:$:�:z::�:�:�:�:�:�::-:E:=:m :�!:�":�#:�$:�%:':1(:/):�):*:j*:�*:"+:w+:�+:!,:g-:�.:�/:91:�2:�3:5:]6:�7:�8:/::u;:�<:>:P?:�@:�A:%C:kD:�E:�F:CH:�I:�J:L:aM:�N:�O:6Q:R:�S:U:ZV:�W:�X:1Z:y[:�\:^:T_:�`:�a:/c:wd:e:�f:�g:�h:�t��h:[i:�i:%j:�j:�j:Qk:�k:!l:�l:�l:Ym:u����3�����P����m:?n:�n:�o:p:�p:q:�q:�q:\r:�r:.s:�s:t:lt:�t:wu:�u:Jv:�v:!w:�w:,x:�x:	y:ty:�y:yz:{:�{:W|:�|:I}:�}:6~:�~:�n::�:�:}�:�:��:�:��:�:|�:�:��:�:y�:��:��:1�:ԇ:v�:��:��:��:��:��:��:Ϗ:ΐ:Б:��:��:ϔ:Ε:Ж:��:(�:��:�:.�:��:
�:m�:Т:�:��:�:s�:é:9�:��:)�:��:�:K�:��:�:�:j�:̸:�:g�:��:��:y�:c�:-�:�:�:�:��:��:�:`�:��:��:0�:��:��:*�:X�:��:H�:��:�:��:��:N�:F�:A�:9�:4�:"�:�:�:��:��:��:��:��:��:�:<�:�:��:k�:?�:��:��:��:��:L�:�:��:��:s;8;;�;�;G
;�;\
;;�;a;;�;�;�;a;A;;�;�;�;�;E;� ;�";$;�%;y';);�*;�,;�-;</;�0;�1;G3;�4;�5;A7;�8;�9;k=;�@;TD;�U;/Y;�\;`;�G;EK;�N;>R;�c;�f;rj;�m;Sq;�s;�u;/y;�|;(�;��;��;Q�;��;%�;��;�;��;�;��;�;��;�;v�;�;R�;��;+�;��;�;v�;�;X�;Ǎ;7�;��;&�;��;*�;��;6�;��;-�;��;�;~�;�;[�;֔;Q�;̕;K�;ʖ;��8�8k'_�8��8���}�8�C�h�%�Em'�m'!n'�n'-o'�o'Np'�p''q'�q'Ar'�r'�s'rt'Gu'%v'w'�w'9�8��8�8~�8��8|�8��8E�8��8E�8F2!��x'(y'�y'�y'7z'�z'�z'��8��8��8$r�n�j�E�8�|'�}'�~'�'G�'��'A�B��BZ�B�B��B��BېB��B�B��Bp�B[�B�#3–Bo�B�B˜Bh�B��B��B8�B��BšBD�B��B�B��B��B��B �B��B5�BӟB;�B��B �B�B�B��!��!`�B=�B�BФB��!�!��B�B.�BݦB��Bq�BF,TG,tJ,�M,�P,�S,V,�W,;�Bb�B�BƭB��B�B	�B
�BP�B��B�B3�B��B1�BU�B�h,k,Im,�o,y�B�r,fs,�BJ�B�s,@t,��B�B�t,u,d�B�u,?v,�v,�v,9w,�w,�w,�Bj�B�BZx,�x,��B!�By,�5"@6"Hy,�y,!z,�z,�z,J{,�{,|,�|,�|,]},�},~,��B�~,D,�B�,�,��,�,��,"�,��,�,p�,��B1�B��Bƃ,�,B�B��Bs�,ʄ,(�,N�B��Bg�B��,؅,��B{�B1�,~�,؆,�B��B�B&�,t�,��B�BRB"�B"�B"IC"�C"�C"ZD"�D"��B��B]�B��BG�B��B��BZ�B�B͉,��,��,)�,��BD�B��Ba�Bd�)�)��)b�)�)��)`�)
�)�Q>�P>�R>S>�S>CU>�V>�W>Y>bZ>�[>�\>@^>_>�_>�`>"b>Mc>xd>�e>�f>h>0i>�j>(l>�m>�o>.q>�r>�t>v>�w>Py>�z>B|>~>�>?�>Ă>��>�>ʇ>R�>Ŋ>8�>�>��>#�>��>o�>�>��>�>��>��>��>;�>ڠ>V�> �>��>A�>��>Y�>�>�>��>O�>�>�>��>Z�>�>t�>�>��>-�>�>L�>�>��>=�>��>a�>�>��>��>�>.�>+�>��>��>d�>�>��>��>G�>�>��>��>k�>=�>��>^�>��>��>�>��>6�>�>��>'�>��>:�>D5*-7*�8*�:**<*>*�?*{A*�>5?�?^?�?k?�?�
??�
?.?�?�?�?)?�?0?�?e?�?r?�?� ?�"?�#?S%?�&??(?�)?+?X,?�-?�/?�1?S3?(5?�6?�8?�:?g<?>?�??bA?\C?E?�F?�H?�J?;L?N?�O?Bl*ym*�n*�P?(R?FS?�T?�U?,W?SX?�Y?�y*rz**{*�Z?�{*�|*E}*�[?�\?w]?s^?-_?�_?�`?1�*f�*fa?b?�b?�c?�d?�e?�f?Pg?�*׈*��*��*Z�*G�*>�*��*َ*��*v�*`�*)h?eh?ti?,j?�j?Ak?�k?`l?�l?�m?in?�o?��*-�*ɔ*]�*�*w�*�*��*7q?��*�*�q?r?Zt?�v?�x?�z?}??�?��?��?�?��*'�?I�?_�?�?��?�?�?j�?�?��?p�?��?��?1�?��?!�?�?t�?�?��?q�?��?��?)�?��?
�?ֲ?T�?��?s�??�?��?d�?�?T�?�?��?�?��?�?��?d�?�?��?��?k�?6�?��?Y�?��?��?*�?��?S�?��?5�?��?z�?�?��?b�?��?��?�?��?I�?B�?��?��?o�?k�?�?�?��?@@6@�@�@Z@S	@@�@�@�@j@6@�@[@�@�@/@�@Z@�@'�*�*��*x�*�*�*��*s�*�!@�"@L$@&@w'@
)@z*@8,@�-@A/@�0@2@v3@.5@�6@+8@�9@S;@�<@V>@�?@A@RB@�C@AE@�F@H@�I@�J@uL@�M@sO@Q@S@�T@�V@^X@aZ@\@�]@�_@Ra@�*	�*��*�+B+B+�+�++�4��4U�4��4��4'�4��4a�4�4��4�4�4�4ۊ4Ջ4ό4ɍ4Î4��4��4��4��4��4��4��4��4��4}�4x�4p�4h�4_�4V�4M�4D�4>�48�42�4,�4&�4�4�4�4�4��4�4�4�4�4�4ۮ4ӯ4˰4±4��4��4��4��4��4��4��4��4��4w�4o�4g�4_�4W�4R�4M�4H�4C�4>�46�4.�4%�4�4�4
�4�4��4��4��4��4��4��4��4��4��4��4��4��4��4��4��4��4�$�$�$��4��
$$�$�!$�$$�'$�*$x-$d0$m3$��4��4��4<�4��4n6$�6$7$c7$�7$�7$T8$�8$�8$C9$�9$��4��4��43�4��4��4�4n�4�9$�:$��4��4Z�4r�4��4��4�4)�4�@$xA$�B$�C$�D$F�4��43�4�5"5�55�5&5�5!5�5� 5q$5�'5_+5�.515�CyE$;35F$�G$�H$�I$-45s65�85"95�95�;51>5�@5�B5xE5F5�H5I5lK5�K5	N5�P5�R5=U5�U5�W5Jg$	Z5a\5�^5�`5�g5�b5oe5k5i5~h5�m5�o5r5�t5�v5�x5b{5�}5�5�5P�5�5K�5�5(�5��5��$��5Ͳ$S�5��5D�5V�5�5o�5��5��5�5ڨ5u�5"�5�5��$�5�5-�5��$?�5��$Q�5Z�5o�5^�$��5*�$��$��$6p$�$j�$��53�5��$��$��$��$��$�$ʹ5J�5ǵ5G�5Ķ5>�5ȷ5V�5߸5i�5�5_�5�5f�5�5m�5�5h�5�5d�5m�5�5�5x�5��5��5�5��5�5��5�5��5��5w�5;�$��$�$��5D�5��5��$�$��$�$o�$(�$��$��$��$>�$��$��5��$m�$��5/�5��5U�5�$��$2�$�$��5��5�5��5:�5��5$�5q�5��5��5��5�5?�5e�5j�5o�5t�5y�5x�5��5��5��5��5c�5��5a�5��5?�5��5/�5��5�5��5z�$J�$�$�$�$�%}%P%5%%%�%�%�%x%]	%�	%�
%(%"%�%�
%x%�%�%�%�%x%O%5%%�%%[%�%�%�A%!%�%[%%�%�!%D#%%%�&%�(%G*%,%�-%z/%,1%�2%�4%l6%%8%�9%S;%�<%��5�=%c>%2?%p@%�A%�B%D%ME%�F%�G%I%J%NK%BL%�M%�N%�O%+Q%.R%S%�S%�T%�U%�V%rW%VX%;Y%�Y%�Z%[%\%~\%v]%V^%�^%�_%a%!b%.c%;d%We%qf%�g%yh%�i%\j%�k%ul%�m%�n%�o%�p%�q%�r%�s%�t%v%-w%Fx%0y%@z%{%<|%"}%N~%b%?�%R�%f�%y�%~�%��%��%��%��%��%��%r�%��%z�%��%��%��5_�5��5��5?�5�5r�5J�5
�5�5y�5E�56�6�6L66�6N6�6�
606�
6x6�6q6�6�6�6�6�6�6�6�6�66+6B69 6V!6"6�#6�$6�%6\'6)6k�%��%ȟ%�16Z36q56�66s�%§%��%��%��%+�%�%Y86��%��%$�%��%*�%��%8�%��%�*6�+62-6a.6�/6�06:6];6�<6�=6?6\@6�A6�B6"D6aE6�F6�G6I6`J6�K6�L6&N6eO6�P6�Q6"S6dT6�U6�V6*X6iY6�Z6�[6&]6h^6�_6�`6.b6pc6�d6�e62g6sh6�i6�j69l6zm6�n6�o6@q6er6�s6t6�u6�v6�w6\y6�z6r�%|6a}6�~6(�6��%��6��6u�6��6p�6�6��6��%�6y�6�6��69�6�6j�6�6e�6�6C�6��6�6}�6�6_�6ܡ6}�6	�6��6p�6�6��66�6ٮ6��6o�6��6��63�6��6s�6D�6�6��6��6�6\�69�6��6��6��%{�6C�6�6��6��6U�6'�6��6��6��6[�6��6f�6��6�6��6��6R�6��6q�6?�6��6��6��6_�6-�6�6��68�6�6��6g�6��6G�6��6v�6(�6�6��6�6��6�7q7
�6��6.77�7�75	7H-&�
77�1&�
7&7y7�7#7�77�;&~77�7,!7�"7�7�7Z7$7�%7K&'7�(70*7�+7A-7�.7D07�17�27�37157�77�87s:7R67�;7_=7�>7�?7�@7B7;C7]D7�E7?G7}H7�I7�J7�K70M7�N7P7fy&�z&lQ7}&�R78T7�U7+W7~~&�X7.Z7�[7U]7�^7y`7 b7Y�&{�&�c7]e7�f7�h7*j7I�&��&�k7Am7�n76p7�q7͙&^s7U�&�&�&u7�v7Nx7�&Ĭ&z7�{7R}7
7��&e�&<�&,�&�&�&��&��&��7��&��&��&k�79�7�7�7��7��7f�7��&?�7�7��7��&��&�&��&7�7�7ێ7��7��7c�7>�7B�&!�&~�&`�&�7��7e�7�7ϙ7��7V�7��&s�&�&��&	�&��&Y�&~�&�7Y�7��&�&z�&�&Ρ7��&:�7̤7'_�7��7��7y
'��7��7A'.�7ԯ7k�7�7/ 'N�7��7Ҷ7 �7��70�7��7W�78�7�7
�7K�7��7��7�7T�7�7]�7��7�7�7�7�7!�7�7�7*�7"�7�7��7��7�7��7��7��7D�7��7��75�7��7�7_�7��7��7s�7��7B�7��7��7v�7��7E�7��7�7.�7v�7��7�7G�7��7�7�8�8�8�8w8`8U8588�85
8y8�8�
848�8�88@8�8"8k8�8S8�88�8�8� 8�!8�"8�#8�$8q%8X&8B'8)(8)8*8�+8�-8K/8�08�28q48&68�78�98�;8[=8%?8�@8�B8fD8$F8H8�I8�K8M8�N8P8�Q8zS8U8�V8fX8<Y8Z8�Z8�[8�\8s]85^8_8�_8�a88c8�d8If8�g8�i8Hk8m8�n8�o8Oq8�r8�s8Eu8�v8�w8*y8~z8�{8?8��8�8Z�8˚89�8��8��8��8s�8�8�8�8�8V�8��8�8�8��8��8p�8B'C'9D'�E'��8z�8�8��8qG'
I'�J'HL'B�8��8��8�8�M'�N'�O'�P'}Q'R'P�3m�3��-F/i1&2�3>5�6+:�;�<��3��3a�3��3x�3+�4��3��3�3S�3*��K4zM4<O4�P4�Q4�S4KU4�Z�\�^�3��3oqv�wvxy�y,z����3�zWq3�s3�t3q�38�3ܠ"��B<4�4�4�4�4��"�B�44o4-{�|�}�~����ԕ�O���!�#L�#y�#m37���p3�5�������v3qw3Fy3{3}|3�R~3�3�3��3��3��Q�3Lj3'�3��3Œ3=�3��z�3������3��h�3U�3ZA0��3,�3#�3{x�3f�3X�3��3a�3f�3��3��3��3j	4`
4��"��"�"�4��H	�.3��v7�4z13��<4�c�?�>+ �C��37���3#�#>'���3ߢ��-���G2�3x4t5}6�7�8:L<4�4e@JC�A�DAP��#ARiS�43|G�I*KYL{M�N�O>_���\0
�
���3�3IM#4W4�4�4�54�64�W4�4�4MY4nZ4[44�4$4�74�84�Z4-\4Y]4�4�4�]4�^4344I_4=`4/a4b4h4�4�94�:4�b4{d4�e4�f4& 4�4�g4i4[j4�!4�;4�<4'k4�l4<m4�"4�=4�>4�m4�?4�n4��3@4^A4�#4fB4�o4�p4i$4�&4(4�%4jC4�D4�E4�q49s4�t4gv4x4q)4D,4�*4Ey4Yz4v{4R/4�-4�|4�}4�!�04�G4AH424��Q\3�14����Ƥ3k]3�_3q24b3V�3 44d3!I4�I4����*�3��3��#��#��#��#��#��#Q�#2g3�h3�i3]j3k3�k3Kl3�"˾"��3ܴQ�.�3-�3�#U�#��#B�3��3��3��3��3��3��3��3����������T����i�����3I�3e�3���3��3x�PB�P��P
�Pm�P��P4�P��P��P
5Q`�P�Pq�X��X��X��Xv@Q�Xt�X��U�U��U��X
XQP�X.�U�U�X�U��U��XBWE�X4&W|{Qc�X<�XτQ��Xx�X�X��X��Xn�X�AWMYLW Y�MW�NW�OWQW�S��UY�Y�	Y��U��Q�
Yx
Y��UD�U2YKY��Q�UI�U�Y#�Q�V��Q
V�V�rW@sW�sW�Q�tWBTVYY�T�T�Y�{W�VUYV�T�V�V YpT	T�	TmTH
T�VT�T�TeT� Y?V%"Y�Ve V�"YM$Y�$YR�RHR�R�%Y�$V%T &Y�%V2&VuT�&Y�T�&Vs'YT��W�'V�(Y|TY)YW(V�T�)Y �W�*Yz+YF,Y-Y�(V�-Y��W�.Yh*VR/Y\0Y�+V�,V;1Y�.V�2Y�/V�0V"3Y��WE�Wr�W� R�!R(T�"R�3Y�+T�,Ta-Te5Y�6Y�8Y��WC,R/T�/T�-R&0T<1T��WA�W�W`:YHV�TJV;Y>Y�@Y?VVPRYDY�UR�\V�EYSIY�^RdV�hV�KY�OY�PYrtV��WLTY�wV�UYa�W�ZY�V��V�\Y_Y/aY�cYd�V�eY��WohY>lY`X�nYOqY�XsYuY��V�wYm{Yg$X`�Rq�V-&X�~Y��V=�P��P��P`�P2�3}�3
�3��3��3��3�3q�3�3��3P�3m�3��-F/i1&2�3>5�6+:�;�<��3��3a�3��3x�3��3��3��3�3S�3*��Z�\�^�3��3oqv�wvx�B0y�y,z����3�zWq3�s3�t3q�38�3ܠ"<4�4�4�4�4��"�44o4-{�|�}�~����ԕ�O���!�#L�#y�#m37���p3�5�������v3qw3Fy3{3}|3�R~3�3�3��3��3��ݘ3��3��3�3F�3�3Q�3Lj3'�3��3Œ3=�3��z�3������3��"�3h�3U�3ZA0��3,�3#�3{x�3f�3X�3��3��3a�3f�3��3��3��3j	4`
4��"��"�"�4��H	�.3��v7=03z13�23��<4^�#�c�?�33P43�>+ �C��37���3#�#���3ߢ��-���G2�3x4t5}6�7�8:�=?e@�A�43|G�I*KYL{M�N�O���\0
�
���3�3IM#ı363&83�63=93B:3J;3��3��3L�30<3�=3�<3�>3�?3�@3y�3�3z�3b�3@A3�C3IB3�D3�E3'G3�3F�30�3�30H3%I39J3	�3Ը3��3h�3PK3VM3/L3fN3dO3eP3�3"�3�3��3DQ34S3R39T3,U3"V3��3D�3�V3��3��3�W3�X3�Y3�Z3f�3k�3�!(�3�[3)�3��3��Q\3����Ƥ3k]3�_3b3V�3d3�e3����*�3��3��#��#��#��#��#��#Q�#2g3�h3�i3]j3k3�k3Kl3�"˾"��3ܴQ�.�3-�3�#U�#��#�3i�3��3̬3�3B�3��3��3��3��3��3��3��3	$����������T����i�����3I�3e�3���3��3B�PK�S��P
�Pm�P��P4�P
5Q�%UV&U�&U]'U�(U�*U��X�,U�.U60U+1U2U�2Ut3UN4U5U�5U(7U�8U]:U<Ui=U>U�>U?U�?UAU�BU�CUs�XKEUFU`�P�P�FU��SȧS3�SX�SְSֳS��S�SjGU��S�NU��X��Xy�S��S8�XK�X�SU��X��Sb�X��S�S~�S�S�STBT�X�T�TpT	T�TeT�T;T�TR�R�R%T�TuT�T�T|T&T�T� TL!T"T#T��XW'T� R�!R(T��T�+T�,Ta-T�XU8ZU0.TN�T/T�[U�/T�-R<1T�1Tb2T�2T�X�Xn�X��X�\UNCT�ET�IT]aU�KT�NT��X�VT��X�]T�cT�fTikT�mT�oT�qT�sT!uT�wT�yT��X��X��XZ�X��XBeUlU�T��T��T��T]�T>�T��T{�T"pU�#3�l
�m
�n
[o
&p
�p
�q
r
�r
ns
t
�t
xu
8v
�v
}�ĵ����2X�2���33�~3�3�K3�3�
3s3��38 3��o����� �2B��U�/�	
z
�
M� k
��A�����R$���qL'�L��� �!O"#�#�$|%P&'�'�(�)w*d+Q,
�
��
}�
8�
��
��
j�
 �
��
��
L�
�
��
n�
&�
��
��
��
>�
��
��
T�
��
�
��
��
x�
��
��
L�
��
��
V�
�
��
O�
��
��
H�
��
��
U�
��
��
H�
��
��
R�
�
��
�
��
(�
b�
��
��
��
Z�
��
�
��
E�
��
�
��
�
O�
�
<��h3
��
��g!��U��a!0#�$�&�(M*,�-u/+154�4�2�3�6�7�516-89�9�:�;<�<=�=>>;B�EDI�L�OjS�W�[�_�c�g�kso�rtv�y}��ۄ��ܐϔߘs��ݣn��\�ر�w
�x
2y
�y
�y
>z
�|
�~
1�
��
ԅ
T�
��
�
��
��
,�
��
:�
�
A�
��
>�
n�
ߚ
z�
��
4�
��
�
R�
��
�
+�
]�
*�
��
��
o�
M�
�

�
��
��
_�
3�
�
۷
��
'�
͹
��
��
��
�
z�
�
`�
�
�G�M�2hU�VDY�[(^�`
c�eqh�jIo�lRqs��2ju�v
xZy�z�{P}�~�@����6�N�G�~����%�]���͗�=�u������������u�������$���w���W���F�~����,�f
�
�

N	
�
�

�
g
�
=
�

~
� 
T#
�%
*(
�*
-
z.
�/
n1
�2
e4
�5
47
$8
�9
;
�<
>
�?
�C
A
�E
,G
LH
dI
�J
�P
�V
)\
b
(h
m
�r
Tx
�}
�
��
g�
��
]�
%�
��
Ԣ
ۣ
�
�
@�
J�
T�
��
��
��
ĭ
��
&�
�
�
�
��
�
�
&�
I�
:�
+�
E�
_�
P�
A�
[�
�
u�
$3.����F�����_��Ųx�*�ܴ��@����V����O��z����<�ҽh�����)���S��}����=%3�'3�)3,3<����S�(���������(�R�|���0���E�i�p�a.3����:
G<
E>
�>
H?
�?
z@
%A
�A
kB
C
�C
�D
YE
/F
�F
SG
 H
�H
�I
^J
K
�K
uL
"M
�M
�N
1O
�O
 Q
=R
bS
�T
�U
�V
�W

Y
2Z
�Z
[
�[
�[
x\
�\
i]
8_
a
�b
�f
S�2�h
�i
oj
k
�k
�-�.D�
��
j�
��
[�
��
�
��
��
��
f�
3�
��
��
��
�#3��
�
��
�
��
/�
?�
��
j�
��
6�
t�
q�
n�
k�
h�
h�
h�
h�
h�
��
�
�
�
��
W�
��C�?�j:���	��G�� �#�+�3�6�4�2�0�V��Q�S�V � ^!�!f"�"i#�#g$�$e%�%c&�&�'0(�(�)�)�*�*|+,�,-�-.�./�/0�01�1
2�233�3�4.5�506�637�7;8�8C9�9F:�:D;�;B<�<@=�=f>
?�?a@�@cA�AfB�BnC�CvD�DyE�EwF�FuG�GsH�H�I@J�J�KL�LM�MN�N%O�O-P�P+Q�Q)R�R'S�S%T�TsUV�V�Y�[\�\�\�]^�^6`�cf�hVi�iLj�jTk�k\l�ldm�mbn�n`o�o^p�p\qr�rSs�s�tuuv�vw�wx�xy�yz�z{�{|�|6}�}�~1�3���6���>�‚F�ʃI�ȄG�ƅE�ĆC�‡i����d��f��i��q���y���|���z���x���v�����C����"������ ���(���0���.���,���*���(�Ϟv��ʠU�̡K�ϢS�ף[�ߤc��a��_�ާ]�ܨ[����R�yh5		�		�
	�	�	�
	�		Z	�	b	�	7	�	�	�	U	�	O	�	/	�		�	.	�	e 	!	�!	"	o"	#	�#	$	�$	�%	W&	'	�'	I(	�(	[)	�)	U*	�*	P+	
,	�,	[-	.	�.	/	�/	�0	�1	3	@4	i5	�6	�7	�7	�8	:	5;	^<	�=	�=	+>	�>	?	�?		@	�@	�@	C	�C	�D	�E	�F	�G	nH	^I	MJ	>K	2L	�M	�N	AP	�Q	�S	�T	V	bW	�X	&Z	�[	�\	S^	�`	7b	�c	>e	sf	�g	�j	�k	Rm	�n		p	q	�r	ev	�y	K}	��	�	P�	��	&�	Ǐ	�	��	��	M�	h�	
�	�	ڕ	��	��	�	�D0�	��	��	�;�>��2�	��	X�	$�	ݳ	?�	�	��2�	��	~�	T�	*�	�	�	��	��	��	j�	R�	:�	"�	
�	��	��	��	��	+�	��	^�	&�	�	B�	��	�	2�	�
�

H
#

�

G
�
s

�
X
�
�
,
�
h
�
�
K
�
�
)
*
<
H
�
l


�
C
�
}
$
�
�
�
#
�&
*
�-
�0
G4
�7
V�����I��rT���[F `�������"
�H�@�x�����Y���$����26��U�R�q���j��2�#��2ϯ2��2W�2��2ּ21�2��2,��2�21/g�2��2�2P2:�2��2u5��2N�26F�FG�G��2��2��2��2��2,�2��2��2:Q�Q�R	TU�U�V"W�WhX
Y<Zn[�\�]`�ahcBd�egaj�k�mMo$pdq&tuv	w z�|#�Q�����x�h�h����@���c����6��]�4�2��"��X���ĩܬS�ϱ^�޶M�ڻg�&������8�G����%��������������#����O�&��"�g�����2�s��	%	o	�	�	�	
	1�1ڬ1M�1��1i2�2�2
2�272�2:2�2s22�2C2�2[2*	2�	2A
2�
2�2[2
2�
2]2�:�2�2+2�>�2�2,2'2�2t�1�1��1E�2�2nG�2�2�2�2�R12~S�2S2U2 2%!2�!2�"2~�1Wk2"l2$m2�m2o2p2q2�q2�r2�s2�t2�u2�v2�w2�x2�y2�z2d{2b|2W}2E~2�1��1>�1�1��1P�1�1��1L�1��1��1��1�1
�1ķ1N�1ظ1��1p�1>�1�1��1Y�1M�1��1��1K�1&�1��1��1��1�1�i�1��1�1
�1��1��1��1�#2i$2M%2�'2�(2�)2�+2A-2�~22:�2�2��2S�2�2��2~�2O�2q`1�EGa1b17J�b1�c1{d1Xe1�e1^f1�f1tg1h1wh1Oi1j1�j1Vk1�k1{l1!m1/n1
o1�o1Ap1�p1�q1,r1�r1ls1�s1�t1%u1�u16v1�v1��.212�22�325w1Z�1�w1�x1~u
vm42�v�42�52�1X62wy1}z1p{1}|1}}1�2�2ǔ281S�1�1U�1%�1{�1	�1H�13�1�1�1��1��1��1��1��1�1�1NJ1��1i�1)�1��1N�1{72X82%�1�1�1��1,�1��1ː1^�1��1��1��1��1�1_�1N�1ݕ&�?���2>�1���w�2J�2�2�_�I����2�2Ţ2\�s�=��
�����2��2��2��2+�2��2��2��2��2��2��I�a�2����20�2q�2��2+�2��2����ߜ2k�2����A92�1��11�2�2��2v�2K�2��2��2G�2�2��2G�2�2��2O�2��2ސ2:2�:2M;2�;2�<2l=2ȃ3�2��1��1��1D�1��1��1u�1��1��2c�2��1��1]�1���1��1e�1S�1�1�1�1
?2�?2@2�@2A2�A2
B2�B2C2�C2�D2]E2F2�F2�G2�H2�I2K2�K2�L2M�SM2ϖ	N2Q��N2ՙuO2\�.P2��P2l��Q2�YR2�S2nT2VU2 V2�W2�X2bY2�Y2"[2�[2B�PK�S��P
�Pm�P��P4�P
5Q`�P�P��SȧS3�SX�SְSֳS��S�Sg�S��Sj�S��X��Xy�S��S8�XK�X��X��X��Sb�X��S�S~�S�S�STBT�X�T�TpT	T�TeT�T;T�TR�R�R%T�TuT�T�T|T&T�T� TL!T"T#T��XW'T� R�!R(T�(T�+T�,Ta-T0.TC,R/T�/T�-R<1T�1Tb2T�2T�X�Xn�X��X?TNCT�ET�IT�KT�NT��X�VT��X�]T�cT�fTikT�mT�oT�qT�sT!uT�wT�yT��X��X��XZ�X��X�T��T��T��T]�T>�T��T{�T���g2'\2�\27]2�]2G^2�^2s_2`2�`2,a2�a2:b2�b2Zc2�c2ld20f2�f2���d2�e2ӱj�kg2�Fh2`���/j2]�ۻ1�1ڬ1M�1��1i2�2�2
2�272�2:2�2s22�2C2�2[2*	2�	2A
2�
2�2[2
2�
2]2�:�2�2+2�>�2�2,2'2�2t�1�1��1E�2�2nG�2�2�2�2�R12~S�2S2U2 2%!2�!2�"2~�1Wk2"l2$m2�m2o2p2q2�q2�r2�s2�t2�u2�v2�w2�x2�y2�z2d{2b|2W}2E~2�1��1>�1�1��1P�1�1��1L�1��1��1��1�1
�1ķ1N�1ظ1��1p�1>�1�1��1Y�1M�1��1��1K�1&�1��1��1��1�1�i�1��1�1
�1��1��1��1�#2i$2M%2�'2�(2�)2�+2A-2�~22:�2�2��2S�2�2��2~�2O�2q`1�EGa1b17J�b1�c1{d1Xe1�e1^f1�f1tg1h1wh1Oi1j1�j1Vk1�k1{l1!m1/n1
o1�o1Ap1�p1�q1,r1�r1ls1�s1�t1%u1�u16v1�v1��.212�22�325w1Z�1�w1�x1~u
vm42�v�42�52�1X62wy1}z1p{1}|1}}181S�1�1U�1%�1{�1	�1H�13�1�1�1��1��1��1��1��1�1�1NJ1��1i�1)�1��1N�1{72X82%�1�1�1��1,�1��1ː1^�1��1��1��1��1�1_�1N�1ݕ��1�1��1��1��1��1?�1��1��1�1��1o�1>�1
�1�1��1�1�1
�1G�1��1�1�1��1[�1�1�1!�1��1�1��1ߨ1`�1��1�1K�1��1
�1��1ͫ1Z�1A92�1��11�2�2��2v�2K�2��2��2G�2�2��2G�2�2��2O�2��2ސ2:2�:2M;2�;2�<2l=2ȃ�1��1��1��1D�1��1��1u�1��1��2c�2��1��1]�1���1��1e�1S�1�1�1�1
?2�?2@2�@2A2�A2
B2�B2C2�C2�D2]E2F2�F2�G2�H2�I2K2�K2�L2M�SM2ϖ	N2Q��N2ՙuO2\�.P2��P2l��Q2�YR2�S2nT2VU2 V2�W2�X2bY2�Y2"[2�[2�1l1G1�1i1�1�11�161�1_1�1%11w11�11�1M1�1131 1� 1�!1�"1�#1�$1n%1�'1�)1�+1),1�,1-1�-1'.1�.1O/1�/1r11�11�21&31Y01�01�31@41�41K51�51q61�6181�81�91�:1l;1�<1�=1^>15?1@1�A1c0P�0��01~1�1b1�1I1�1�1 1�1c1�1l1�1M1�121�1	1l	1�	1.
1�
1�
1b1�1;1�1
1�
1�
1X1�1"1�18�/��/�/�l'f�/�/y�/��/��/i�/�/��/��/W�/��//�/��/�/�/��/N�/��/&�/��/�/��/��/��/D�/�/s�/	�/m�/�/.�/��/��/��/5�/��/9�/��/60�0,0�00g0�0m0�0s0�0k0�0ڛ/�/ל/��/Q�/�/˟/��/K�/�/ˢ/d�/��/��/�/��/A�/֦/k�/�/��/u�/,�/�/��/T�/�/ȭ/��/,�/ϯ/r�/�/��/^�/�/��/�C1�D1PE1P�/�/��/0�/ж/s�/�/��/F1\�/��/��/9�/ػ/z�/�/��/�F1�G1gH1"I1�I1�J1YK1L1�L1`�/�/Կ/��/H�/�/��/�/<�/��/��/�M12�/��/��/9�/��/�M1ON1�/x�/�N1��/&O1�O1�O1EP1�P1Q1mQ1�Q1:R1�R1�S1T1�T11U1�U1.V1]�/��/2�/��/�/g�/�V1��/3�/W1��/?�/��/��/U�/
�/��/t�/�W1)�/��/��/<�/��/��/U�/	�/|X1MY1��/c�/	�/��/U�/��/��/P�/��/��/C�/��/��/5�/��/��/Z1�Z1�[1x\1A]1
^1�^1�_1-�/��/��/��/M�/�/��/��/�h'�^'�X0��.)Y0Z0�Z0�[0��.s�.R�.1�.��.��.��.I]0��.��.T�.��.��.!�.��.M�.�.V�.�.Z�.��.U/�/W/o�.y`0 a0�c0-d0�]0/�/i/�d0/�/�/�/t	/k
/b/Y/�e0sf0lg0eh0P
/'/�/�/�/�/l/^i0W/Gj0�j0k0�k0�k0Fl0�l0m0Wm0�m0"n0�n0Y/Z/�n0;/!/�o0/�/�p0�/�q0&r0�r0Js0�s0`t0�t0|u0v0�v0w0�w0>x0�x0Ry0�y0lz0�z0F/�/6/�/|{0�|0|0�/�/p /��%x}00&~0�"/n#/E$/%/�%/�&/{'/(/�(/t)/#*/�*/�+/h,/5-/�-/�0��0��%v�0�0C�0�0�0�0�0��0�%��0u�0��0��0��0��0��0��0Б0��0��0)�0�2/r3/N4/�&*5/�	&7/��0
6/��0ɘ0ҙ0��0��08/z&�8/�9/�&y:/ܚ0Û0^;/</�>/��0�</�=/��0q?/)@/�@/�A/+B/�B/�C/dD/�D/�E/MF/�F/uG/H/�H/�I/�J/RK/L/ݠ0�0�L/l�0��0�M/NN/O/.P/�P/�Q/��0��0TR/�R/uS/;T/�T/DU/��0��0�U/V�0��0��0E�0�0��0*�0Ǫ0f�0�0��0E�0�0��0)�0ϯ0k�0	�07V/�V/iX/Y/��0�0S�0�[/c\/']/^�&�0��0��0�_/}`/ja/Wb/Kc/d/�d/�e/nf/3g/�g/�h/�i/j/bk/>l/l�0g�0��&M�0{�&�0��0޻0�0�0��0��&տ0�0�0��0�0"�0�0@�0l�0M�0e�0�0�n/�o/�p/��&�q/��&�s/��0�r/}�0��0�0��0��0�t/��&�u/�v/<�&zw/>�0;�0ux/Iy/�{/'�0z/�z/%�0�|/�}/}~/I/�/��/_�/�/��/c�/7�/�/(�/��/ȇ/{�/y�/R�/��0u�03�/&�0T�0^�/h�0g�09�/�/Ǝ/g�/�/��/H�/f�03�0�0��0��0�/8�0D�/͒/��0{�0X�0�0��0{�0:�0��0�0��0	�0r�0��0\�0��0��0A�0��0�0��0��0w�0��0_�0��0M�0��0&�0��0��0��0:�0��0>�0��0�0��0�0��0��0L�0��0*�0��/�/��0Q�/Ք/8�/�0_�0��/�0�/;�0t�/��0�/��/%�0��0�0~�0�0b�0�0L�0�0S�0�0Z�0�0��/_�/ɘ/�)�0�+�,�f���F/i1&2�3>5�6+:�;�<LC�F0
F�G0�F;I0�H�J0|JWL�K0M0;OqN0�P�O0`S�TDV�Z�\�]N^�^�b%joqv�wvx�B0y�y,z���zI�Q�����d�v���x�
3���#-{<|�|�}�~��������(�ď\����ԕ�O���!�#L�#y�#t�7���/�#��#9�#��#L�#�#��#0�0 0�!0�"0�#0�70�$0�%0�'0�)0+0�,0�.0�00|20Q40#6090�90S:0Z;0`�P�����������2�������;0�=0e���ZA0_�v��x�B�8��P0X���ސ��$$�Q�ړ���q�E��~��H���v����H	A��DΧ#�#��0�0�4^�#��c9
0A�0��>+ !I��a�#�#>'��ߢ��-���G2�3x4t5}6�70@0�8:C;L<e@JC�A�D�#��#۬#F|G�I*KYL{M�N�O�U��>C0�\0
�
�vPpx�����,�S�!�#�#��#w�#i�#��#� �#8�#��#�#R�#r�#E�#^�#�#R$$�$�$Y�#��#"c0z[0�{}y0B5�0E0��#z�Z�0	���"$3F0����4�ى��"��Q0αֲr�ܴQ��T0��w����D���r�	$����������T����i�������~��=��U0�*(��P
�P�,Xm�P�%UV&U�,X��V��VD�V�V��Vq�Vn�Va-X+1Ut3UN4UO.X/XB0X�1X�2Xu3X4X�4X`�P�Pi5X��P�5X�7XI;X_>X�?XAXKBX�CX�DX�FX�GX�HX>JX�KXILXMX�MXPNXOX�PX�QX�RX�SXVUXuVXoS�WX�S�S~S1S�SWS�S�SVXXYX�YXtZX0[X�[Xo	S�\X�	S&]X�]XD^X_X�_X�`X�aXqbXPcXOdX=eX�S,fX�SQS�S�SgX�gX�hXbiX7jX�jX�kXhlXmX�mX�nX^oXpX�pX�S4S�SOSpqX�qX�rX4 S^sX� S%!SktX�tXxuXvX�vX�wXxyX�zX�|X~XXX��X�X|1Q>�X�X��Vc08�/��/�/�l'f�/�/y�/��/��/i�/�/��/��/W�/��//�/��/�/�/��/N�/��/&�/��/�/��/��/��/D�/�/s�/	�/m�/�/.�/��/��/��/5�/��/9�/��/60�0,0�00g0�0m0�0s0�0k0�0ڛ/�/ל/��/Q�/�/˟/��/K�/�/ˢ/d�/��/��/�/��/A�/֦/k�/�/��/u�/,�/�/��/T�/�/ȭ/��/,�/ϯ/r�/�/��/^�/�/��/P�/�/��/0�/ж/s�/�/��/\�/��/��/9�/ػ/z�/�/��/`�/�/Կ/��/H�/�/��/�/<�/��/��/2�/��/��/9�/��/�/x�/��/]�/��/2�/��/�/g�/��/3�/��/?�/��/��/U�/
�/��/t�/)�/��/��/<�/��/��/U�/	�/��/c�/	�/��/U�/��/��/P�/��/��/C�/��/��/5�/��/��/-�/��/��/��/M�/�/��/��/��.��.��.s�.R�.1�.��.��.��.��.��.T�.��.��.!�.��.M�.�.V�.�.Z�.��.U/�/W/o�./�/i//�/�/�/t	/k
/b/Y/P
/'/�/�/�/�/l/W/Y/Z/;/!//�/�/F/�/�/6/�/�/F/�/�/p /1!/�!/�"/n#/E$/%/�%/�&/{'/(/�(/t)/#*/�*/�+/h,/5-/�-/�./�//�0/�1/�2/r3/N4/*5/7/
6/8/�8/�9/y:/^;/</�>/�</�=/q?/)@/�@/�A/+B/�B/�C/dD/�D/�E/MF/�F/uG/H/�H/�I/�J/RK/L/�L/�M/NN/O/.P/�P/�Q/TR/�R/uS/;T/�T/DU/�U/7V/�V/�W/iX/Y/�Y/�Z/�[/c\/']/�]/�^/�_/}`/ja/Wb/Kc/d/�d/�e/nf/3g/�g/�h/�i/j/bk/>l/6m/
n/�n/�o/�p/�q/�s/�r/�t/�u/�v/zw/ux/Iy/�{/z/�z/�|/�}/}~/I/�/��/_�/�/��/c�/7�/�/(�/��/ȇ/{�/y�/R�/3�/^�/9�/�/Ǝ/g�/�/��/H�/�/D�/͒/��/�/Q�/Ք/8�/��/�/t�/�/��/��/_�/ɘ/̴.�.#U�.Ͷ.��.;�.=�.�.��.Ӻ.Z�.�.�.��.@�.�.��.)�.��.6�.��.2hc�.��.�0�0��#B�#<	0�
0�0�#��#�0��.0�.ç.��.m�.�.��.p�..�.�.B�.�.د.��.�.�{�#&�#��#.0c�.l�.c�.&�.(��G�]����R��!a%G����������3�.�.3�.�.��.`�.e�.�.�.��.�.��.-�.��.E�.S�SsTU�.��.2�.H�.\�.o�.��.��.��.��.ӟ.�.��.��.H�.�.z�.ܰ.Z�.��.`t�qڱ.�4w{�.�!�.��.��.Q�.p�=�.P���.��.��.=�.n�.��.��.��.P�.�.��.s�.*�.��.4���.3�.��.�.��.��.��..�.��.6�.��.��.
�.�.Gr-�s-At-0�-0�-�-�-�-�-7�-@�-$�-��-d�-W�-��-Mm.ؘ-�-��-��-{�-�n.'�-H�-ao-͡-�t-��-Hx.�y.{.w|.�}.B.��.�.U�.��-��,�k.��-Ƈ-��-�u-=x-#{-�o.'�-�q.<�-p�-f�-(�-5�.~�.Ӈ.�.˵,�-��-Ȯ-1�-s�-J�,�,�r.[s.h.Vj.�t. v.��,��,l�,��,��,��,��,��,�,��,�-��-��-��,�,K�,��,ց-ɂ-~-��,�-��-��-��-��-N�-��-P�-Q�-;�- �-�-��-��-
�-2�-��-��-��-��-}�-6�-��-��-��-��-�-X�-<.	.�.�./.^
.v.�.�.?.�.�..�.+.2." .
".	$.�%.(.:*.o,.�..�0.I3.�5.R8.-:.�;.�=.�?.B.�C.WE.{G.�I.�K.YM.�N.�P.�R.�T.�V.YX.�Z.�\.�^.�`.�a.d.f.��.۵--�-g�-wi.{w.��-Gr-�s-At-0�-0�-�-�-�-�-7�-@�-$�-��-d�-W�-��-W�-ؘ-�-��,��-{�-J�-'�-H�-ao-͡-�t-̲,��-��,�-��-Ƈ-��-�u-=x-#{-j�-'�-<�-p�-f�-(�-˵,�-��-Ȯ-1�-s�-J�,�,��-�-p-��-�-��,��,l�,��,��,��,��,��,�,��,�-��-��-��,�,K�,��,ց-ɂ-~-��,v�,��,��,��,��,��,�,��,�,��,��,5�,'�,��,~�,��,M�,��,��,~�,W�,�,P�,8�,t�,u�,�,�-j-1-�-�-�
-L-�--\-�-@-/-]-W-T- -$-�!-}&-�(-+---�1-	/-e4-�6-�8-�:-�>-[<--A-yC-F-�G-oK-5I-�M-�O-[Q-U-�R-MW-dY-�Z-�^-e\-�`-�b-dd-�e-xg- k-�h-Rm-�-��-��-۵--�-g�-gq-�-��-��PB�P
�Pm�P
5Q�%UV&U�&U��V��VD�V��V��V�V��V��V��Vn�Vu�V~�V`�V��Vq�Vn�V60U5uU2U�2UvU�vU5U�V�Vf�V��V=�V��V?�V�~UkU9�U�U؁U>U�>U?U��V��V��V_�Ul�U��U��U%�U�U`�P�P��U��V��VP�V_�Vp�Vv@QwW��U�W�W�UW�	W�W�W.�UPWwW�U��U�WBW#W4&W�)W5-W\�QτQZ0W+2W�4W�7W.;Wa>W�AWIFWIWLW��U�MW�NW�OWQW�S��U��Q��U�QWϯQ��QD�UnTWuXW�]WaW�dWjW�mW�Q��Q
V�V�rW@sW�sW�Q�tWBTV�uW�wW�T�yW�{W�V�|WV�T�V�V>}WpT	T�~Wt�Wy�Wx�WTu�W�T�TeT1�W?V��W�Ve VE�WR�R�"V�#VHR�R�$V%T�%V2&VuTT�T�&V@TT��W�'V|TW(V`�W� T �W�WȏW��W�WڑWԒW�)Vh*V��W�W*�W�+V�,Vf�W�.V�/V�0V��WE�Wr�W5�W"�W� R�!R(T>3V.�W͟W1�W�+T�,Ta-T��W�;VK�W�W.@V�@V��Wm�W/T�[U�W�/T�-R&0T<1T��WA�W�W��W�2RH�WG�WHVh�W��Wz�W��W<�W?VV�W��W�UR�\V��W?�W��W<�WdV�hV��Wm�W��W"�Wm�W��WO�W��W��Wa�W݆R�V��V��W��W-�Wr�W��Ww�W��W�X�Xm
X�X`XiX�X2X�X�VX��V>�T� Xg$X`�Rq�V-&X�)X��V=�P��P`�P��V&�V�V�c)\d)Vf)7������
�����o�Ph)����t��l�۪;���6���1���;���7j)m)ɴ1�����r)p)�t)&u)�u)�u)[v)�v)*w)�w)�w)�z)@})�)��)��)��)��)]�)��)�)���)Ҏ);�)��)>�)}�)��)D�)��)��)��)��)��)�) �)^�)٢)T�)��)����m���a�)K���"�����i���F�ϧ)���|�)ެ)E�)��)�)~�)�)F�)��)C�)��)��)Z�)�)��)��);�)��)V�)��)I�)��){�)7�)��)��)��)�)o�)h�)� `�)P�)��)D�)��)��)d�)��)T�)��)D�)��)N�)��)X�)��)>�)��)B�)��)2�)��)�)��)�)V �  ��)͚,ܕ"3�"F�,қ,^�,�,v�,ĝ,�,�,ȟ,ǡ,N�,��,ͥ,�,h�,��,F�,ҫ,^�,��,��,��,V)W)X)�X)�Y)�Z)�[)d\)s])�^)�_)�`)�a)�b)G�+��+i�+
�+��+i�+��+v�+��+��+@�+��+�+r�+��+N�+��+D�+��+�+��+��+H�+��+�+��+�+��+)�+��+A�+�+��+��+@�+�+n�+?�+�+y�+N�+�+~�+�+��+T�++�+�+��+�+m�+�+8,j,�,O,
,�,�,Q,�,W,�,y",�$,q',^�';�'�'��'��'W�'��'�'��'��'��'��'��'[�'�'��'��'��'��'��'��'��'��'��'��'�'��'�'A�'%�'^�'�(�(.("(p(T(�(	(�	(�
(�(
(@(f(�(�(�(n(�(*(K(�(s(�(H(y(!(%"(�#(M%(�&(
((/)(�*(�+(--(�.(%0(�1(�2(44(I5(�6(Y8(�9(�:(<(;=(->(,?(�?(�@(�A(oB(6C(D(E(F(�F(xG(fH(JI(�K(yN(�P(�S(�U(�X(_[(�\(m](H^(Z_(Ip(�q(s(it(�u(�v(x(y(;z(L{(p|(�}(~(�`(�e(�a(�b(Kc(
d(e(�f(�g(Zh(<i(�i(tj(\k(3l(#m(Nn(ro(�(F�(7�(��(��(׉(�(�(��(K�(��(��('�(��(/�(�(	�(ގ(��(��(k�(%�(��(��(l�(�(��(ɝ(��(V�(�(ʤ(G�(�('�(�(��(x�(N�((�(��(ܯ(��(��(H�(��(��(��(4�(�(ζ(��(e�(5�(�(պ(��(��(��(��(��(��(f�(:�(�(��(��(��(d�(8�(��(��(��(Q�(�(��(��()�(��(�(��(�(��(��ݒ�(��(��(s�(=�(
�(��(��(z�(N�("�(��(��(��(u�(L�( �(��(��(s�(9�(�(��(��(^�(��(R�(��(D�(��(>�(��(6�(��(5�(��(7�(��(=�(�(��(�(��(��(��(A�(G)I)J)S)X)\)@)�)b)�)�)�)�)�)h!)�")�#)�$)�%)')y+)�/)�3)�7)�;)
@)zA)�B)�C)E)�E)"G)�G)dH)I)�I)GJ)�J)nK)�K)}L)M)�M)�N)tO)N)�N)�O)aP)�P)NQ)�Q)YR)�R)xS)T)�T)JU)�
++@+N+X
+�+�++�+N+�++G!+x#++&+�(+�++D.+�0+�3+54+�4+?5+�5+L6+�6+\7+�7+i8+�8+s9+�9+�:+
;+�;+<+�<+4=+�=+L>+�>+^?+�?+p@+�@+ZA+�A+VB+�B+dC+�C+uD+�D+�E+F+�F+G+�G+H+�H+#I+�I+)J+�J+/K+�K+=L+�L+KM+�M+_N+�N+sO+�O+�P+Q+�Q+�Q+TR+�R+$S+�S+�S+\T+�T+6U+�U+V+qV+�V+IW+�W+7X+�X+9Y+�Y+>Z+�Z+C[+�[+E\+�\+A]+�]+=^+�^+?_+�_+-`+�`+-a+�a+?b+�b+=c+�c+;d+�d+?e+�e+=f+�f+?g+�g+Eh+�h+Ni+�i+Wj+�j+]k+2l+m+�m+�n+�o+fp+<q+r+[s+�t+�u+0w+zx+�y+{+U|+�|+�|+~++�+`�'�Aɂ'b�'ք'i�'��'��'g�'�'��'��'��'�'��'�'A�'��''�'��'�'��'�'x�'�'^�'=�')�'�'��'z�'��'p�'�'`�'՘'J�'��'@�'H�'I�'E�'F�'��'B�'2�'��'4�'5�'1�'2�'/�'�'��'�'�'�'�'�'7�'T�'̭'&�'��'�'j�'د'@�'��'&�'��'�'v�'�'L�'��'"�'��'�'��'�'P�'��'�'��'��'\�'Ƹ'(�'��'ܹ'B�'��'��'q�'û'�'��'�'y�'�'��'��'U�'��'�'x�'�'&�'��'��'k�'��'�'��'>�'��'�'d�'��'@�'��'�'{�'��'<�'��'�'w�'��'d�'��'6�'��':�'��',�'��'�'v�'��'��'�'o�'��'5�'��'��']�'��'W�'��'>�'��'�'��':�'(�'�'i�'��'�'{�'��'K�'��'�'c�'��'[�'��'U�'��'/�'��'�'��'��'Z�'��'^�'��'j�'��'`�'��'�+W�+��+2�+K�+څ+g�+�+��+�+�+ى+��+I�+Ջ+J�+��+4�+��+�+��+��+e�+@�+(�+�+��+m�+�+[�+Ҕ+C�+��+%�+��+�+�+�+�+ۚ+��+ś+��+ �+��+��+v�+e�+P�+)�+��+#�+�+��+�+զ+A�+��+�+��+��+f�+ͩ+�+�+w�+ͬ+/�+��+��+e�+Ů+-�+��+��+Y�+��+�+{�+ڱ+?�+��+��+a�+dz+3�+��+�+L�+��+��+V�+¶+'�+��+�+G�+��+�+R�+��+�+]�+��+'�+{�+޻+H�+��+��+a�+̽+"�+��+�+Q�+��+4�+��+�+e�+�+a�+��+K�+��+�+��+��+��+	�+r�+��+0�+��+��+N�+��+0�+��+�+z�+��+t�+�+n�+��+�+t�+��+<�+��+=�+��+-�+��+��+��+n�+��+B�+��+�+��+�+��+��+V�+��+R�+��+V�+��+D�+��+k'6l'��l'�l'%�Em'�m'!n'�n'-o'�o'Np'�p''q'�q'Ar'�r'�s'rt'Gu'%v'w'�w'F2!��x'(y'�y'�y'7z'�z'�z'$r�n�j�[{'�|'�}'�~'�'G�'��'*,�+,�,,|-,@.,7/,�/,�0,x2,=3,4,�4,�5,�5,86,�6,)7,�7,:8,�8,-9,�9,r:,�:,=;,�;,�;,d<,
=,�=,�>,^?,
@,�@,A,�A,�B,YC,(D,E,F,TG,tJ,�M,�P,�S,V,�W,�Y,�Z,H[,O\,],�],�^,r_,"`,�`,�a,Bb,Kc,�c,e,�e,�e,Uf,g,Xg,�g,bh,�h,k,Im,�o,�r,�r,fs,�s,@t,�t,u,}u,�u,?v,�v,�v,9w,�w,�w,Zx,�x,y,�5"@6"Hy,�y,!z,�z,�z,J{,�{,|,�|,�|,]},�},~,|~,�~,D,�,�,��,�,��,"�,��,�,p�,ƃ,�,s�,ʄ,(�,��,؅,1�,~�,؆,&�,t�,RB"�B"�B"IC"�C"�C"ZD"�D"Ç,(�,��,�,g�,͉,��,��,)�,�cfd�)�)��)b�)�)��)`�)
�)��)_�)
�)��)`�)�)��)a�)z�)�)��)��)%�)t�)��)�)d�)��)�)Z�)��)��)g�)-�)�)��)Z�) �)�)_�)"�)��)C�)�)�**�*Q*�*^*�	*�*�*�*
*�*O*�*p*e**�*�*�*S!*0#*�$*�&*�(*a**,*.*�/*�1*T3*D5*-7*�8*�:**<*>*�?*{A*C*�C*jE*�F*kH*�I*�K*�L*�N*P*�P*OR*�S*NU*�V*mX*�Y*j[*�\*�^*e`*-b*�c*�e*zg*Oi*k*Bl*ym*�n*�o*�p*r*as*�t*v*2w*|x*�y*rz**{*�{*�|*E}*~*@*<�*��*��*u�*1�*f�*�*ۅ*��*Z�*�*׈*��*��*Z�*G�*>�*��*َ*��*v�*`�*T�*��*-�*ɔ*]�*�*w�*�*��*�*��*�*F�*@�*I�*+�*�*�*�*��*��*v�*��*ި*�*#�*�*c�*�*��*G�*ȶ*l�*�*��*+�*˾*H�*
�*��*)�*��*��*X�*.�*��*��*��*q�*'�*�*��*x�*�*�*��*s�*�*��*P�*��*g�*�*��*'�*��*W�*��*l�*�*��*	�*��*�*	�*��*�+B+B+�+�+Kh'�h'i'ti'�i'Bj'�j'�R'S'S'�S'\T'�T'<U'�U'V'�V'�V'OW'�W'#X'�X'�X']Y'�Y'%Z'�Z'�Z']['�['G\'�\'1]'�]'-^'�^'(_'�_'%`'�`')a'�a'*b'�b'�b'Tc'�c'd'�d'�d'Ye'�e'Sf'�f'Mg'�g'�$�$�$�$��
$$�$�!$�$$�'$�*$x-$d0$m3$Jo��n6$�6$7$c7$�7$�7$T8$�8$�8$C9$�9$�9$�:$�;$�<$=$�=$N>$�>$�?$9@$�@$xA$�B$�C$�D$�CyE$�EF$�G$�H$�I$`L$�N$�P$HS$�U$
X$cZ$z\$�^$�`$c$5e$Jg$ii$gk$lm$Dr$�t$iw$z$�|$@$��$�$��$�$��$D�$ސ$�o$s�$��$~�$��$Ş$H�$ţ$�$b�$5�$��$��$P�$Ͳ$)�$`�$u�$��$��$�$$�$��$��$��$�$n�$j�$l�$h�$i�$G�$#�$��$��$��$��$��$��$^�$M�$*�$'�$�$��$��$��$6p$�$j�$��$��$��$��$��$�$;�$��$�$��$��$��$��$�$��$�$o�$(�$��$��$��$>�$��$N�$��$m�$�$��$2�$�$f�$���$��$	����$��
�z�$J�$�$�$�$�%}%P%5%%%�%�%�%x%]	%�	%�
%(%"%�%�
%x%�%�%�%�%x%O%5%%�%%[%�%�%�A%!%�%[%%�%�!%D#%%%�&%�(%G*%,%�-%z/%,1%�2%�4%l6%%8%�9%S;%�<%�=%c>%2?%p@%�A%�B%D%ME%�F%�G%I%J%NK%BL%�M%�N%�O%+Q%.R%S%�S%�T%�U%�V%rW%VX%;Y%�Y%�Z%[%\%~\%v]%V^%�^%�_%a%!b%.c%;d%We%qf%�g%yh%�i%\j%�k%ul%�m%�n%�o%�p%�q%�r%�s%�t%v%-w%Fx%0y%@z%{%<|%"}%N~%b%?�%R�%f�%y�%~�%��%��%��%��%��%��%r�%��%z�%��%��%k�%��%ȟ%�.>0	�%Z5s�%§%��%��%��%+�%�%ڲ%��%��%$�%��%*�%��%8�%��%�%c�%��%�%[�%�%��%��%s�%��%��%��%��%�%r�%1�%K�%�%��%��%��%i�%�%��%��%�%R�%�%��%��%��%��%��%��%��%>�%
�%�%T�%
�%��%��%�%��%��%��%*�%��%��%��%��%�%U�%�%��%*�%G�%Z�%��%4�%��%��%c�%�%�%�%��%a�%��%X�%t�%�%b�%�%�%[�%�%�%S&��%��%�&&�&�&�&f&&�&�	&O
&�
&]&�&�&�&&~&3&�&�&z&.&�&F&&�&�&=&�&�&�&�&W &�!&�&&"(&�"&$&v%&w)&�*&i+&1,&H-&�.&X0&�1&\3&�4&
6&87&U8&�9&�:&�;&�=&+?&U@&�E&G&�@&�A&�C&sH&�I&K&�L&uN&�O&�P&�Q&VS&�T&�U&�V&�W&�X&\Z&�[&�\&jY&�]&_&�_&�`&�a&�b&�c&vd&Qe&�f&�g&�i&�j&�h& l&�l&�m&�n&�o&�p&�q&�r&�s&�t&�u&w&x&fy&�z&U|&}&�}&�&π&��&~~&e�&ك&��&^�&�&��&�&׉&��&Y�&{�&�&�&Ȏ&��&t�&I�&��&6�&�&Ε&��&+�&͙&��&U�&�&�&�&��&`�&�&{�&c�&�&��&��&�&Ĭ&��&o�&�&�&��&��&��&g�&�&��&e�&<�&,�&�&�&��&��&��&��&��&ף&��&��&(�&�&��&��&��&a�&�&��&��&j�&��&��&�&��&B�&!�&~�&`�&�&��&q�&�&��&^�&<�&�&��&K�&/�&�&��&��&s�&�&��&	�&��&Y�&~�&	�&�&��&�&z�&�&M�&��&%'�'''�'	'y
'A'�
'�'�'A'6'�'R'�'^'�'T'�'/ '�!'�"'�#'�$'�&'�''6)'�%'p*'�+'�,'�-'�.'�/'�0'�1'�2'4',5'K7'�8'=6':'�:'�;'�<'�='�>'�?'"A'B'C'9D'�E'qG'
I'�J'HL'�M'�N'�O'�P'}Q'R'�)N)�*���+,�,�f���F/i1&2�3>5�6+:�;�<LC
F�F�H|JWL;O�P`S�TDV�Z�\�]N^�^�b%joqv�wvxy�y,z���zI�Q�����d�v���x�
3���#-{<|�|�}�~��������(�ď\����ԕ�O���!�#L�#y�#t�7���/�#��#9�#��#L�#�#��#�5���������#�����/�������z�L������`�P�����������2����������e�����_�v��x�B�8���X���ސ��$$�Q�ړ���q�E��~��H���v����H	�
A��DΧ#�#�N�4^�#��c9A���>+ !I��a�#�#>'��ߢ��-���G2�3x4t5}6�7�8:C;L<e@JC�A�D�#��#۬#F|G�I*KYL{M�N�O�U���\0
�
�vPpx�����,�S�!�#�#��#w�#i�#��#ְ#��#��#� �#8�#�#��#�#R�#��#ƶ#��#�#�#�#��#��#r�#E�#^�#�#R$�#��#�$$�${$��#�#��#$$�$�$Y�#��#�#��#��#��##�#�$M
$�$a�#4�#��#.
$B$B�#��#_$��#"
�#C�#��#�$�yzB{�{}y��#v�#B5�
�#��#z�Z�	�����"$S%�%n&��#��#��#��#��#��#Q�#����4�ى��"���αֲr�ܴQ��#U�#��#����w����D���r�	$����������T������i�������~��=������*(>�V��Vi�V��V�V!�V��V�V;�Vm�V��V#�#w#�#d#�-uo#e4p#�p#�q#=>;r#�r#"@�s#,t#�t#R�u#fv#�<#M=#t>#�>#�?#�@#�A#nB#�B#��"W&Y)w#�`2h�x##Jy#�#��"i�#�#�"�#��#B�#ʉ#h�#�#&�#��#�#��#�#��#�#-�#|�#Й#�C#�D#`H##��#Q���j�{�#G{�#&��#�#��#%�#N�"1�"S�"��"��".z#0{#9���#S|#
�"��"}#G��������)�*,M-�.�/W0�0��"�P[�"o�",�"�Q��"gR��"�U�V[�#*�#��#&m�"�m��"�#L#�H#uI#]�"��"��"��"��"|�"�#� #�I#�J#�K#R�"*!#�!#9L#IM#N#�"��"�"��"'�"%�"�"#t##�N#�O#�P#.�"x�"��"*�"�"P�"H$#J%#xQ#oR#�S#T#��"�"��"<�"��"i�"R&#'#)U#�U#��V#9�"�"N�"4w��"��"�'#�(#uW#RX#�Y#��"�)#c*#YZ#��"9[#\#��"��"�"��"��"�"4+#	,#�\#�]#�^#�_#�"�"#�"�"O�"T#�,#�-#P`#a#�a#d#@#V#)#x#s#�.#_/#�b#Vc#yd#y#00#�0#8e#6#�1#2#�e#H3#�3#�f#�#�4#�5#tg#h6#h#	i#�	#�#9
#�
#�#�#7#8#�i#�j#�!�#9#�9#*l#�l#x#�#�##�#�#N:#�:#Rm#�m#�n#�#�;#/<#�}#y�E�#��e�#�����{��������#��D�]������D���T����
�G�B���d�W�V�V��V��V��Vm�VB�PK�S��P
�Pm�P��P4�P
5Q�%UV&U�&U�pU]'U�(U�rU�sU�,U�.U60U5uU2U�2UvU�vU5U�wU�xU3zU�{U�}U�~UkU9�U�U؁U>U�>U?U��U��U_�Ul�U��U��U%�U�U`�P�P��U��S@�U��UؐUְS��U��U�U��U��U��S��U.�U�UD�U�U��U��Um�UC�U5�U��U�TM�U��U��U8�S��UV�S(�SQ�U��T#�U��U��U��U�S�S��U��U��U�UϯQ��UD�U9�U��U��Q�UI�U�U�V�V��Q
V�V�VN
VVTBTV{V�V�T�V�V�VQVV�T�V�VpT	T'VmTH
T�VT�T�TeT�V?V�V�Ve Vi!VR�R�"V�#VHR�R�$V%T�%V2&VuTT�T�&V@TT�T�'V|TW(V�T� TL!T�(V�)Vh*V�+V�,V�-V�.V�/V�0Vu1V�1VW'T� R�!R(T>3V�4V�6Vs8V�+T�,Ta-TY:V�;V�<V�>V.@V�@V�AV�CV�EV0.T�GV/T�[U�/T�-R&0T<1T�1Tb2T�2THV�TJV�LV�OVuRV?VV�ET�XV�ZV�\V^V�`V�NTdV�hV�kV�oVqVrtV�T�wVzVVx�V��V�V��V9�VikT�mT�oT�qT�sT!uT�wT�yTd�VƔVA�V˛V͟VáV�V�T��V:�V�V�V
�V��V>�T�V{�T`�Rq�V��V��V=�P��P��P`�P��V&�V�V�����?�7������
�����o������t��l�۪;���6���1���;�����ɴ1������i�ɹh�
���U�޼��C�������7���n��������v�����������
�������������k�A����;���b������������j���)����m�����K���"�����i���F������
���P�����$����2���e������P�����  " 2  + � � 0 	 �	 �
 �
 ] � C � 2
 �
  � R � ? � � � � v >-� [ � � � I � V �  �q
�ܕ"3�"p�"�"\�"җ"<���S�(�H�"������(�R���"�"u�"��"E�i�p���"[�%������P�����{�E�����s�=������k�5�������c�-�������X�%�������M������{�E������w�E��������<�����j����j�������}�h����d���
�Z���S��
LbzX��)��M#����W/��� u!]"J#�#8$�$F%�%2&�&F'�'!(!):*
+:,�,.j/j0�1V2�3F4^5�6o7`8Q9E:9;<�<.>z?�@�A�B!D�E�F(H.I�J�K�LWN�O�P�QDS<T�UWHX�Y�Z\�\E^�_�`�a�b�cedffg�h�i�jl�m�n�oTqFr�su�u�v{w:x�x�yxz!{�{�|�~Z�x����a��L���ΓF�T����1�y�������Ե����G��������W�����\�ʣ�ޠ��#����٤j�����z�ۧ<���`�������t�޾��x�O������R��L�������N����r�������M����=�1��c����3�m�a������B�m�	�
lJ�u���tR�6�r�v��6�.!�"$o%	&�&@'�'w()�*�+p-�.[0�1D3$45�5�6�7�8r9]:E;-<=>?@A B+C6D>EFFNGVHaIlJtK|L�M�N�O�P�Q�R�S�T�U�V}W"X�XjYZ�Zb[\�\�]�^�_z`$a�avb#c�c}d(e�ffhj�kDm�n�pqzq�qjr�rbs�s�t�u�v�w�x
z{|�|�}{~L4����؃������e�J�/�� �.�<�H�Y�j�{���ݒ4�ٓ~�#�ƕn����d����b�
���d����[�����6�إy����5���'���������m�O�/����ܳ���*�A�X�o���Y�1��ۿ����R� ��������������.�M�t��������=�U�p�����������@�����b��,�D�Y�n����������K��������k�@������f�.�����R��a���R��^��p�%	�
�
�`�D��]�y�$��{!m"�#�$�%�&	(*):*�*�+S,	-�-u./�/�0�1y2R3*4�46_9�<�?DC�F�I-MxP�SW[Z�]�_�a
f/j$n2r�uzU{{|t}�~Rb���͈��ΐ������� ���t<��b*�� �!M"#�#�$p%8&'�'�(^)&*�*�+�,L-.�.�/o0:12�2�3Z4%5�5�6�7H89�9�:r;><
=�=�>t?Y@�@,A�A�A!B�B�C�D�E�FyGH�HoI�I_J�JCK�K'LFM3OQS�ST|T�TjU�URV�V�W�X�Y�Z�[w\j]K^*_`�`Pd�g�jFn�q�t6x�{�~&�y�ʈފ��� �����c����	�}�����o�C������%���3���A�����v����ڝ����\�.�����\�Ѥf�ۥP�æL�ӧ_���P�����!���+���7����n�t�ί%��ٰ3����k�նm����8�ӹk�ݻR�ľ6������n�O�-�����������e�N�4�:�@�F�L�U�^�d�j�p�v��������������������������������T��t���4���
�x���T����}���K���u����3�����P��� ���2�����Y���C��?���=���<���&���/���.������#�,�/�`�����]�S	�
��	]
}��
�
p�_�h�;�G�P#���uA��wC`�� �!�"�#%D&i'�(�)�*�+
- .6/O0e12�2�344�4�5�6�7�8�9;<?=i>�?�@�A`B0CD�D�ElF5G�G�H�ISJK�L(N�OQ�R7T�UIW�X\Z�[Q]�^a`�aVc�df�girj"k�k�l8m�m�n=o�p@r�su�vxy�z�{�|�}0�>���և!�j����M����*�s���ȪԬx����C�ƯI�ͰQ�ձL�ò:���.��������h�۶��J�����������$���j����a��C�h�%�~���`�����r��a��A����A����L���Y���"�������V��F2!��t�&��G	�
��$r�n�j�hJ'�a�!��!��!��
Q�!��
[�
��
��!]�!X�!��!�!��!H�!��!b�!��!��!��!��!�
��
3�!��!��!H�!/�
?�
��
j�
��!V�!�!��!�!}�!��!a�!��!=�!��!�!{�!�!Q�!��!�!e�!��!A�!��!��!i�!��!��!��!+�!��!W�!��!�!W�!��!��!��!,�!v�!��
��!t�
q�
n�
k�
h�
h�
h�
h�
h�
��
p�!�
�
��
W�
��C�?���!K�!A�!��	��G���!6�!��!��!O�!��!�!x�!��!>�!��!��![�!��!�!x�!��!6�!��! �!��!
�!u�!��!+�!��!��!T�!��!�!}�!��!;�!��!��!X�!��!�!u�!�!_�!�!L�!��!�!j�!�!0�!��!��!Y�!��!�!z�!�!8�!��!��!U�!��!)�!��!�!��!�!J"�""o"�"5"�"�"Z"�""w"�"5"�"�"h"�"S"�"2"�"�"K	"�	"
"t
"�
":"�"�"W"�"
"t
"�
"2"�""�"	"q"�"'"�"�"P"�""y"�"7"�"�"T"�""q"�"["�"H"�""f"�","�"�"U"�""v"�"4"�"�"Q"�"%"�" "� "� "F!"�!"""k""�""1#"�#"�#"V$"�$"%"s%"�%"1&"�&"�&"d'"�'"O("�("�V+"k-")0"�0"�0"j1"�1"?2"�2"3"}3"�3"4"`4"�4"5"u5"�5"�5"@6"�6"�6"d7"�7"E8"�8"9"k9"�9"0:"�:";"l;"�;"2<"�<"*="�="Z>"�>"?"g?"�?"@"q@"�@"&A"pA"�A"B"RB"�B"�B"IC"�C"�C"ZD"�D"E"|E"
F"�\�]^�F"CH"�K"=N"�cf�S"LT"�T"U"eU"�U"+V"�V"�V"TW"�W"X"qX"�X"/Y"�Y"�Y"LZ"�Z"6["�["#\"�\"�\"A]"�]"^"j^"�^"0_"�_"�_"Q`"�`"a"na"�a",b"�b"c"uc"�c"bd"�d"!e"�e"�e"Ff"�f"g"og"�g"1h"�h"�h"Ni"�i"j"kj"�j"?k"�k"*l"�l"	m"`m"�m""n"�n"�n"Ko"�o"p"pp"�p".q"�q"�q"Kr"�r"	s"~s"�s"it"�t"Hu"�u"�u"av"�v"'w"�w"�w"Px"�x"y"my"�y"+z"�z"�z"H{"�{"2|"�|"}"�}"�}"=~"�~""f"�",�"��"�"M�"��"�"j�"ɂ"(�"��"��"q�"�"^�"ƅ"�"|�"߆"B�"��"�"k�"Έ"-�"��"�"J�"��"�"g�"Ƌ";�"��"&�"��"�"\�"��"�"��"�"G�"��"
�"l�"ˑ"*�"��"�"G�"��"�"z�"�"e�"yh� D �! [" �" �# 1$ �$ e% �% ]& �& U' �' �( ) �) A* �+ �, �- / #0 .1 �2 �3 �4 6 7 8 p9 �: �; �< �= �> >@ TA �B �C �D �E 7H RJ |L �N �O �P 2S NU zW �Y �Z �[ ^^ �` c Re of �g Ij �l �n Bq ^r zs �t �u w 
x y z a{ }| �} �~ &� � � ֆ LJ �� � � ֎ �� �� �� Г ” ו ɖ �� �� �� �� � � � 0� �� �� �� �� Ŭ ح (� k� ɮ [� �� �� 8� � z� 
� �� U� � �� @� y� *� ۷ �� =� � �� X� � Ƽ }� /� � �� E� �� �� _� � �� {� 0� �� �� ;� �� �� =� �� �� Q� � �� n� #� �� �� E� �� �� i� � �� �� A� �� �� g� � �� �� H� � �� i� � �� �� ;� �� �� ]� � �� �� 4� �� �� M� � �� j� � �� �� ?� �� �� _� � �� � 5� �� �� Z� � �� �� &� �� � p� � N� �� /� �� � �� � \� � 6� �� � �� f� � �� b� !w!�!M!�!%!�!!n!�!J!�!*!�!!w!�!W!�!s!	!�	!t
! !�!y!&
!�
!�!.!�!�!)!�!|!&!�!|!'!�!~!*!�!z!"!�!s!!�!p!!�!o!!�!n !!!�!!m"!#!�#!p$!%!�%!u&!"'!�(!�)!+!E,!c-!�.!�/!!1!r2!�3!�4!�5!H7!r8!�9!�:!<!=!�>!�?!�@!&B!>C!VD!
G!OI!�K!�M!O!P!�R!U!nW!�Y!�Z!�[!�^!Ca!�c!Bf!rg!�h!�k!�m!�p!s!/t!^u!�v!�w!7y!]z!r{!�|!~!9!��!ʁ!J�!\�!|�!��!��!��!*�!<�!]�!s�!~�!��!ژ!�!�!$�! �!�!��!ɢ!��!�!H�!w�!�!9�!r�!��!γ!��!Z�!��!��!P�!��!��!U�!��!=���9�ղw�Y�ĮE�ƯP�հ\��n���~�	���������#�����Z�ܸf��M�ҺW�ܻ\��r�ͽ(���߾;�ÿJ��+������-������%���7�����b�#����l�-����s�7�����z�;�������E������O������S������]������g�(�����k�/�����}�B������Y�}������Q���
�p�����2�����^�����C����vJo���
{S7
�B�L��>�{$ ��R��3"~%�(,Y/�2�549�<�?�A�C�D�E�F�GI-J|K�L?M�M�N&O�OAP�PgQTfV�X�Zc]`�`ataPb1cd�d�e^f�f�g�h�itjWk�k�l�mcn9o�o6pq/rfs,t"uv�v�wy�z�{v}�~i���8�	��#���
�$��‹��`�+���ϐ=���9����������L����Q�����V��t��b�؛N���2��� ��������m�۠F�ơF�΢Q�ԣT�ҤZ�ڥK���j��h��[���#�����]���#����U���&����P����a������Q�������e�N�2����������r�@�S�1������H���P���V���T���\���:���=���9���g������M����u�+���u�]����x�&�����e�%�����]�"�����7�&�����V���y�8����c�����6���w�/����S��;�]��	�
[�
1�vM'����eG&��}3!�6�8R*�+�.>0S2Z5>EGl;9=�>P@B�C�"a#{$�%�&�')�H�I�J�K�L�M�N�O�P�Q�R�S�T�U�V�W�X�Y�Z�[�\�]�^�_�`�a�b�cbd�e�f�g�hAj�kUl�l|np}qs�t)u�uYw�x�y�z�{�|F~�p�̀+��I��Dž$���߆<���ψ9�$��G���Ր‘`������ϗm�]����9�˞��(�2������]�5����̫b�5�
�ٰ��t�@��Դ��a�&����ɹ����'�=�[�|�����������'�6�H�W����^�
���e�q����������������?������i�2������>������2���A������4���8���=���$���&���
�}���4��[�Y�)
�
J�T���� 8�/�2�5 9D<h?�B�E?��S"%�'h*-�HLGO�R�UYW\�_�b!fci�l�o�q�s�t�uRw�xOz�{J}#��~"�m�ʅ�����@����`�Ƙ(����Z���$�����Y���}����9���a����� ���I���m�����)���Q���y����9���]��������A���i����+���S���~����I���}����O���k���\���n���w�������"���<���T���b���k���m���g����!���3���F���k�����(��\��v����J�������V����*�X�z�/�_��&�6�C	�	S
�
l�q�w
�
k��-�Q�u�1��Y�K�_�s���9�l�(�P�P�[ � i!�!�"#�#J$�$t%&�&'�'T(�(})*�*:+�+l,-�--.�.H/�/}01�172�2g34�455�5a6�6�7)8�8S9�9l:�:{;�;�<
=�=)>�>\?�?�@%A�AIB�BrC
D�D7E�E`F�F�G(H�HRI�IwJ�J�K2L�L[M�M�NO�ORP�P�QR�R,S�SeT�T�U%V�VQW�W�XY�YDZ�ZX[�[�\"]�]G^�^s_`�`;a�afb�bzcd�dDe�eif�f�gh�h3i�i>j�jIk�k[l�lzmn�n$o�o8p�pGq�qVr�rfs�s}tu�uv�v.w�~3�{|�|}�}~�w3x�xRy�yyz�z*�ƃ�V��w�
���Z�����F����8�܈��$�Ȋo����^����M����<����+�ғd��������@�ї_��~����?�כo�
���6�͞a����� ���C�Ӣc��������'���>�ȧ\�������F�֫f��������?�ԯf��������9�dzX��z����H��{����:�κe����� ���C�Ӿf��������,���C���Z���p����2���]���}����3���V���v����:���h����.���\�����������>���p������-���U���}����@���l����5���e�����"��X�-tY8Zl[�]�\�^�_�`�a�b�7�:dcTe*Bpf�g�P`���*���x���R���(�����Bh�hui"j�j�p+q�q�r�Rv~ST�U�i�u�Y�[jbvc_�/�1[3ĝ"�"��"ܠ"6�"��"�"8�"��"��"��"��"�"h�"ɮ"!�"~�"-{�|�}�~�v�w������#d� �#E>�&�(�*�,�5����758�8R:j;0=D>�\?�@�AxC�D�E�F��k�k�lXm6n�n�o<pl�}H�IGJ�J�K)�p�uLN�O&Q�RT�ULW�X�Y[�\|^�_a�bReqp�fChi�x{�{�~�������������"ó"�"~�"��"a�"��"�"r�"1�1�3�67Jv7"8�:�;a<��<�=�>?�;T�?�@3A4B8C��C�7�����$D�EJ��v����v�p��wG�H�I�J}6�7�8:KK�LN�N�DAPQARiScUV�W}Y�Z�[�\^>_���\0
�
���-a�b�c�a�d�e3f�f�g�h�i5h�jQk�khl�m�nmso3p�pJq�r�s�qUtu�u,v�wex�v7y�y�z�rFt7u`s4v�ww�x?y�y�{�|�zs}R~�~���p�B�<�P�?�0�݆R������P�(���	���n��{�|�}�{�~�V�	���=��s����E�ֆX�*����Ί`��s�E��� ��{����`�đ;������U�R�c�[��L�֍��C�����֓}�s����������������l�?�%���������h����'�ɗ����������d�Ν���c� ���3���c�ߡ2��y��l�2������H�Ѫ;��}�Э���d����.�����סI����F�+�i������~��	�խ̮�̯��e�.��������P����0����:��a���z����%��j���n���}����&�p�*���9���`��,���n���Y��������*������ղ���n�ڷ��)�˺�����q������{�A�5�U�2����P����0f��8���;��j�����_���������=�u�)���0���E������C������i���U���\���q���7���o�n����,���K������H�����
���r������h�"��<������R��
�
P	�x
�
��������y�X�����;���e���+���m���^��������&������\���W���f�����V����5��������`������y�
���=����������c����3�����K�'��t��-�P�i�������q�����H�f�(�b�G��{��	�
�-
b
�
�=���9����y���j���I���Z����`��7����e���m�J���-����q:�v�B��bF�3�K�_���+� �e� `!�!w"�#�$&#g%!&�&8'�(b)�'(*�*n+-����D��]�	�
!	�p
�
uO����;�k+�v!@"� #�#e$�$�+k-5.�,/�/K0�0v1�2a324�435�5�6�7H6J8�8k9�9&;�;�:�<'=�=>^?	@�>�@_A�A�Q��W��1�[�!�b% � p!�"�#+"�$V%�%}%(&�&�'�(�)/(O*�*+�+VBbDyESC�F�GfHAIJ�K�L�J�M�NnO&P�Q�RQ�S�TyU1V�W�XW�Y�Z�[<\^�^]�_�`�ao&�(�)p'�*�,�+�-�.^/R1W2R0b3a4%5-6�6�7:;�8.<9=>�,�-�.�/1y2�0y3m4+5�5Gbde(cfg�gnh,i�j�k�ill:m�msnp�p6o�q�rs�sLu v}t�v�wexy�zg{�yA|}�}�>�@�A�?�BjD�C<E�E�F{HdI�GSJ6K�K�LqM_N3P(QCO#RS�S�6e7989�:�;�9�<Z=�=�>H~���ǂd��̈́K�������@�։T�������I�ߎ]�'�������R��f�0����ė[�mT-VWDUX�YY�Z;[�[�]y^�\^_7`�`�a\b@ce�ed�f�gih@?
@�@�A=CDnB�D�EUF�F�ٚޛܙ����^�&�̡�����f�
���Y�7�������@��ĭ
�����'�Ͱs�Q���5�
���ik	l�im�nn�o�p]q-s t?ruv�v�w_xWy?{>|EzC}<~�~�G�H_IJJL�L!K�M�NhOPZ���3�
�����U����i�ν?�	���B������r�<���u�����4���o����2��g�����?��w�n���n��\����f����:��ɋq�Y�������ʏ����7��P�Q\R9S�T�UT�VeWX���
��������?�$�/�"�������������]�.�r�&�������^�*���������D�����]�"������j�:���[�1���
����h��-��F���[���6�͜���g�+��֡�Ǣ����v�P�A���'�
�������G
'l	
��
�@���b:�kG�)���8a��� c!#�#."�$�%e&0'�(�)�'�*f+2,˭k�F���'�ʲ����p�:������ٹ��������j�g�r�j�a�N��,�-�.�/W1=2v0)3	4�4�5f6�7�8%7�9`:0;�;�<A>?r=�?�@�ADB�C�DC�ERFG�GkI?J�HK�K�LnMO�O1N�P|Q@R:�������v�	�C�����a���9���������v�e�9�.�I�)����S�S�T�U5WX\V�X�Y�Zj[\�]n^�\B_
`�`�aSb�c�dc�eWfg�ghi:j�hk�k�lam�n�o"n�peq'r�rvtHu�s v�v�w��m�>��������u�6�����������c�:�%���������������t�oxByz�z�|s}�{U~+�€n����'���J��Ά��
�ۉD���w�5��x�F�������]�������M��ȗN�������v�V������z��A�����G��������s�Z�,��������3��ў��G��t���ϣ��^������ߦh�:��ګ��C��l��װ��n���9�ε��p�;�ݹ�����q�=������Ӿh�>�
�g�1�f	�A
��
���uVK+"��������y�/��N��������F�����
���h�@����q�M���/������>��g�������i����4�����k�6��������l�8��vQ�2�� {!E"$%%#�%�&�'�(�)�*u,r-}+u.l/Y0�������]�C�|�/������l����+���f�6�����G��x������J��
�X�qE�	�	�
t
�
7��FE1�2�32�46N5�6�7l8(:;D9<�<�=�>�?p@DB9CTA4D#EF	���;b���p�T� ��\ �!Y"� )#�#�$`%�&�'&x(<)�)�*-,�,h+�-�.E/�/|1F2�03�3�4�FhH1I�GJK�JHLM�MaO@P�N%Q�Q�R�S~TcU#WX=V�X�Y�ZM56�6�7M9!:~8�:�;�<P=>�?t@�>PA B�B�C�DF�FJE�G�HwI@J�K�L	K�MlN6O�O�QwR�PWS+T�T�U\W6X�VY�Y�Z�[5]^a\�^�`�_ea.b�b�d�e�c�f�gjh]i;j0kmnlop�p}[X\3]^�_�`�^�aybMc d�dnfDg�e h�h�i�jQk�l�ml�n}oGpq�r�s�qht<uv�vmxGy�w'z�z�{�|,~W}������qrsKt�r*u�v�u�wkx3y�z�{z�|�}�~�x�m�M�H�W�I�>�)�M�(���������k�I�����&��]�Ȏ��^��ّg�9����ݕ��`����!���d�&��u�G�������n���Τ/���r�4����p�ӈG�֌���h�(��Ǒ������l�W�-���ݙ��֚Û����Ȩ��{����F�ۭ��}�H����l�űH�����y���B�Ѻ��o�8�־�����d�.�����o���O�#�����T�.���������$���P�ܠ{���T�������å��z�Y�L�*������	����۲u�P�+����������q�E����N����������D����a���9���������I�����N����o���G������$���W�����\�ųQ�"�������ķY��ں��y���f�G��	������������u�X�������E�!�n�����L����~���(�-���:�*�;�^�<�����O�p� ��>��� ��"˾"�	���������T�������������0�ύn�
����Q��ߤ������������]����G�B�PK�S��P
�Pm�P��P4�P
5Q�%UV&U�&U]'U�(U�*Uw+U�,U�.U60U+1U2U�2Ut3UN4U5U�5U(7U�8U]:U<Ui=U>U�>U?U�?UAU�BU�CU�DUKEUFU`�P�P�FU��SȧS3�SX�SְSֳS��S�SϼSjGU��S�NU��S��Sy�S��SO�S��S�SU{�S��S��SτQ8�S��SV�S(�S��S�S~�S�S�STBT�T�T�T�T�TpT	T�	TmTH
TT�T�TeT+T�T;T�TR�R�R%T�TuTT�T�T@TT�T|T&T�T� TL!T"T#TWUG&TW'T� R�!R(T��T�)T�+T�,Ta-T�XU8ZU0.TN�T/T�[U�/T�-R&0T<1T�1Tb2T�2T�3T�5T�9T@<T�\UNCT�ET�IT]aU�KT�NT
RT�VT�YT�]TaT�cT�fTikT�mT�oT�qT�sT!uT�wT�yT�{T�T�T��T�TBeUlU�T��T��T��T]�T>�T��T{�T��T@�T"pU>-�l
�m
�n
[o
&p
�p
�q
r
�r
ns
t
�t
xu
8v
�v
}�ĵ�Ѷ��+�����"������;����/������F����:�����o����� �2B��U�/�	
z
�
M� k
��A�����R$���qL'�L��� �!O"#�#�$|%P&'�'�(�)w*d+Q,
�
��
}�
8�
��
��
j�
 �
��
��
L�
�
��
n�
&�
��
��
��
>�
��
��
T�
��
�
��
��
x�
��
��
L�
��
��
V�
�
��
O�
��
��
H�
��
��
U�
��
��
H�
��
��
R�
�
��
��
�
��
(�
b�
��
U�
��
��
Z�
��
�
��
�
E�
��
�
��
�
O�
�
<��h3
��
��g!��U��a!0#�$�&�(M*,�-u/+154�4�2�3�6�7�516-89�9�:�;<�<=�=>>;B�EDI�L�OjS�W�[�_�c�g�kso�rtv�y}��ۄ��ܐϔߘs��ݣn��\�ر�w
�x
2y
�y
�y
>z
�|
�~
1�
��
ԅ
'�
T�
��
�
��
��
,�
��
:�
ڒ
�
A�
��
>�
n�
ߚ
M�
z�
��
4�
��
�
R�
��
�
+�
]�
*�
��
��
o�
M�
�

�
��
��
_�
3�
�
۷
��
'�
͹
��
��
��
�
z�
�
`�
�
�G�MDThU�VDY�[(^�`
c�eqh�jIo�lRqsuju�v
xZy�z�{P}�~�@����6�N�G�~����%�]���͗�=�u������������u�������$���w���W���F�~����,�f
�
�

N	
�
�

�
g
�
=
�

~
� 
T#
�%
*(
�*
-
z.
�/
n1
�2
e4
�5
47
$8
�9
;
�<
>
�?
�C
A
�E
,G
LH
dI
�J
�P
�V
)\
b
(h
m
�r
Tx
�}
�
��
g�
��
]�
%�
��
Ԣ
ۣ
�
�
@�
J�
T�
��
��
��
ĭ
��
&�
�
�
�
��
�
�
&�
I�
:�
+�
E�
_�
P�
A�
[�
�
u�
�.����F�����_��Ųx�*�ܴ��@����V����O��z����<�ҽh�����)���S��}����<����S�(���������(�R�|���0���E�i�p�����:
G<
E>
�>
H?
�?
z@
%A
�A
kB
C
�C
�D
YE
/F
�F
SG
 H
�H
�I
^J
K
�K
uL
"M
�M
�N
1O
�O
 Q
=R
bS
�T
�U
�V
�W

Y
2Z
�Z
[
�[
�[
x\
�\
i]
8_
a
�b
�f
�d
�h
�i
oj
k
�k
�-�.D�
��
j�
��
[�
��
�
��
��
��
f�
3�
��
��
��
�/0�0�
��
�
��
/�
?�
��
j�
��
6�
t�
q�
n�
k�
h�
h�
h�
h�
h�
��
�
�
�
��
W�
��C�?�j:���	��G�� �#�+�3�6�4�2�0�V��Q�S�V � ^!�!f"�"i#�#g$�$e%�%c&�&�'0(�(�)�)�*�*|+,�,-�-.�./�/0�01�1
2�233�3�4.5�506�637�7;8�8C9�9F:�:D;�;B<�<@=�=f>
?�?a@�@cA�AfB�BnC�CvD�DyE�EwF�FuG�GsH�H�I@J�J�KL�LM�MN�N%O�O-P�P+Q�Q)R�R'S�S%T�TsUV�V�Y�[\�\�\�]^�^6`�cf�hVi�iLj�jTk�k\l�ldm�mbn�n`o�o^p�p\qr�rSs�s�tuuv�vw�wx�xy�yz�z{�{|�|6}�}�~1�3���6���>�‚F�ʃI�ȄG�ƅE�ĆC�‡i����d��f��i��q���y���|���z���x���v�����C����"������ ���(���0���.���,���*���(�Ϟv��ʠU�̡K�ϢS�ף[�ߤc��a��_�ާ]�ܨ[����R�yh5		�		�
	�	�	�
	�		Z	�	b	�	7	�	�	�	U	�	O	�	/	�		�	.	�	e 	!	�!	"	o"	#	�#	$	�$	�%	W&	'	�'	I(	�(	[)	�)	U*	�*	P+	
,	�,	[-	.	�.	/	�/	�0	�1	3	@4	i5	�6	�7	�7	�8	:	5;	^<	�=	�=	+>	�>	?	�?		@	�@	�@	C	�C	�D	�E	�F	�G	nH	^I	MJ	>K	2L	�M	�N	AP	�Q	�S	�T	V	bW	�X	&Z	�[	�\	S^	�_	�`	7b	�c	>e	sf	�g	\i	�j	�k	Rm	�n		p	q	�r	ev	�y	K}	��	�	P�	��	&�	Ǐ	�	��	��	M�	h�	
�	�	ڕ	��	��	�	�D0�	��	��	�;�>�A�	��	X�	$�	ݳ	?�	�	o�	�	��	~�	T�	*�	�	�	��	��	��	j�	R�	:�	"�	
�	��	��	��	��	+�	��	^�	&�	�	B�	��	�	2�	�
�

H
#

�

G
�
s

�
X
�
�
,
�
h
�
�
K
�
�
)
*
<
H
�
l


�
C
�
}
$
�
�
�
#
�&
*
�-
�0
G4
�7
V�����I��rT���[F `�������"
�H�@�x�����Y���$���6��U�R�q���j��2�#��%#��(�=�,�!Z%1/�(,e/P2�8+<u5�?�B6F�FG�G
HJL2N:PzP�P�P:Q�Q�R	TU�U�V"W�WhX
Y<Zn[�\�]`�ahcBd�egaj�k�mMo$pdq&tuv	w z�|#�Q�����x�h�h����@���c����6��]�ޙ��"��X���ĩܬS�ϱ^�޶M�ڻg�&������8�G����%��������������#����O�&��"�g�����2�s��	%	o	�	�	�	
	s"'K�#g$Q%{&M'i(b)*�*),�,.�.�/�0�1�2m3R4p5$6�6�7~8�9�:|;�<y=�>.?;@�@*BCD���E�E�FnGH=NO�O�Rv~ST�T�U)VWLX�X�YbZ�[*\�]�^c_"L~���D�;V�3���	��o���>���X��[�����.p��#L(-�.�0�4{8]<E>�>�?8A�B�C�s�t�u�vgwx y�y^P�Q�R�SUHV�W*Y�Z�[�]�^1`�aGc
e�f_h�i�jl�nqp,q�rWs�_~and�egEh�i.k��������
���0�����-�E�E�F�I7J�J�K=LM�M�N$O�O�P6Q�QTRRS�S;TU�U�V�W�WGX�XY�YZ5[�[�\0]�]'l�l!q�r�sh^u R_�`~u
v�v�vwwQx�z�y�a�c*e�f�h������j�l&o�p�qfsu||}�~b�ՁG���,��uw�w(y�z|�}��q��z{E�E�Ԁ܁������>�V�����ǎԐ
�ݓݕ&�?���>�1���r�E����_�I�����~�H�\�s�=��
��������I������������{@F`G���M�����#���O|�|�}�H�]�ȃ��
'���-�Z�*��	��UbGB|H������������������0�ύn�
���D�Ӑe�������Q�M�ӕϖU�Q�٘ՙ]�\���m�l�����}�����n���Q��B�PK�S��P
�Pm�P��P4�P
5Q`�P�P��SȧSG�Tf�TְS�T��S�S1�T��T��S
�Tu�T��Ty�S�T��T�T{�S��T�TτQ8�S��SV�S(�S��T�S�T�S�STBT�T�T�T�T�TpT	T�	TmTH
TT�T�TeT+T�T;T�TR�R�R%T�TuTT�T�T@TT�T|T&T�T� TL!T"T#T^$TG&TW'T� R�!R(T��T�)T�+T�,Ta-T0.TN�T/T�/T�-R&0T<1T�1Tb2T�2T��T�TL�T��TL�TNCT�ET��T��T�NT
RT�VT�YT��T�T�T�fT�T1�TQUhUU�U(U�	U�{T�T�
UzU�U�UUUoU��T_U>�T��T@!U�"U@�T�l
�m
�n
[o
&p
�p
�q
r
�r
ns
t
�t
xu
8v
�v
�w
�x
2y
�y
�y
>z
�|
�~
1�
��
ԅ
'�
T�
��
�
��
��
,�
��
:�
ڒ
�
A�
��
>�
n�
ߚ
M�
z�
��
4�
��
�
R�
��
�
+�
]�
*�
��
��
o�
M�
�

�
��
��
_�
3�
�
۷
��
'�
͹
��
��
��
�
z�
�
`�
�
�.����F�����_��Ųx�*�ܴ��@����V����O��z����<�ҽh�����)���S��}����<����S�(���������(�R�|���0���E�i�p�����:
G<
E>
�>
H?
�?
z@
%A
�A
kB
C
�C
�D
YE
/F
�F
SG
 H
�H
�I
^J
K
�K
uL
"M
�M
�N
1O
�O
 Q
=R
bS
�T
�U
�V
�W

Y
2Z
�Z
[
�[
�[
x\
�\
i]
8_
a
�b
�f
�d
�h
�i
oj
k
�k
D�
��
j�
��
[�
��
�
��
��
��
f�
3�
��
��
��
��
�
��
�
��
/�
?�
��
j�
��
6�
t�
q�
n�
k�
h�
h�
h�
h�
h�
��
�
�
�
��
W�
��C�?�j:���	��G�� �#�+�3�6�4�2�0�V��Q�S�V � ^!�!f"�"i#�#g$�$e%�%c&�&�'0(�(�)�)�*�*|+,�,-�-.�./�/0�01�1
2�233�3�4.5�506�637�7;8�8C9�9F:�:D;�;B<�<@=�=f>
?�?a@�@cA�AfB�BnC�CvD�DyE�EwF�FuG�GsH�H�I@J�J�KL�LM�MN�N%O�O-P�P+Q�Q)R�R'S�S%T�TsUV�V�Y�[\�\�\�]^�^6`�cf�hVi�iLj�jTk�k\l�ldm�mbn�n`o�o^p�p\qr�rSs�s�tuuv�vw�wx�xy�yz�z{�{|�|6}�}�~1�3���6���>�‚F�ʃI�ȄG�ƅE�ĆC�‡i����d��f��i��q���y���|���z���x���v�����C����"������ ���(���0���.���,���*���(�Ϟv��ʠU�̡K�ϢS�ף[�ߤc��a��_�ާ]�ܨ[����R�yh5		�		�
	�	�	�
	�		Z	�	b	�	7	�	�	�	U	�	O	�	/	�		�	.	�	e 	!	�!	"	o"	#	�#	$	�$	�%	W&	'	�'	I(	�(	[)	�)	U*	�*	P+	
,	�,	[-	.	�.	/	�/	�0	�1	3	@4	i5	�6	�7	�7	�8	:	5;	^<	�=	�=	+>	�>	?	�?		@	�@	�@	C	�C	�D	�E	�F	�G	nH	^I	MJ	>K	2L	�M	�N	AP	�Q	�S	�T	V	bW	�X	&Z	�[	�\	S^	�_	�`	7b	�c	>e	sf	�g	\i	�j	�k	Rm	�n		p	q	�r	ev	�y	K}	��	�	P�	��	&�	Ǐ	�	��	��	M�	h�	
�	�	ڕ	��	��	�	�	0�	��	��	\�	]�	>�	�	��	X�	$�	ݳ	?�	�	o�	�	��	~�	T�	*�	�	�	��	��	��	j�	R�	:�	"�	
�	��	��	��	��	+�	��	^�	&�	�	B�	��	�	2�	�
�

H
#

�

G
�
s

�
X
�
�
,
�
h
�
�
K
�
�
)
*
<
H
�
l


�
C
�
}
$
�
�
�
#
�&
*
�-
�0
G4
�7
FcE;+R�P�Q�N8O�OSP�RdS�S�T5U�MNhV�U�V{K8ML�L�L�Y�R���ИF���D�~<\>'@=c=>IAM��@���A-?y?��]��BB����5_5�3
4{4�4�

����6�^�Mx�6
��Q�?	m��	;b���F�$��������%M��]�Pu��Il�� %�"�!'"� !$3 ��$#0!?��q������-���J���k�l���B����$������_��lJl#k�f�k�g�dbedf0gji�ijhMtsfq%r�o(nlmup�n�|F{]y:zlw�u�tQx{v��K���O��~E}���~�c�b@ca}`�^�_ta`9b�ai^IP7O�O�M)MYKkN�K'J�I:I�J|L(;�;�<C<�HNH�GzFGRE�D�E�]q]�\�[\WZ�Y�Z\U�S~T8��CC�?mBe?�A+>K@�>�@+�GD�C�BSD�Ck���DS�PxR�Q��Δ��=D=nY�X@XW�W�U�U�V�DՇ:���13&�'�,)�(�+Y'�)�&D,`+?(�)`*y&4-�2/�1�.�/.?2Y1x/X0�-K��#<n
���
�
vj8,�=�aI�������6�6N7�89�9�9A:�:Q6�78k8��~�;���7�J�y�������p�������*���'�s����S�������E�ܠ��=���6�գm����7�Цi��a����n����{�*�Q����j������6�����7�����D���������O����h��������'���m����V�A����!���c����ѩi�ǯb����1�ʬc�����.��������1�˲e����6�ѵl���2�����E�����X���F�����e������	���k����~�/���9�����X�
���w�,�I����/�ѿu����s3S�5S�4S�7S�2S�<S�=S4SX;S�;S>S�8S\<S6�:��C���U�������W�~�q�Q�1�y�&!��&&='�)v(^/��.�,[+�175[43�21�NVN\A�Do���E:CD�M�A;@�J��SL�GzGv��N�O>O�j���������ݻֺ)ʳ���c	�	�����m|�.����7��%i$U"�#�" �:�ҧ�F���;:�9j8�8D9]�j6O��6G�_7ڝh�9�:�9o��6%����3��'��5��e;��/<r<���+z()"+8*�'�)�&'�.s/�+d-�-Y,�,z.�2�1152$��|[5P�;�[
���n*��L"!�"{#�!z ה�Ww�������/
����g�1���I��4��.��}�B���I��
\�ʼn��c�.�đl����g��z%�:��\����7y>�>J�
8�=��&?>S�S}TU�UaVW�W�PpQ
R�Rs3S�5S�4S�7S�2S1:S�:S4S5S�5S�8S���Z���S��}�4 W�ڥ��*�y�&!��?�~�����?�{�������]���o�~����l�N���NVN\A�D�H%B�E:CD�M�����;@�J�HSLGzG�N�O>O�j���������ݻֺ��ʳĵ����Լ`�.������m|�.��=��%i$U"�#�" �:�ҧ]�ߜO���G�#�ڝh�9�:�9�7�6Z8M6�3�4>4�5��e;��/<r<=�+z()"+8*�'�)�&'�.s/�+d-�-Y,�,z.�2�1152P�;�2�������,�\�����ה�
�
�w�b���������}�r�a�����g�����4����}�!�����I���(���j���0�~�����ʼn���r�}���đl����g������B�ԡ��y>�>&>���=�?&?>S�S}TU�UaVW�W�PpQ
R�Rs3S�5S�4S�7S�2S�6SR7S4S5S�5S�8S���Z�
�4 � H&!�&r�?�~�����3���NVN\A�D�H%B�E:CD�Mr@%P;@�J�HSLGzG�N�O>O0�߂��8t�x�vu�w�}�{z�|�����bs�p�q�rApGX0b�_>]�Z�d�m�k_ig=��%i$U"�#�" �E�ݕ]�ߜO���G�#�ڝh�9�:�9�7�6Z8M6�3�4>4�5��e;��/<r<=�+z()"+8*�'�)�&'�.s/�+d-�-Y,�,z.�2�1152��G�/�ňה�`�j���ӎ'�<���������U�
������d�V�ْʼnđl����g����B�ԡ��y>�>&>���=�?&?>S�S}TU�UaVW�W�PpQ
R�Rx�P��PB�P��P
�Pm�P��P4�P��P��P`�P�P��PK�RH�R�R2�Rk�R��R�R��R��Rr�Rs�R��R(�R��R��R�R1�RoS
S�S�S~S1S�SWS�S�S6S�S�So	S�	S%
Sy
SFSS�S�
S�S�S�S�SjSQS7SS�S�S�SbS5S�S�SeSS�S4S�SOS�StS4 S� S%!S�!SA"S�"SF#S�#S%S�&S (S�)SE+S�,S�-S /S|1Qu0SN1S�������X����`�����<����n��7��������5�i����K����}�&���n���������������-�/�0�2e4V8�5�7�8�9;F<�==>�>~?"@VEHWM�I�K�M�NFPR�RW&Y�`2h�oVt��	#�Q���j��G&���B�9�(��G�]����R��!a%G�����������)�*,M-�.�/W0�0�P�QgRS�SsTU�U�V�W�ZB]�_�b4e�g�j&m�m�n��T�pts`t�q���Rv4w�f�!�&��!�y�p���P������{���������D�]�����4�=��D���T����
�G�B���d�x�P��PB�P��P
�Pm�P��P4�P��P��P`�P�P=�P��P��P`�P
5Q5Q	7Q";Q=<Qv@Q;CQ�FQ�IQfKQ�LQ�OQ�RQ
XQ�ZQ�_Q�aQ�cQ�eQ4hQ6lQqQ�tQ"xQ|{QQ\�QτQÆQ��QJ�Q>�Q[�Qx�Qi�Q,�Qg�Q\�Qd�Q��Q��Q��Q��Q˨Q��Q��Q=�QϯQ��QD�QW�Q^�Q��Qg�QK�Q��Q#�Q�Q��Q��Q��Qz�Q9�Q��Q�Q�Q��Q��Q��Q�Qh�Qb�QH�Q[�Q�Q��Q��Q��QE�Q��Q��Q�Q��Qt�Q9�Q2�Q%�Q��Q�Q��Q��Q�Q�Q��Qi�Q|�QR�RHR�R�R�R:R�R�RoRNR	R�	R�
R�RFR

R�
RmRARR�R�R�RZR�RZRSRVR�R�R�R�R#R�RR�R�R� R�!R"R�"R($R�%R7'R�'R�(R�)R\+RC,R�,R[-R�-R\.R�/R0R�0R�1R+2R�2R�4R�7R7;R7=R�AR�DR�FRyIR`MRPR�SR�UR)WR�XR_\R�^RbR6gR�jRnRqoR�pR9tRWwR9zR�|R��R��R݆R+�R�R��R��RD�R��R$�RޚR�R}�R��R�RZ�R�R7�R�R*�R �R��R�R��R��R`�R��Ry�R��R��R��P"�P3�P9�Pc�P��PF�P��P�PH�P�P0�P��P
�P��P��PJ�Pp�P�Pq�P�P;�P*Q�Q�Q?Q�Q�QZQ(Q�Q�Q>Q�QQ�Q�	Q�
Q�Q�Q�
Q�Q�Q�Q�Q�Q�Q�Q�QwQHQ/Q�Q�Q�Q`Q-Q�Q"Q�Q�Q6Q
 Qt Q!Q�!QX"Q�"Q�#Q$Qy%Q*'Q�(Q�*Q�+QZ-Q�.Q0Q|1Qu2Qb3Q�������X����`�����<����n��7��������5�i����K����}�&���n�������������)N)�*���+,�,�f���F/i1&2�3>5�6+:�;�<LC
F�F�H|JWL;O�P�e`S�TDV�Z�\�]N^�^�b%joqv�wvxy�y,z���zI�Q�����d�v���x�
3�-{<|�|�}�~��������(�ď\����ԕ�O���t�7����5���������#�����/�������z�L������`�P�����������2����������e�����_�v��x�B�8���X���ސ�$�Q�ړ���q�E��~��H���v����H	�
A��D�N�4��c9A���>+ !I��a�#�#>'��ߢ��-���G2�3x4t5}6�7�8:C;L<�=?e@JC�A�DF|G�I*KYL{M�N�O�U���\0
�
�vPpx�����,�S��3Xu�r!� hxv"�yzB{�{}y�}�~B5���z�Z�	�����"$S%�%n&����4�ى��"���αֲr�ܴQ�����w����D���r�����������T������i�������~��=������*(���-�/�0�2e4V8�5�7�8�9;F<�==>�>~?"@VEHWM�I�K�M�NFPR�RW&Y�`2h�oVt��	#�Q���j��G&���B�9�(��G�]����R��!a%G�����������)�*,M-�.�/W0�0�P�QgRS�SsTU�U�V�W�ZB]�_�b4e�g�j&m�m�n��T�pts`t�q���Rv4w�f�!�&��!�y�p���P������{���������D�]�����4�=��D���T����
�G�B���d�[�~�P'�PC�P��Pg�P��P��P��P1�P��P�P�P��Py�P��P�����(I
[Binary ELF content omitted: symbol, relocation, and address tables are not human-readable. The recoverable strings from the module's string table follow.]

Supported CPU identifiers (regex fragments), arch x86:
GenuineIntel-6-BE, GenuineIntel-6-(3D|47), GenuineIntel-6-56, GenuineIntel-6-4F, GenuineIntel-6-9[6C], GenuineIntel-6-CF, GenuineIntel-6-5[CF], GenuineIntel-6-7A, GenuineIntel-6-B6, GenuineIntel-6-A[DE], GenuineIntel-6-(3C|45|46), GenuineIntel-6-3F, GenuineIntel-6-7[DE], GenuineIntel-6-6[AC], GenuineIntel-6-3A, GenuineIntel-6-3E, GenuineIntel-6-2D, GenuineIntel-6-(57|85), GenuineIntel-6-BD, GenuineIntel-6-A[AC], GenuineIntel-6-1[AEF], GenuineIntel-6-2E, GenuineIntel-6-A7, GenuineIntel-6-2A, GenuineIntel-6-8F, GenuineIntel-6-AF, GenuineIntel-6-55-[01234], GenuineIntel-6-86, GenuineIntel-6-8[CD], GenuineIntel-6-2C, GenuineIntel-6-25, GenuineIntel-6-2F, GenuineIntel-6-(97|9A|B7|BA|BF), GenuineIntel-6-(1C|26|27|35|36), GenuineIntel-6-55-[56789ABCDEF], GenuineIntel-6-(37|4A|4C|4D|5A), GenuineIntel-6-(4E|5E|8E|9E|A5|A6), AuthenticAMD-23-[[:xdigit:]]+, AuthenticAMD-25-[[:xdigit:]]+, AuthenticAMD-26-[[:xdigit:]]+, AuthenticAMD-23-([12][0-9A-F]|[0-9A-F]), AuthenticAMD-25-([245][[:xdigit:]]|[[:xdigit:]])

Keyword arguments: pages, overwrite, timeout, inherit, sample_freq, read_format, disabled, exclude_idle, context_switch, inherit_stat, enable_on_exec, watermark, mmap_data, wakeup_events, bp_addr, cpustr, tracepoint ("Get tracepoint config.")

Event type constants: TYPE_HARDWARE, TYPE_SOFTWARE, TYPE_TRACEPOINT, TYPE_HW_CACHE, TYPE_RAW, TYPE_BREAKPOINT

Hardware counter constants: COUNT_HW_CPU_CYCLES, COUNT_HW_INSTRUCTIONS, COUNT_HW_CACHE_REFERENCES, COUNT_HW_CACHE_MISSES, COUNT_HW_BRANCH_INSTRUCTIONS, COUNT_HW_BRANCH_MISSES, COUNT_HW_BUS_CYCLES, COUNT_HW_STALLED_CYCLES_FRONTEND, COUNT_HW_STALLED_CYCLES_BACKEND

Cache counter constants: COUNT_HW_CACHE_L1D, COUNT_HW_CACHE_L1I, COUNT_HW_CACHE_LL, COUNT_HW_CACHE_DTLB, COUNT_HW_CACHE_ITLB, COUNT_HW_CACHE_BPU, COUNT_HW_CACHE_OP_READ, COUNT_HW_CACHE_OP_WRITE, COUNT_HW_CACHE_OP_PREFETCH, COUNT_HW_CACHE_RESULT_ACCESS, COUNT_HW_CACHE_RESULT_MISS

Software counter constants: COUNT_SW_CPU_CLOCK, COUNT_SW_TASK_CLOCK, COUNT_SW_PAGE_FAULTS, COUNT_SW_CONTEXT_SWITCHES, COUNT_SW_CPU_MIGRATIONS, COUNT_SW_PAGE_FAULTS_MIN, COUNT_SW_PAGE_FAULTS_MAJ, COUNT_SW_ALIGNMENT_FAULTS, COUNT_SW_EMULATION_FAULTS, COUNT_SW_DUMMY

Sample type flags: SAMPLE_IP, SAMPLE_TID, SAMPLE_TIME, SAMPLE_ADDR, SAMPLE_READ, SAMPLE_CALLCHAIN, SAMPLE_ID, SAMPLE_CPU, SAMPLE_PERIOD, SAMPLE_STREAM_ID, SAMPLE_RAW

Read format flags: FORMAT_TOTAL_TIME_ENABLED, FORMAT_TOTAL_TIME_RUNNING, FORMAT_ID, FORMAT_GROUP

Record types: RECORD_MMAP, RECORD_LOST, RECORD_COMM, RECORD_EXIT, RECORD_THROTTLE, RECORD_UNTHROTTLE, RECORD_FORK, RECORD_READ, RECORD_SAMPLE, RECORD_MMAP2, RECORD_AUX, RECORD_ITRACE_START, RECORD_LOST_SAMPLES, RECORD_SWITCH, RECORD_SWITCH_CPU_WIDE, RECORD_MISC_SWITCH_OUT

Python type names: perf.evlist, perf.evsel, perf.thread_map, perf.cpu_map, perf.context_switch_event, perf.sample_event, perf.read_event, perf.lost_event, perf.throttle_event, perf.comm_event, perf.task_event, perf.mmap_event

Event member names: sample_ip, sample_pid, sample_tid, sample_time, sample_addr, sample_id, sample_stream_id, sample_period, sample_cpu, next_prev_pid, next_prev_tid, lost, start, pgoff, filename; associated docstrings: event type, event pid, event tid, event timestamp, event addr, event id, event stream id, event period, event cpu, next/prev pid, next/prev tid, number of lost events, event ppid, event ptid, event misc, start of the map, map length, page offset, backing store

Event repr templates:
{ type: context_switch, next_prev_pid: %u, next_prev_tid: %u, switch_out: %u }
{ type: lost, id: %#llx, lost: %#llx }
{ type: mmap, pid: %u, tid: %u, start: %#llx, length: %#llx, offset: %#llx, filename: %s }
{ type: read, pid: %u, tid: %u }
{ type: %sthrottle, time: %llu, id: %llu, stream_id: %llu }
{ type: %s, pid: %u, ppid: %u, tid: %u, ptid: %u, time: %llu}
{ type: comm, pid: %u, tid: %u, comm: %s }

Error strings: "perf: Init failed!", "perf: can't parse sample, err=%d"

Method docstrings: "open the file descriptors.", "get_pollfd" / "get the poll file descriptor table.", "read_on_cpu" / "reads an event.", "mmap the file descriptor table.", "poll the file descriptor table.", "adds an event selector to the list.", "open the event selector file descriptor table."
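These strings match the Python binding shipped with the Linux kernel's perf tool (tools/perf). A minimal sketch of how that binding is typically driven, modeled on the kernel's twatch.py sample; the exact keyword arguments accepted by perf.evsel() here are assumptions inferred from the recovered strings, not an authoritative signature:

# Minimal sketch of driving the perf Python binding (modeled on
# tools/perf/python/twatch.py; keyword arguments are assumptions).
import perf

def main():
    cpus = perf.cpu_map()        # default: all online CPUs
    threads = perf.thread_map()  # default: the calling process
    evsel = perf.evsel(task=1, comm=1, wakeup_events=1, watermark=1,
                       sample_id_all=1,
                       sample_type=perf.SAMPLE_PERIOD | perf.SAMPLE_TID | perf.SAMPLE_CPU)
    evsel.open(cpus=cpus, threads=threads)   # "open the file descriptors."
    evlist = perf.evlist(cpus, threads)
    evlist.add(evsel)            # "adds an event selector to the list."
    evlist.mmap()                # "mmap the file descriptor table."
    while True:
        evlist.poll(timeout=-1)  # "poll the file descriptor table."
        for cpu in cpus:
            event = evlist.read_on_cpu(cpu)  # "reads an event."
            if not event:
                continue
            print(event)         # repr uses the templates listed above

if __name__ == '__main__':
    main()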
[Binary ELF content omitted: unwind tables (.eh_frame), relocation entries, PLT stubs, and address tables are not human-readable. The remaining recoverable strings:]

Embedded path: /sys/kernel/tracing

Type docstrings: "perf event selector list object.", "thread map object.", "cpu map object.", "perf context_switch event object.", "perf sample event object.", "perf read event object.", "perf lost event object.", "perf throttle event object.", "perf comm event object.", "perf task (fork/exit) event object.", "perf mmap event object."

SDT note strings: stapsdt, perftest_target
Debug link: perf.cpython-39-x86_64-linux-gnu.so.debug
Section names: .data, .rodata, .shstrtab, .dynamic, .note.gnu.build-id, .eh_frame, .stapsdt.base, .gnu.hash, .fini, .gnu_debuglink, .dynsym, .gnu.version, .rela.dyn, .data.rel.ro, .gnu.version_r, .eh_frame_hdr, .dynstr, .bss, .note.stapsdt, .init, .rela.plt, .got, .noinstr.text, .fini_array, .init_array
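The event objects named in those docstrings expose their fields as attributes. A short, hedged sketch of dispatching on the recovered record-type constants; the attribute names are taken from the member strings recovered above, and which of them exist depends on the event type:

# Sketch: dispatching on event type using the constants and member
# names recovered from the string table (availability varies by type).
import perf

def handle(event):
    if event.type == perf.RECORD_MMAP:
        # perf.mmap_event: start of the map, map length, backing store
        print("mmap", event.pid, event.tid, hex(event.start), event.filename)
    elif event.type == perf.RECORD_COMM:
        print("comm", event.pid, event.tid, event.comm)
    elif event.type in (perf.RECORD_FORK, perf.RECORD_EXIT):
        # perf.task_event carries pid/ppid and tid/ptid
        print("task", event.pid, event.ppid, event.tid, event.ptid)
    elif event.type == perf.RECORD_LOST:
        print("lost", event.id, event.lost)  # "number of lost events"
    else:
        print(event)  # fall back to the repr templates listed earlier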
